libor ( london inter bank overnight rates ) are the interest rates for eurodollar deposits .libor is one of the main instruments for interest rates in the debt market , and is widely used for multifarious purposes in finance .the main focus of this paper is on the properties of libor , and in particular finding a common measure that yields a martingale evolution for all libor .two other numeraires for the forward interest rates are also considered , namely the money market numeraire and the forward measure for bonds .all calculations are performed using the field theory for the forward interest rates that has been introduced in .the main advantage of modelling the forward interest rates using field theory is that there are infinitely many random variables at each instant driving the forward rates .in particular , for the case of libor rates , it will be shown , unlike the usual models in finance , a numeraire can be chosen so that * all * the libor instruments simultaneously have a martingale evolution .the price of any financial instrument in the future has to be discounted by a numeraire to obtain its current price .the freedom of choosing a numeraire results from the fact that for every numeraire there is a compensating drift such that the price of any traded instrument is independent of the numeraire .numeraire invariance is an important tool in creating models for the pricing of financial instruments , and is verified by using three numeraires for pricing an interest caplet . as expected , the price of the caplet is numeraire invariant . in section 2 the field theory of forward rates is briefly reviewed . in section 3the three numeraires are discussed , and the corresponding drift velocities are evaluated . in section 4the price of a mid - curve interest caplet is priced for the three numeraires , in section 5 put - call parity is derived for the three cases , in section 6 interest swaps are discussed , and with some conclusion drawn in section 7 .the field theory of forward rates is a general framework for modelling the interest rates that allows for a wide choice of evolution equation for the interest rates .the libor forward interest rates are the interest rates , fixed at time , for an instantaneous loan at future times .let be a two dimensional field driving the evolution of forward rates through time , defined by where is the drift of the forward interest rates that will be fixed by a choice of numeraire , and is the volatility that is fixed from the market .one is free to choose the dynamics of how the field evolves . integrating eq .[ fft ] yields where is the initial forward interest rates term structure that is specified by the market .the price of a libor bond , at present time , that matures at some future time is denoted by , and is defined in terms of the forward interest rates as follows . 
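As a point of reference, here is a minimal LaTeX sketch of the standard relations that the preceding paragraph describes in words; the notation (forward rate f(t,x), LIBOR bond B(t,T), tenor \ell, drift \alpha, volatility \sigma, driving field A) is assumed, since the explicit symbols were lost from the text above.

% a sketch, under the assumed notation, of the bond-forward-rate relation, the LIBOR
% definition, and the integrated evolution equation referred to above
\begin{align}
  B(t_0,T) &= \exp\Big(-\int_{t_0}^{T} dx\, f(t_0,x)\Big), \\
  L(t_0,T) &= \frac{1}{\ell}\left[\frac{B(t_0,T)}{B(t_0,T+\ell)}-1\right]
            = \frac{1}{\ell}\left[e^{\int_{T}^{T+\ell} dx\, f(t_0,x)}-1\right], \\
  \frac{\partial f(t,x)}{\partial t} &= \alpha(t,x) + \sigma(t,x)\,A(t,x)
  \;\;\Rightarrow\;\;
  f(t_*,x) = f(t_0,x) + \int_{t_0}^{t_*} dt\,\big[\alpha(t,x)+\sigma(t,x)A(t,x)\big].
\end{align}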
following baaquie and bouchaud , the lagrangian that describes the evolution of instantaneous libor forward rates is defined by three parameters , and is given by = -\frac{1}{2 } \left\{a^2(t , z)+\frac{1}{\mu^2 } \left ( \frac{\partial a(t , z)}{\partial z } \right)^2 + \frac{1}{\lambda^4 } \left ( \frac{\partial^{2}a(t , z)}{\partial^{2}z } \right)^2\right\}\ ] ] where market ( psychological ) future time is defined by .the lagrangian in eq .[ la ] contains a squared laplacian term that describes the stiffness of the forward rate curve .baaquie and bouchaud have determined the empirical values of the three constants , and have demonstrated that this formulation is able to accurately account for the phenomenology of libor interest rate dynamics .ultimately , all the pricing formulae for caps and floors depend on 1 ) the volatility function , 2 ) parameters contained in the lagrangian , and lastly 3 ) on the initial term structure .the action ] , are evaluated by integrating over all possible values of the quantum field .the quantum theory of the forward interest rates is defined by the generating ( partition ) function given by &= & e\big[e^{\int_{t_0}^{\infty } dt \int_0^\infty dz j(t , z)a(t , z)}\big]\nonumber\\ & \equiv & \frac{1}{z}\int da \ , \, e^{s[a]+\int_{t_0}^{\infty } dt \int_0^\infty dz j(t , z)a(t , z)}\nonumber\\ \label{zj } & = & \exp\big(\frac{1}{2}\int_{t_0}^{\infty } dt \int_0^{\infty } dz dz ' j(t , z)d(z , z';t)j(t , z')\big)\end{aligned}\ ] ] all financial instruments of the interest rates are obtained by performing a path integral over the ( fluctuating ) two dimensional quantum field .the expectation value for an instrument , say ] , is given by \end{aligned}\ ] ] in other words , the market measure is defined such that each libor is a martingale ; that is , for \end{aligned}\ ] ] in terms of the underlying forward interest rates , the libor s are given by the following and hence from eqs .[ mrktmt ] and [ mrktmt2 ] the martingale condition for libor can be written as \end{aligned}\ ] ] denote the drift for the market measure by , and let ; the evolution equation for the libor forward interest rates is given , similar to eq .[ frcfrwd ] , by hence =e^{f_0+\int_{\cal m } \alpha_l(t',x ) } \frac{1}{z } \int da e^{\int_{\cal m } \sigma(t',x)a(t',x)}e^{s[a]}\end{aligned}\ ] ] where the integration domain is given in fig .[ doml ] .hence , from from eqs . 
[ zj ] , [ mrtnmkt1 ] and [ mrtnmkt2 ] }\nonumber\\ & = & \exp\{\frac{1}{2}\int_{t_0}^{t_*}dt \int_{t_n}^{t_n+\ell } dx dx'\sigma(t , x)d(x , x';t)\sigma(t , x')\}\end{aligned}\ ] ] hence the libor drift velocity is given by the libor drift velocity is * negative * , as is required for compensating growing payments due to the compounding of interest .there is a discontinuity in the value of at forward time ; from its definition approaching the value from , the discontinuity is given by since the time - interval for libor days is quite small , one can approximate the drift by the following since the normalization of the volatility function can always be chosen so that .the value of discontinuity at is then approximately given by fig .[ drift ] shows the behaviour of the drift velocity , with the value of taken from the market , .one can see from the graph that , in a given libor interval , the drift velocity is approximately linear in forward time and the maximum drift goes as , both of which is expected from eq .[ driftapp ] .it is often convenient to have a discounting factor that renders the futures price of libor bonds into a martingale .consider the libor forward bond given by the forward numeraire is given by ; the drift velocity is fixed so that the future price of a libor bond is equal to its forward value ; hence \end{aligned}\ ] ] in effect , as expressed in the equation above , the forward measure makes the forward libor bond price a martingale . to determine the corresponding drift velocity , the right side of eq .[ marfor1 ] is explicitly evaluated .note from eq .[ frcfrwd ] &=&e^{-\int_{t_n}^{t_{n+1}}dx f(t_0,x)-\int_{\cal m } \alpha_f(t',x ) } \int da e^{-\int_{\cal m } \sigma(t',x)a(t',x)}e^{s[a]}\nonumber\end{aligned}\ ] ] where the integration domain is given in fig .[ doml ] .hence , from eqs . [ zj ] , [ marfor1 ] and [ marfor2 ] }\nonumber\\ & = & \exp\{\frac{1}{2}\int_{t_0}^{t_*}dt \int_{t_n}^{t_{n+1 } } dx dx'\sigma(t , x)d(x , x';t)\sigma(t , x')\}\end{aligned}\ ] ] hence the drift velocity for the forward measure is given by the libor drift velocity is the negative of the drift for the forward measure , that is fig .[ drift ] shows the behaviour of the drift velocity . in heath , jarrow , and morton ,the martingale measure was defined by discounting treasury bonds using the money market account , with the money market numeraire defined by for the spot rate of interest .the quantity is defined to be a martingale \nonumber\\ \label{mmmar } \rightarrow b(t , t)&=&e_m\big[e^{-\int_{t}^{t_*}r(t')dt'}b(t_*,t)\big]\end{aligned}\ ] ] where $ ] denotes expectation values taken with respect to the money market measure . the martingale condition can be solved for it s corresponding drift velocity , which is given by interest rate cap is composed out of a linear sum of individual caplets . 
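For orientation, the following is a hedged reconstruction of the three drift velocities discussed in this section; the explicit formulas were lost from the text above, so the integration limits and the propagator D(x,x';t) are assumed to follow the martingale conditions quoted here, and this should be read as a sketch rather than as the paper's exact expressions.

% hedged reconstruction of the drifts, for forward time x in the LIBOR interval [T_n, T_n+\ell]
\begin{align}
  \text{LIBOR market measure:}\quad
  &\alpha_L(t,x) = -\,\sigma(t,x)\int_{T_n}^{x} dx'\, D(x,x';t)\,\sigma(t,x'), \\
  \text{forward measure:}\quad
  &\alpha_F(t,x) = +\,\sigma(t,x)\int_{T_n}^{x} dx'\, D(x,x';t)\,\sigma(t,x') \;=\; -\,\alpha_L(t,x), \\
  \text{money market (HJM) measure:}\quad
  &\alpha_M(t,x) = \sigma(t,x)\int_{t}^{x} dx'\, D(x,x';t)\,\sigma(t,x').
\end{align}

The first two lines make explicit the statement above that the LIBOR drift is negative and equal in magnitude to the forward-measure drift, and that it restarts at each point T_n, which produces the discontinuity discussed in the text.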
the pricing formula for an interest rate caplet is obtained for a general volatility function and propagator that drive the underlying libor forward rates .a mid - curve caplet can be exercised at any fixed time that is less then the time at which the caplet matures .denote by the price at time of an interest rate european option contract that must be exercised at time for an interest rate caplet that puts an upper limit to the interest from time to .let the principal amount be equal to , and the caplet rate be .the caplet is exercised at time , with the payment made in arrears at time .note that although the payment is made at time , the * amount * that will be paid is fixed at time .the various time intervals that define the interest rate caplet are shown in fig.[times ] .the payoff function of an interest rate caplet is the value of the caplet when it matures , at , and is given by + \\ & = & \ell v \big[\frac{b(t_*,t_n)- b(t_*,t_n+\ell)}{\ell}-kb(t_*,t_n+\ell)\big]_+ \nonumber\\ \label{payoffcaplet } & = & \tilde{v } b(t_*,t_n+\ell)\big(xe^{f_*}-1 \big)_+ \end{aligned}\ ] ] where recall from eq .[ marfor2 ] the payoff for an interest rate floorlet is similarly given by + \nonumber\\ \label{payofffloor } & = & \tilde{v } b(t_*,t_n+\ell)\big(1-xe^{f _ * } \big)_+ \end{aligned}\ ] ] as will be shown in section [ pcsec ] , the price of the caplet automatically determines the price of a floorlet due to put - call parity , and hence the price of the floorlet does not need an independent derivation .an interest rate cap of a duration over a longer period is made from the sum over the caplets spanning the requisite time interval .consider a mid - curve cap , to be exercised at time , with strike price from time to time , and with the interest cap starting from time and ending at time ; its price is given by and a similar expression for an interest rate floor in terms of the floorlets for a single libor interval .the numeraire for the forward measure is given by the libor bond .hence the caplet is a martingale when discounted by ; the price of the caplet at time is consequently given by \nonumber\\ & = & \tilde{v}e_f\big(x - e^{-f_*}\big)_+\nonumber\end{aligned}\ ] ] hence , in agreement with eq .[ payoffcaplet ] , the price of a caplet is given by the payoff function for the caplet given in eq .[ capletfm1 ] above for the interest caplet has been obtained in and using a different approach .the price of the caplet is given by from the derivation given in , the pricing kernel is given by the price of the caplet is given by the following black - scholes type formula \end{aligned}\ ] ] where is the cumulative distribution for the normal random variable with the following definitions \\end{aligned}\ ] ] the libor market measure has as its numeraire the libor bond ; the caplet is a martingale when discounted by this numeraire , and hence the price of the caplet at time is given by \nonumber\\ & = & \tilde{v}e_l\big(xe^{f_*}-1\big)_+\nonumber \\\label{capletlmm } \rightarrow caplet(t_0,t_*,t_n)&=&\tilde{v}b(t_0,t_n+\ell)e_l\big(xe^{f_*}-1\big)_+\end{aligned}\ ] ] where , similar to the derivation given in , the price of the caplet is given by for the pricing kernel is given by the price of the caplet obtained from the forward measure is equal to the one obtained using the libor market measure since , from eqs .[ capletfm ] and [ kerfm ] , one can prove the following remarkable result the identity above shows how the three factors required in the pricing of an interest rate caplet , namely the 
discount factors , the drift velocities and the payoff functions , all ` conspire ' to yield numeraire invariance for the price of the interest rate option .the payoff function is correctly given by the price of the caplet , since in the limit of , eq .[ qsqr ] yields where is a constant , and .hence , from eqs . [ pricecapl ] and [ psil ] verifying the payoff function as given in eq .[ payoffcaplet ] .the money market numeraire is given by the spot interest rate .expressed in terms of the money martingale numeraire , the price of the caplet is given by \nonumber\\ \rightarrow caplet(t_0,t_*,t_n)&=&e_m\big[e^{-\int_{t_0}^{t_*}dt r(t)}caplet(t_*,t_*,t_n)\big]\nonumber\end{aligned}\ ] ] to simplify the calculation , consider the change of numeraire from to discounting by the treasury bond ; it then follows that where the drift for the action is given by in terms of the money market measure , the price of the caplet is given by \\ & = & b(t_0,t_*)e_m^*\big[caplet(t_*,t_*,t_n)\big]\nonumber\\ & = & \tilde{v}b(t_0,t _ * ) e_m^*\big[b(t_*,t_n+\ell)\big(xe^{f_*}- 1\big)_+\big]\nonumber\end{aligned}\ ] ] from the expression for the forward rates given in eq .[ frcfrwd ] the price of the caplet can be written out as follows \nonumber\\ & = & \tilde{v}b(t_0,t_n+\ell)e^{-\int_{\cal r}\alpha_*}~ \frac{1}{z}\int da e^{-\int_{\cal r}\sigma a}e^{s _ * } \big(xe^{f_*}-1 \big)_+\end{aligned}\ ] ] where the integration domain is given in fig .[ domr ] .the payoff can be re - expressed using the dirac delta - function as follows from eq .[ frcfrwd ] , and domain of integration given in fig .[ doml ] , one obtains hence , from eqs.[mnymrft ] and [ drdelfn ] the price of the caplet , for , is given by \nonumber\\ & = & \tilde{v } b(t_0,t_n+\ell)e^{-\int_{\cal r}\alpha_*}~\int dg \int \frac{d\xi}{2\pi } e^{i\xi(g - f_0-\int_{{\cal m } } \alpha_*)}\big(xe^{g}-1 \big)_+ \times \nonumber\\ & & \frac{1}{z}\int da e^{-\int_{\cal r}\sigma a}e^{-i\xi\int_{{\cal m}}\sigma a } e^{s_*}\end{aligned}\ ] ] to perform path integral note that and the gaussian path integral using eq .[ zj ] yields where the expression for above , using the definition of given in eqs .[ qsqr ] and [ alpstr ] respectively , can be shown to yield the following simplifying eq .[ capletmm ] using eq .[ gamma ] yields the price of the caplet as given by hence we see that the money market numeraire yields the same price for the caplet as the ones obtained from the forward and libor market measure , as expected , but with a derivation that is very different from the previous ones .put - call parity for caplets and floorlets is a model independent result , and is derived by demanding that the prices be equal of two portfolios having identical payoffs at maturity formed out of a caplet and the money market account on the one hand , and a floorlet and futures contract on the other hand .failure of the prices to obey the put - call parity relation would then lead to arbitrage opportunities .more precisely , put - call parity yields the following relation between the price of a caplet and a floorlet where the other two instruments are the money market account and a futures contract .re - arranging eq .[ pcmip ] and simplifying yields \\ & = & \mathrm{value~of~swaplet}\nonumber\end{aligned}\ ] ] the right hand side of above equation is the price , at time , of a forward or deferred swaplet , which is an interest rate swaplet that matures at time ; swaps are discussed in section [ swaps ] . 
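As a numerical illustration of the Black-Scholes-type caplet formula quoted earlier in this section, here is a minimal Python sketch in the familiar Black-76 form. It assumes that the field-theory result enters only through the effective squared volatility q^2 = \int dt \int dx\,dx' \sigma(t,x) D(x,x';t)\sigma(t,x'); the function and variable names are illustrative and are not taken from the paper.

from math import erf, log, sqrt

def norm_cdf(x):
    """standard normal cumulative distribution function"""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_caplet(bond_pay, libor_fwd, cap_rate, q, ell, principal=1.0):
    """Black-76-style caplet price (a sketch, not the paper's exact expression):
    bond_pay  -- zero-coupon bond B(t0, T_n + ell) discounting to the payment date
    libor_fwd -- forward LIBOR L(t0, T_n) for the period [T_n, T_n + ell]
    cap_rate  -- the cap (strike) rate K
    q         -- effective volatility, q^2 = int dt dx dx' sigma D sigma in the field theory
    """
    d1 = (log(libor_fwd / cap_rate) + 0.5 * q * q) / q
    d2 = d1 - q
    return ell * principal * bond_pay * (libor_fwd * norm_cdf(d1) - cap_rate * norm_cdf(d2))

# example: a 90-day caplet (ell = 0.25) struck at the money, with q = 0.10
print(black_caplet(bond_pay=0.97, libor_fwd=0.05, cap_rate=0.05, q=0.10, ell=0.25))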
in this sectiona derivation is given for put - call parity for ( libor ) options ; the derivation is given for the three different numeraires , and illustrates how the properties are essential for the numeraires to price the caplet and floor so that they satisfy put - call parity .the payoff for the caplet and a floorlet is generically given by where the heaviside step function is defined by the derivation of put - call parity hinges on the following identity since it yields , for the difference of the payoff functions of the put and call options , the following the price of a caplet and floorlet at time is given by discounting the payoff functions with the discounting factor of . from eq .[ capletfm ] \nonumber\\ & = & \tilde{v}b(t_0,t_n ) e_f\big(x - e^{-f_*}\big)_+\nonumber\end{aligned}\ ] ] and the floorlet is given by consider the expression \nonumber\\ \label{pcfm } & = & \tilde{v}b(t_0,t_n)e_f\big(x - e^{-f_*}\big)\end{aligned}\ ] ] where eq .[ pcparity ] has been used to obtain eq .[ pcfm ] . for the forward measure , from eq .[ marfor1 ] =e^{-f_0}\end{aligned}\ ] ] hence , since for constant we have , from above equation and eq .[ pcfm ] , the price of a caplet and floorlet obeys the put - call relation and yields eq .[ pcmip ] as expected .the price of a caplet for the libor market measure is given from eq .[ capletlmm ] by and the floorlet is given by hence , similar to the derivation given in eq.[pcfm ] , we have for the libor market measure , from eq.[mrtnmkt1 ] =e^{f_0 } \nonumber\end{aligned}\ ] ] and hence equation above , together with eq .[ pclmm1 ] , yields the expected eq .[ pcmip ] put - call parity relation the money market measure has some interesting intermediate steps in the derivation of put - call parity .recall the caplet for the money market measure is given from eq .[ capletmm ] as \nonumber\end{aligned}\ ] ] using the definition of the payoff function for a caplet given in eq .[ payoffcaplet ] yields +\big ) \nonumber\end{aligned}\ ] ] the price of the floor is given by + \big)\nonumber\end{aligned}\ ] ] consider the difference of put and call on a caplet ; similar to the previous cases , using eq .[ theta ] yields the following \big)\end{aligned}\ ] ] the martingale condition given in eq .[ mmmar ] yields the expected result given in eq .[ pcmip ] that \nonumber\\ & = & \tilde{v}b(t_0,t_n+\ell)(xe^{f_0}-1)\nonumber\\ & = & \ell vb(t_0,t_n+\ell)(l(t_0,t_n)-k)\nonumber\end{aligned}\ ] ] to obtain put - call parity for the money market account , unlike the other two cases , * two * instruments , namely and , have to be martingales , which in fact turned out to be the case for the money market numeraire .an interest swap is contracted between two parties .payments are made at fixed intervals , usually or days , denoted by , with the contract having notional principal , and a pre - fixed total duration , with the last payment made at time .a swap of the first kind , namely swap , is where one party pays at a fixed interest rate on the notional principal , and the other party pays a floating interest rate based on the prevailing libor rate .a swap of the second kind , namely swap , is where the party pays at the floating libor rate and receives payments at fixed interest rate . 
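Collecting the results of this section, and anticipating the swap valuation that follows, the parity relation and the par swap rate it leads to can be summarized compactly; this is a sketch in the assumed notation, since the explicit symbols were lost above.

\begin{align}
  (F)_+ - (-F)_+ &= F
  \;\;\Longrightarrow\;\;
  \mathrm{caplet}(t_0,t_*,T_n) - \mathrm{floorlet}(t_0,t_*,T_n)
   = \ell V\, B(t_0,T_n+\ell)\,\big[L(t_0,T_n)-K\big],\\
  r_P(t_0) &= \frac{B(t_0,T_0)-B(t_0,T_N+\ell)}{\ell\sum_{n=0}^{N}B(t_0,T_n+\ell)}
  \qquad \text{(par fixed rate that sets both swaps to zero at initiation).}
\end{align}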
to quantify the value of the swap ,let the contract start at time , with payments made at fixed interval , with times .consider a swap in which the payments at the fixed rate is given by ; the values of the swaps are then given by \nonumber\\ \mathrm{swap}_{ii}(t_0 , r_s)=v\big [ \ell r_s\sum_{n=0}^n b(t_0,t_n+\ell)+b(t , t_n+\ell)-1\big]\end{aligned}\ ] ] the par value of the swap when it is initiated , that is at time , is zero ; hence the par fixed rate , from eq .[ swap12 ] , is given by recall from eqs .[ capfllt ] and [ capprice ] that a cap or a floor is constructed from a linear sum of caplets and floorlets .the put - call parity for interest rate caplets and floorlets given in eq . [ capfllt ] in turn yields \nonumber\\ & = & \ell v\sum_{n=0}^n b(t_0,t_n+\ell)\big[l(t_0,t_n)-k\big]\end{aligned}\ ] ] the price of a swap at time is similar to the forward price of a treasury bond , and is called a forward swap or a deferred swap . can also be priced and is given in ; however , for the case of a swaption , this case is not relevant . ]put - call parity for caps and floors gives the value of a forward swap , and hence \\ \label{swap2 } \mathrm{swap}_{ii}(t_0 , r_s)=\ell v\sum_{n=0}^n b(t_0,t_n+\ell)\big[r_s - l(t_0,t_n)\big]\end{aligned}\ ] ] the value of the swaps , from eqs .[ swap1 ] and [ swap2 ] , can be seen to have the following intuitive interpretation : at time the value of swap is the difference between the floating payment received at the rate of , and the fixed payments paid out at the rate of .all payments are made at time , and hence for obtaining its value at time need to be discounted by the bond . the definition of given in eq .[ liborfrbond ] yields the following \nonumber\\ & = & v \big [ b(t_0,t_0)-b(t_0,t_n+\ell)\big]\nonumber\\ \label{swap1fl } & \rightarrow&v\big [ 1-b(t_0,t_n+\ell)\big]~~\mathrm{for}~t_0=t_0\end{aligned}\ ] ] hence , from eq . [ swap1 ] \end{aligned}\ ] ] with a similar expression for . note that the forward swap prices , for , converge to the expressions for swaps given in eqs .[ swap1 ] and [ swap1 ] . at time the par value for the fixed rate of the swap , namely , is given by both the forward swaps being equal to zero .hence we have obtained the anticipated result that the par value for the forward swap is fixed by the forward bond prices , and converges to the par value of the swap when it matures at time . in summary , put - call parity for cap and floor , from eqs .[ capflrprty ] and [ swap1 ] yields , for as expected .a common libor market measure was derived , and it was shown that a single numeraire renders all libor into martingales .two other numeraires were studied for the forward interest rates , each having its own drift velocity .all the numeraires have their own specific advantages , and it was demonstrated by actual computation that all three yield the same price for an interest rate caplet , and also satisfy put - call parity as is necessary for the prices interest caps and floors to be free from arbitrage opportunities . the expression for the payoff function for the caplet given in eq . 
[ payoffcaplet2 ] is seen to be the correct one, as it reproduces the payoff functions that are widely used in the literature, yields a pricing formula for the interest rate caplet that is numeraire invariant, and satisfies the requirement of put-call parity as well. An analysis of swaps shows that put-call parity for caps and floors correctly reproduces the forward swap, as expected. I am greatly indebted to Sanjiv Das for sharing his insights on LIBOR with me, which were instrumental in clarifying this subject. I would like to thank Mitch Warachka for a careful reading of the manuscript, and Cui Liang for many discussions.
The main result of this paper is that a martingale evolution can be chosen for LIBOR such that all the LIBOR interest rates have a common market measure; the drift is fixed such that each LIBOR has the martingale property. LIBOR is described using a field theory model, and a common measure is seen to emerge naturally for such models. To elaborate how the martingale for LIBOR belongs to the general class of numeraires for the forward interest rates, two other numeraires are considered, namely the money market measure, which makes the evolution of the zero coupon bonds a martingale, and the forward measure, for which the forward bond price is a martingale. The price of an interest rate cap is computed for all three numeraires and is shown to be numeraire invariant. Put-call parity is discussed in some detail and shown to emerge due to some non-trivial properties of the numeraires. Some properties of swaps, and their relation to caps and floors, are briefly discussed.
recent research carried out on competing interacting networks does not take into account the fact that real - world networks often compete not only to survive but also to take over or even destroy their competitors .for example , in international politics and economics , when one country imposes economic sanctions on another , feedback mechanisms can cause the country imposing the sanctions to also be adversely affected .the decision by a wealthier country to keep military spending at a high level long enough to exhaust its poorer competitor can also contribute to its own exhaustion .similarly , in warfare , any attack depletes the resources of the attacking force and can elicit a counter - attack from the competing force .also , in nature , an incursion between species can alter the dynamics of predator - prey interaction .although , these competing interactions are a widespread real - world phenomenon , current studies analyze only effects of attack on attacked networks , but disregarding its effect on the external attacking network . for example , for both single and interactive networks , existing studies on network robustness report that every network , regardless of the size and architecture , eventually can be destroyed .but , what then prevents a network from attacking a weaker competitor or , what is the optimal moment for initiating or ending an attack ? in order to identify the factors that inhibit a network from attacking and demolishing a weaker competitor and to determine the optimal moment and duration of an attack , we develop a theoretical framework that quantifies the cost of an attack by connecting the feedback mechanisms and resilience dynamics between two competing dynamic networks with differing levels of resilience introduce a general methodology that can be applied to networks of any size and structure .first , as an illustrative example , we describe two competing barabsi - albert ( ba ) networks which we designate network s and network w. this model differs from the single network ba model in that the two interconnected networks have both intra - network and inter - network links . one real - world example of this kind of network interaction is firms in an economic network that link with other firms both domestically and abroad . using the preferential attachment ( pa ) rule , we generate networks s and w starting with nodes in each network . at each time stepwe add a new node that connects with existing nodes in network s and with existing nodes in network w , where the probability of each connection depends on the total node degrees in networks s and w. similarly , using the pa rule we connect a new node in network w with nodes inside network w and with nodes in network s. in a broad class of real - world networks , nodes can fail either due to inherent reason or because their functionality depends on their neighborhood .hence , any node in either of the two networks , e.g. 
, a node inside network s with neighbors in its own network and neighbors in network w , can fail at any moment , either internally independent of other nodes with a probability or externally with a probability .node externally fails with a probability when , similar to the watts model , the total fraction of its active neighbors is less than or equal to a fractional threshold which is equal for all nodes in both networks .the larger the value , the less resilient the network .we assume that one of the two networks is more resilient than the other , distinguishing between strong network s and weak network w. we do so by assigning different fractional thresholds to the strong and weak networks , and , respectively , with .as in ref . , we assume that an internally - failed node in network s or network w recovers from its last internal failure after a period . consecutive failures of the same node stretch the effective failure times and introduce heterogeneity into the distribution of inactivity periods .since in real - world networks it is dangerous for nodes to be inactive , we allow the strong network to take over nodes in the weak network when a node spends more time in internal failure than , where is a constant .figure [ 10 ] qualitatively shows the interaction process .we quantify the current collective state of the strong and weak networks in terms of the fraction of active nodes , and , respectively .we assume that initially on both networks have internal and external failure probability values of and , respectively .figure [ 1](a ) shows a two - parameter phase diagram for each network in which the hysteresis is composed of two spinodals separating two collective states , i.e. , the primarily `` active '' and the primarily `` inactive . ''figure [ 1](b ) shows that increasing the value of leads to catastrophic first - order phase transitions in both networks .when each network recovers ( i.e. , when is decreased to previous values ) , the fraction of active nodes returns to an upper state . nevertheless , the critical point in the recovery is well beyond the point at which the network collapses .figure [ 1](b ) also shows ( solid line ) that the initial choice of parameters makes network s more resilient to network fluctuations in the value of and that the fluctuation needed to initiate the collapse of network s ( ) is much larger than the fluctuation needed to initiate the collapse of network w ( ) .furthermore , network w is closer to a critical transition than network s. 
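A minimal Python sketch of the coupled preferential-attachment construction just described is given below; the parameter names, seed sizes, and default values are illustrative assumptions, not values from the paper.

import random

def grow_coupled_ba_networks(n_new, m_intra=2, m_inter=1, n0=5, seed=1):
    """Grow two interconnected Barabasi-Albert-type networks S and W together.
    At each step one node is added to S and one to W; each new node attaches,
    by preferential attachment on total degree, to m_intra existing nodes of its
    own network and m_inter existing nodes of the other network (a sketch of the
    rule described above)."""
    rng = random.Random(seed)
    adj = {}
    for net in ('S', 'W'):                       # seed each network with a small ring
        for i in range(n0):
            adj[(net, i)] = set()
        for i in range(n0):
            a, b = (net, i), (net, (i + 1) % n0)
            adj[a].add(b); adj[b].add(a)

    def preferential_targets(net, k):
        nodes = [v for v in adj if v[0] == net]
        weights = [len(adj[v]) for v in nodes]   # attachment probability ~ total degree
        targets = set()
        while len(targets) < min(k, len(nodes)):
            targets.add(rng.choices(nodes, weights=weights)[0])
        return targets

    for i in range(n0, n0 + n_new):
        for net, other in (('S', 'W'), ('W', 'S')):
            new = (net, i)
            targets = preferential_targets(net, m_intra) | preferential_targets(other, m_inter)
            adj[new] = set()
            for tgt in targets:
                adj[new].add(tgt); adj[tgt].add(new)
    return adj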
[Figure [1] caption: The more resilient network S endangers and partially destroys its own nodes by increasing their internal failure probability in order to more severely damage the less resilient network W. Each of S and W has a hysteresis composed of two spinodals, representing the attacking and the recovery phase; the recovery time is held fixed and the takeover and cost mechanisms are disregarded. (a) Attacking strategy between two competing BA networks. Strong network S wants to bring W into the region of parameter space between the hystereses of W (black lines) and S (white lines), where S is predominantly active and W is predominantly inactive (see (b)). Dark red (blue) marks the parameter space where both S and W are active (inactive). (b) Fraction of active nodes in the strong (blue lines) and weak (red lines) networks as a function of the internal failure probability. The hysteresis is a result of increasing the internal failure probability from zero to one and then decreasing it back to zero; the increase accounts for the attacks and the decrease for a repair of the network. (c) Same case as (a) but for two randomly connected competing Erdős-Rényi networks. (d) Same case as (c) but with assortative mixing in the connections between the networks: nodes link, with some probability, with nodes of similar degree in the other network.]

Because network S has a higher resilience than network W and can more easily withstand fluctuations, S could induce the collapse of W by increasing the internal failure probability, but only if the fraction of its own active links is not dramatically reduced. Figure [1](b) shows how, when network S attacks network W by increasing the internal failure probability, the weak network becomes abruptly dysfunctional. Figure [1](b) also shows that when the values are reset to their pre-attack levels the collapse of network W is permanent (red dashed line) and, if it ceases its attack, the recovery of network S is complete and all of its inactive nodes are reactivated (see the blue dashed line in figure 2(b)). Similarly, when economic sanctions in a financial system are lifted, the weak economies are not restored but the strong economies recover after suffering little damage. Figure [1](c) shows a modified competing network structure in which there are two interconnected Erdős-Rényi networks with inter-network links chosen randomly. Although this structure differs quantitatively from the phase diagram of competing BA networks, the same kind of transition occurs in the random configuration, which indicates the generality of these critical transitions in competing networks. We obtain similar results when degree-degree correlations are introduced between the links connecting both networks. Figure [1](d) shows nodes in the strong network linking with nodes in the weak network only when they are of similar degree (i.e., "assortative mixing"). As in the other configurations, the better position of the attacker enables the strong network to destroy the weak one and then return safely to its initial state.
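Returning to the failure-and-recovery dynamics defined above, and applicable to any of the topologies just discussed (BA, Erdős-Rényi, or assortatively coupled), a minimal synchronous-update sketch in Python follows; the state encoding and parameter names are assumptions made for illustration.

import random

def update(adj, state, last_int_fail, t, p_int, p_ext, tau, threshold):
    """One synchronous step of the two-network dynamics sketched above.
    state[v] is 'active', 'int_failed' or 'ext_failed'; threshold maps the
    network label ('S' or 'W') to its fractional threshold."""
    prev = dict(state)
    for v in adj:
        # internally failed nodes recover tau steps after their last internal failure
        if prev[v] == 'int_failed' and t - last_int_fail.get(v, -tau) >= tau:
            prev[v] = 'active'
    new = {}
    for v in adj:
        if prev[v] == 'int_failed':
            new[v] = 'int_failed'
            continue
        neigh = adj[v]
        frac_active = (sum(prev[u] == 'active' for u in neigh) / len(neigh)) if neigh else 1.0
        if random.random() < p_int:                        # internal failure
            new[v], last_int_fail[v] = 'int_failed', t
        elif frac_active <= threshold[v[0]] and random.random() < p_ext:
            new[v] = 'ext_failed'                          # neighbourhood-induced failure
        else:
            new[v] = 'active'                              # externally failed nodes heal
                                                           # once their neighbourhood recovers
    return new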
Using mean-field theory we analytically describe the attack-and-recovery process between two interconnected networks with random regular topologies, where all nodes within the same network have the same degree. We assume that each node in network S is linked with a fixed number of nodes in its own network and a fixed number of nodes in network W; similarly, each node in network W is linked with a fixed number of nodes inside network W and a fixed number of nodes in network S. In both networks the fraction of failed nodes is one minus the fraction of functional nodes. We approximate the fraction of failed nodes in each network by combining the average fraction of internally failed nodes with the probability that a node has externally failed, where the absolute threshold of network S is simply related to its fractional threshold: a node in network S can externally fail, with a given probability, only when the number of active neighbors in both network S and network W is less than or equal to this absolute threshold. Similarly, we obtain the expression for network W by replacing S with W, and vice versa, in eq. ([a2]). Finally, we set network S to be more resilient than network W by assigning it the lower fractional threshold.

[Figure [2] caption: Strong network S deliberately increases its internal failure probability to cause damage in weak network W and, unavoidably, to induce a partial self-destruction. (a) Fraction of active nodes in networks S and W: strong network S (blue) deliberately initiates its own failures to create larger damage in the weaker (more vulnerable) network W (red); the fraction of active nodes exhibits a hysteresis behavior for both networks, with a critical point. (b) When should S stop attacking W by increasing its probability of internal failure? Shown are the fractions of failed nodes; between points C and D (dashed lines), an increase of the internal failure probability induces more failures in the weaker network, leading to a comparative benefit, while beyond point D the attack is not worthwhile for network S since it suffers the consequences more intensely than its competitor.]

[Figure [33] caption: (a) Threshold of network S as a function of time for two competing networks. Fluctuations in the evolution of the threshold are a consequence of the degree of the acquired node: the higher the degree, the higher the increase of the threshold. (b) Fraction of takeovers and final threshold as a function of the time required to acquire a node from the weak network. As this takeover time increases, the number of takeovers decreases to zero; at the same time, the resilience of network S tends to its initial value. (c) Due to takeovers, the fraction of active nodes in the more resilient network S can increase to values higher than one. In this example, network W is irreversibly damaged after the internal failure probability is restored to its initial value.]

The analytical results of fig. [2](a) indicate that when network S increases the internal failure probability in an effort to damage network W, it also causes partial damage to itself. Although at first increasing the internal failure probability appears to remove more active nodes in network S than in network W, beyond a certain point the fraction of active nodes in network W drops sharply and eventually vanishes; this attack strategy by network S is thus effective. If, however, the internal failure probability is pushed too far, network S itself undergoes a first-order transition that leads to collapse, a situation that network S clearly must avoid. Inspecting the recovery, when the internal failure probability is returned to its pre-attack values, we find that the fraction of active nodes in both networks exhibits a hysteresis behavior; once the transition point is surpassed, neither network is able to restore its functioning to the levels previous to the attack. The analytical results indicate that attacking is effective for network S only for certain values of the internal failure probability: network S should increase it only as long as the damage to network W continues to be greater than the damage to itself. Figure [2](b) shows the region in which attacks by network S are effective by plotting the fraction of failed nodes in both networks in a two-dimensional phase space as the internal failure probability is increased. Two solid lines with a slope of one indicate the region in which an attack by network S is effective: when the slope of the curve is greater than one (the region between the two lines), increasing the internal failure probability produces more damage in network W than in network S and is thus an effective attack strategy.
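The mean-field self-consistency described at the start of this section can be iterated numerically; the sketch below assumes a binomial closure over independent neighbors in each network and uses illustrative parameter names, so it should be read as one plausible reading of eq. ([a2]) rather than the exact published form.

from math import comb

def p_ext_fail(a_own, a_other, k_own, k_other, m_abs):
    """probability that, out of k_own neighbours in the same network (each active
    with probability a_own) and k_other neighbours in the other network (active
    with probability a_other), at most m_abs are active -- the external-failure condition"""
    total = 0.0
    for j1 in range(k_own + 1):
        for j2 in range(k_other + 1):
            if j1 + j2 <= m_abs:
                total += (comb(k_own, j1) * a_own**j1 * (1 - a_own)**(k_own - j1)
                          * comb(k_other, j2) * a_other**j2 * (1 - a_other)**(k_other - j2))
    return total

def mean_field(p_int_s, p_int_w, p_ext, k_ss, k_sw, k_ww, k_ws, m_s, m_w, n_iter=5000):
    """fixed-point iteration for the active fractions a_S, a_W, starting on the active branch"""
    a_s = a_w = 1.0
    for _ in range(n_iter):
        f_s = p_int_s + (1 - p_int_s) * p_ext * p_ext_fail(a_s, a_w, k_ss, k_sw, m_s)
        f_w = p_int_w + (1 - p_int_w) * p_ext * p_ext_fail(a_w, a_s, k_ww, k_ws, m_w)
        a_s, a_w = 1.0 - f_s, 1.0 - f_w
    return a_s, a_w

Sweeping p_int_s upward and then back down in such an iteration reproduces the hysteresis behavior described in the text.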
in order to measure the effect of capturing nodes from a competitor network and how takeovers can modify the resilience properties of a network , we design a model in which network s is again more resilient than network w ( ) and where node of network w is taken over by network s if its internal failure time exceeds , where is a certain failure time and a constant .note that the longer a node in network w remains inactive ( i.e. , the higher the value of ) , the higher the probably that it will be acquired by network s. real - world examples of this mechanism include sick or disabled prey in an ecological system or countries whose economic systems remain in recession for too long a period . to evaluate the acquisition costs in both networks we define network wealth ( capital ) as proportional to two variables : the total number of links in the network as defined in conservation biology and the resilience of the network .note that if two networks have the same number of links but different resiliencies their wealth is not equal .note also that when network s acquires a node of degree from network w the overall resilience of network s decreases because it has acquired a weaker node .thus network s pays a instantaneous , collective cost through a feedback mechanism that decreases its resilience from an initial threshold to a new threshold .one of the important issues in dynamic systems that have a critical point as an attractor is whether a conservation of energy is required in local dynamic interactions . to quantify how threshold changes in competing networks , we define a conservation law that relates the feedback mechanism to the resilience dynamics as here is the size of the strong network , its average degree , and the degree of the node that has been taken over .thus , we assume that the more important the acquired node ( i.e. , the larger its degree ) , the greater the cost to the resilience of network s , making it more vulnerable to future attacks . as a result ,when a predator ( strong ) network s increases its size and its degree , its acquisition cost , , will decrease . herewe quantify how threshold of the stronger network changes in competing networks where we assume that threshold of the weaker network does not change because every node has the same threshold .the stronger network s has the initial number of nodes , the average degree .after a multiple takeovers , where took over nodes , , ... , with degrees , , ... , , respectively , by using eq .( 4 ) we obtain figure [ 33](a ) shows that when network s acquires nodes in network w the threshold of network s is increasingly affected as time passes . in this example , a node in network w is taken over by network s when the node is in failure state longer than time steps , where and .note that as network s acquires weak nodes , its threshold increases and it becomes more vulnerable .figure [ 33](b ) shows the interplay between the time required to acquire a node and the threshold .note that as increases , takeovers become increasingly rare and the final threshold of network s approaches its initial resilience , here .figure [ 33](c ) shows that , if the example in figure [ 1](b ) is extended to include a takeover mechanism , a fraction of active nodes in network s measured relative to the initial number of nodes in each network reaches values higher than one , with a peak at .note that when attacks cease ( e.g. 
, when, in an economic system, sanctions are lifted), i.e., when the internal failure probability is decreased back to its initial value, the fraction of active nodes in network S increases but network W is left irreversibly damaged (see the closed hysteresis). Thus far we have studied competing interconnected networks in which there is only one threshold characterizing each network. However, in real-world interconnected networks the functionality of a node in a given network is commonly not equally sensitive to the neighbors in its own network and in the other network. To this end, we assume that a node in network S can externally fail with a given probability if the fraction of its active neighbors in network S is equal to or lower than one threshold, or if the fraction of its active neighbors in network W is equal to or lower than another threshold. We similarly define external failure in the less resilient network W. The functioning of each node is thus dependent on its neighbors in network S and in network W, but with different sensitivities, i.e., different resilience to external fluctuations.

[Figure [3] caption: Similar to fig. [33] but now with two thresholds used to define the resilience of each network. (a) Phase diagram in the model parameters, where each network has its own hysteresis; the takeover mechanism is disregarded. (b) For each network, the time evolution of the fractions of active nodes with the takeover mechanism included; at some point, S creates larger damage in the weak (less resilient) network W than in itself. (c) Related to (b): as a result of the attack, both networks are damaged, but W is damaged more; also shown is how the fractions change when the internal failure probability is decreased during the recovery phase. (d) Early-warning signal for determining when the attack should be stopped, defined as the change in the ratio between the two fractions of active nodes; the attack should be stopped when the indicator reaches its maximum.]

Figure [3](a) shows, for a given set of parameters, a two-parameter phase diagram of competing networks, for a model that incorporates the threshold separation for external failure but excludes the takeover and feedback mechanisms. This model resembles that in fig. [1] but utilizes different configurations. Suppose network S spontaneously starts the attack at some time but, due to differences in the variables characterizing network S and network W, initiates a substitution mechanism, not a takeover: each time a node in network W spends a period in the inactive mode that exceeds the substitution time (e.g., in ecology, a period without food), it is replaced by a new node from network S. Figure [3](b) shows the fraction of active nodes in each network, calculated relative to the initial number of nodes. The fractions of active nodes of both networks exhibit a catastrophic discontinuity (a phase flip), which is characteristic of a first-order transition. Since both networks are interdependent, substituting nodes from the less resilient network W affects the functionality of network S even more dramatically than in fig. [1]. Thus, beyond some threshold, we expect that additional weakening of network W will also permanently damage network S. This demonstrates how dangerous an attacking strategy can be for the attacker in a system of interdependent networks, e.g., between countries that are at the same time competitors and economic partners.
Figure [3](c) shows that when the attacks and substitutions cease, the fractions of active nodes in networks S and W reach different end points. If the probability of internal failure spontaneously decreases during the recovery period, because of network interdependence the functionality of network S is not substantially improved: the triumph of network S over network W has its price. In ecology, for example, although the population of each species tends to increase, a dominance strategy is risky, e.g., the extinction of a key species can trigger, through a cascade mechanism, the extinction of many other species. Figure [3](d) shows the change in the ratio between the fractions of active nodes in network S and network W as a function of time. This ratio can serve as an early-warning mechanism that indicates when attacks should be stopped; optimally, the stopping time for attacks will be when the ratio reaches its maximum. Finally, fig. [4](a) shows that when the feedback mechanism (the cost of taking over) defined in eq. ([cost]) is included, the fraction of active nodes in each network exhibits an even richer discontinuous behavior than in fig. [3](c), where the cost was excluded. After 50,000 steps, because of the decrease in network S's resilience after each substitution, the final fraction of active nodes in network S is substantially smaller than the corresponding fraction in fig. [3](c) (i.e., when the cost is excluded). At the same time, fig. [4](b) shows that an increase in the takeover time decreases the fraction of substitutions.

[Figure [4] caption: (a) The cost of the attacking strategy with the takeover mechanism additionally decreases network resilience. The fraction of active nodes exhibits more discontinuities than in the case where the cost of an attack was excluded (fig. [3]); this is a consequence of the larger change in the resilience of S due to the inclusion of the cost mechanism. (b) The fraction of takeovers and the threshold of the stronger network S as a function of the takeover time.]

In conclusion, we have presented a theoretical framework based on resilience, competition, and phase transitions to introduce a cost-of-attack concept that relates feedback mechanisms to resilience dynamics defined using a linear conservation law. Our model for competing networks can be applied across a wide range of human activities, from medicine and finance to international relations, intelligence services, and military operations. We focus on a specific context in which the more resilient network attacks the less resilient competitor network. The model assumptions about the structure and dynamics of two interactive networks with competing interactions and different resilience levels have to be adjusted for different real-world scenarios (see the electronic supplementary material, S4).
the ability to measure attacker network resilience andits attack cost is crucial because every weakening of the resilience reduces the probability of the network survival under future attacks .for example , in political socio - economic systems a network - based approach for overcoming competing countries could be more effective by applying economical sanctions than carrying out military actions .interdependent links established between countries during prosperous times can facilitate sanctions ( intentional fluctuations ) that are used as a weapon when more resilient countries try to overcome less resilient countries .they can also facilitate the global propagation of economic recessions ( spontaneous fluctuations ) . during long economic crisesthese interdependent links can become fatal for less resilient countries , whose weakness is enhanced by being underdogs in a global network - of - networks and , at the same time , whose resources can be captured by more powerful countries .although , our proposed framework is suited for representing the most simplest case of bilateral economic interdependence between just two countries ( networks ) , it provides the basis for more general scenarios of alliances of more countries ( networks ) .the concept of alliance where some countries unite in order to attack some other alliance is especially interesting when there is heterogeneity in resilience of allied attacker countries .for example , economically most dominant countries can increase their dominance at the expense of their partners in the alliance or they can , on the other hand , depend on the alliance s weakest country ( see the supplementary material , s4 a ) .in addition to the intentional fluctuations characteristic of human societies , our methodology can also be applied to a broad class of complex systems in which spontaneous fluctuations occur , from brain functioning to ecological habitats and climate fluctuations .the methodology is based on specific structure , dynamics and mechanisms of the model of networks with competing interactions and different resilience levels , that have to be adjusted for different systems and contexts of application ( see the supplementary material , s4 ) . *acknowledgments * + we thank jacobo aguirre and david papo for discussions .+ * author contributions * + b.p ., j.m.b . and h.e.s .conceived and designed the research .. carried out the numerical simulations , analysed the results and developed the theory .all authors discussed the results and contributed to the text of the manuscript .+ * funding statement * + b.p .work was partially supported by the university of rijeka .m.p . acknowledges support from the slovenian research agency ( grant p5 - 0027 ) , and from the deanship of scientific research , king abdulaziz university ( grant 76 - 130 - 35-hici ) .j.m.b . acknowledges financial support of mineco ( project fis2013 - 41057-p ) .the boston university work was supported by onr grant n00014 - 14 - 1 - 0738 , dtra grant hdtra1 - 14 - 1 - 0017 and nsf grant cmmi 1125290 .the authors declare no competing financial interests .99 bastolla , u. _ et al_. the architecture of mutualistic networks minimizes competition and increases biodiversity ._ nature _ * 458 * , 1018 - 1020 ( 2009 ) .rohr , r. p. , saavedra , s. & bascompte , j. on the structural stability of mutualistic systems ._ science _ * 345 * , 416 ( 2014 ) . aguirre , j. , papo , d. & buld , j. m. successful strategies for competing networks .phys . _ * 9 * , 230 - 234 ( 2013 ) .dsouza , r. 
m. complex network : a winning strategy .phys . _ * 9 * , 212 - 213 ( 2013 ) .kivela , m. , arenas , a. , barthelemy , m. , gleeson , j.p . , moreno , y. & porter , m. multilayer networks ._ j. complex networks _ * 2 * , 203 - 271 ( 2014 ) .scholtes , i. _ et al_. causality - driven slow - down and speed - up of diffusion in non - markovian temporal networks .commun . _ * 5 * , 5024 ( 2014 ) .thebault , e. & fontaine , c. stability of ecological communities and the architecture of mutualistic and trophic networks . _ science _ * 329 * , 853 - 856 ( 2010 ) .richardson , l. f. the mathematical psychology of war ._ nature _ * 135 * , 830 - 831 ( 1935 ) .shakarian , p. , lei , h. & lindelauf , r. power grid defense against malicious cascading failure .arxiv 1401.1086v1 ( 2014 ) .scheffer , m. _ et al_. catastrophic shifts in ecosystems ._ nature _ * 413 * , 591 - 596 ( 2001 ) .podobnik , b. _ et al_. predicting the lifetime of dynamic networks experiencing persistent random attacks . _rep . _ * 5 * , 14286 ( 2015 ) .albert , r. , jeong , h. & barabsi , a. l. error and attack tolerance of complex networks ._ nature _ * 406 * , 378 - 382 ( 2000 ) .cohen , r. , erez , k. , ben - avraham , d. & havlin , s. resilience of the internet to random breakdowns .lett . _ * 85 * , 46264628 ( 2000 ) .reis , s. d. s. _ et al ._ avoiding catastrophic failure in correlated networks of networks , _ nat .* 10 * , 762 - 767 ( 2014 ) .buldyrev , s. v. _ et al ._ catastrophic cascade of failures in interdependent networks ._ nature _ * 464 * , 1025 - 1028 ( 2010 ) .dorogovtsev , s. n. & goltsev , a. v. critical phenomena in complex networks .phys . _ * 80 * , 1275 - 1335 ( 2008 ) .podobnik , b. _ et al ._ predicting the lifetime of dynamic networks experiencing persistent random attacks . _( accepted for publication ) .may , r. networks and webs in ecosystems and financial systems . _soc . a _ * 371 * , 20120376 ( 2013 ) . downing , a. s. , van nes , e. h. , mooij , w. m. & scheffer , m. the resilience and resistence of an ecosystem to a collapse of diversity . _plos one _ * 7 * , e46135 ( 2012 ) .barabsi , a. -l . & albert , r. emergence of scaling in random networks ._ science _ * 286 * , 509 - 512 ( 1999 ) .podobnik , b. , horvatic , d. , dickison , m. & stanley , h. e. preferential attachment in the interaction between dynamicly generated interdependent networks , _ epl _ * 100 * , 50004 ( 2012 ) .perc , m. the matthew effect in empirical data ._ j. r. soc .interface _ * 11 * , 20140378 ( 2014 ) .majdandzic , a. _ et al . _ spontaneous recovery in dynamical networks .* 10 * , 34 - 38 ( 2014 ) .watts , d. j. a simple model of global cascades on random networks .usa _ * 99 * , 5766 - 5771 ( 2002 ) .podobnik , b. _ et al ._ network risk and forecasting power in phase - flipping dynamic networks .* 89 * , 042807 ( 2014 ) .podobnik , b. _ et al . _ systemic risk in dynamical networks with stochastic failure criterion. _ epl _ * 106 * , 68003 ( 2014 ) .erds , p. & rnyi , a. on random graphs ._ i. publ ._ * 6 * , 290 - 297 ( 1959 ) .newman , m. e. j. assortative mixing in networks .lett . _ * 89 * , 208701 ( 2002 ) .errington , p.l .predation and vertebrate populations ._ * 21 * , 144 - 177 ( 1946 ) .genovart , m. _ et al_. the young , the weak and the sick : evidence of natural selection by predation ._ plos one . _ * 5 * , e9774 ( 2010 ) .hunter , m. l. _ fundamentals of conservation biology_. ( oxford : blackwell science , 1996 ) . costanza , r. _ et al._. 
the value of the world s ecosystem services and natural capital ._ nature _ * 387 * , 253 - 260 ( 1997 ) .bak , p. , tang , c. & wiesenfeld , k. self - organized criticality : an explanation of 1/f noise .lett . _ * 59 * , 381 - 384 ( 1987 ) .noel , p. a. , brummitt , c. d. & dsouza , r. m. controlling self - organizing dynamics on networks using models that self - organize . _ phys .lett . _ * 111 * , 078701 ( 2013 ) .markovic , d. & gros , c. power laws and self - organized criticality in theory and nature .rep . _ * 536 * , 41 - 74 ( 2014 ) .mold , j. w. & stein , h. f. the cascade effect in the clinical care of patients . _ new england journal of medicine _ * 314 * , 512 - 514 ( 1986 ) .estes , j. a. , duggins , d.o . & rathbun , g. b. the ecology of extinctions in kelp forest communities ._ conservation biology _ * 3 * , 252 - 264 ( 1989 ) .dakos , v. _ et al ._ slowing down as an early warning signal for abrupt climate change .usa _ * 105 * , 14308 - 14312 ( 2008 ) .adger , w. n. _ et al ._ social - ecological resilience to coastal disasters ._ science _ * 309 * , 1036 - 1039 ( 2005 ) .thompson , j. n. _ the geographic mosaic of coevolution_. ( university of chicago press , 2005 ) .de lange , h. j. , sala , s. , vighi , m. & faber , h. j. ecological vulnerability in risk assessment . a review and perspectives : _ science of the total environment _ * 408* , 3871 - 3879 ( 2010 ) .nowak , m. a. & highfield , r. _ supercooperators : altruism , evolution , and why we need each other to succeed_. ( simon and schuster , 2011 ) .vespignani , a. modelling dynamical processes in complex socio - technical systems _ nat ._ * 8 * , 32 - 39 ( 2012 ) .battiston , s. , caldarelli , g. , georg , c. , may , r. & stiglitz j. complex derivatives . _phys . _ * 9 * , 123 - 125 ( 2013 ) . *supplementary material : the cost of attack in competing networks *here we demonstrate that the resilience dynamics in finance can be presented in terms of a conservation ( linear ) law , where the threshold is controlled by an asset - debt ratio .recall that in our model a node externally fails with a certain probability when the total fraction of its active neighbors is equal to or lower than the fractional threshold .the larger the value , the less resilient the network . in quantifying the impact of a perturbation ( attack ) on the network, we shall demonstrate that the more severe the attack , the larger the impact on the network resilience .suppose a bank has an interbank asset invested equally in each of its neighboring banks .bank has also some asset , considered as a stochastic variable .following refs . , we define a bank to be solvent ( active ) when , i.e. , when the bank s assets exceed its liabilities , . here represents the fraction of inactive neighboring banks that can withstand and still function properly .note that this is related to threshold as , since we assume that the number of incoming links is equal to the number of outgoing links , .the larger the value , the more stable the bank .suppose that for each bank there is a linear dependence between , , and the network degree .g ., that and . then a bank is inactive when ( ) or when at least 50% of its neighboring banks are inactive .let us assume next that increases due to an external perturbation .then the new threshold is equal to or if the external perturbation is negative ( positive ) , or alternatively , the asset increases ( decreases ) ( eq .( [ th ] ) ) , the threshold increases ( decreases ) and the resilience decreases ( increases ). 
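as a purely illustrative aside, the solvency rule above can be turned into a two-line computation of the implied fractional threshold. the parametrization used below is an assumption made for this sketch (outside assets c, interbank assets q split equally over the neighboring banks, total debt d), not the exact quantities written in the original equations: the bank is taken to remain solvent as long as c plus the repayments from its active neighbors cover d.

```python
def fractional_threshold(c, q, d):
    """minimum fraction of *active* neighbors a bank needs to stay solvent.
    hypothetical balance sheet: outside assets c, interbank assets q split
    equally over the neighbors, total debt d; solvency is assumed to require
    c + q * (active fraction) >= d."""
    a_min = (d - c) / q                    # threshold on the active fraction
    return min(max(a_min, 0.0), 1.0)

# a bank that can withstand half of its neighbors failing
print(fractional_threshold(c=1.0, q=2.0, d=2.0))   # -> 0.5
```

with this reading, a larger debt or smaller outside assets raise the threshold and hence lower the resilience, in line with the argument above.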
the larger the number of neighbors ( ), the smaller the change in the network resilience. note that if the external perturbation attacking one node is shared not only by its neighbors but by the entire network with nodes, then is replaced by as in eq. (4) in the paper. note also that if we replace the linear dependence between assets and degree by a non-linear (power-law) dependence, e.g. , where is a constant parameter, we obtain a relationship similar to eq. ([cost1]), .

we focus on a mean-field approximation of the level of external and internal failures between nodes for two interconnected networks. every node has an internal failure probability, assumed for simplicity to be the same in both networks, . each node in network s has links with nodes in its own network and links with nodes in network w, and we require that more than nodes in network s and nodes in network w be active if the nodes in s are to function properly. conversely, if the number of inactive nodes in network s is , the probability that a node in network s will externally fail is ; similarly, if the number of inactive nodes in network w is , the probability that a node in network s will externally fail is . to simplify we can use . we denote the time-averaged fractions of failed nodes in network s and network w by and , respectively. using combinatorics, we calculate the probability that a node in network s has a critically damaged neighborhood among its neighbors in s to be , and the probability that it has a critically damaged neighborhood among its neighbors in w to be . the probability that a node fails externally due to failures in network s (network w) is ( ). if we denote internal failures in network s by a, external failures caused by network s by b, and external failures caused by network w by c, then the probability that a randomly chosen node in s fails is . assuming that a, b and c are not mutually exclusive but interdependent events, we arrive at , where , and , as in ref. ; a node recovers from an internal failure after a time period . the fraction of failed nodes in network w (failed either internally or externally) is obtained from the same equation by interchanging s and w.

we now focus on two interdependent ba networks with equal connectivity but different thresholds, where . fig. [7] shows that for each of the two interconnected networks the fraction of active nodes simultaneously jumps from one stable state to another. phase-flipping is obtained by setting the network close to a critical point of the kind reported for a single network in ref. . since the threshold in network s is substantially smaller than in network w, so that s is more resilient than w, the fraction of active nodes in network s is larger than the fraction of active nodes in network w. the volatile phase-flipping in network functionality is therefore more dangerous for network w than for network s. the fractions , as functions of time, can approximately model the populations of prey and predators and can thus be related to the periodic solutions of the lotka-volterra (predator-prey) equations.
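the self-consistent calculation sketched above can be written down explicitly. the snippet below is a schematic version under simplifying assumptions: the internal failure probability p is the same in both networks, a node fails externally when at least m_s (network s) or m_w (network w) of its neighbors are inactive, and the three failure routes are combined as if they were independent. all parameter names are placeholders for the symbols elided in the text, not the exact expression of the original derivation.

```python
from scipy.stats import binom

def crit(f, n, m):
    """probability that at least m of n neighbors are failed when each
    neighbor is failed independently with probability f."""
    return binom.sf(m - 1, n, f)

def failed_fractions(p, k, l, m_s, m_w, tol=1e-12, max_iter=10000):
    """mean-field fractions of failed nodes in two coupled networks s and w.
    p        : internal failure probability (same in both networks)
    k, l     : intra- and inter-network degrees
    m_s, m_w : critical numbers of failed neighbors for s and w nodes
               (a more resilient network tolerates more, so m_s > m_w).
    the three failure routes (internal, own network, other network) are
    combined assuming independence; this is a simplification."""
    f_s = f_w = p                          # start from internal failures only
    for _ in range(max_iter):
        new_s = 1 - (1 - p) * (1 - crit(f_s, k, m_s)) * (1 - crit(f_w, l, m_s))
        new_w = 1 - (1 - p) * (1 - crit(f_w, k, m_w)) * (1 - crit(f_s, l, m_w))
        if abs(new_s - f_s) + abs(new_w - f_w) < tol:
            break
        f_s, f_w = new_s, new_w
    return f_s, f_w

# the more resilient network s ends up with a smaller failed fraction than w
print(failed_fractions(p=0.05, k=10, l=10, m_s=7, m_w=4))
```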
figure : the larger the value , the less resilient the network. both networks exhibit the same hysteresis. the parameters are set in a part of phase space that enables phase flipping between active and inactive states. however, due to the different thresholds, network s is more resilient than network w and the average fraction of active nodes in network s is practically always larger than in network w.

most real-world systems are composed of networks which interact with each other in different ways. in our model we focus on a specific context where a more resilient (strong) network (the attack initiator) attacks a less resilient (weaker) competitor network (the attack target). the proposed model depends on the structure of the two interacting competing networks and on the dynamics relating the initiation of an attack against a competitor network to the consequences that might weaken the resilience of the attack initiator. within this framework, the assumptions about the structure and dynamics of two interacting networks with competing interactions and different resilience levels have to be adjusted to the different real-world scenarios. for the structure of the interacting networks we use a multilayer network model that distinguishes between intra-network and inter-network connections. although in our analysis we focused on specific architectures of intra-network and inter-network connections, in general they can be chosen arbitrarily depending on the real-world example. each competing network is formed by the intra-network connections between its own nodes, while interactions between competing networks are determined by inter-network connections between nodes from the two opposing competitor networks. we also assumed static intra-network and inter-network connections, while in different real-world examples these connections can change through time and depend on different spatial and temporal attributes. when we represent either the attacker or the attacked system as a network, we must also decide which attributes and which level of detail of the system are of interest for a specific real-world scenario. a single node in a network represents a particular component of a system or an aggregated set of similar components, while the consequences of an attack are described by the node's active or inactive state. the global collective state of both the attacker and the attacked network is measured by the total fraction of active nodes in each network. finally, the interpretation of a node's activity and inactivity depends on the application context of the different real-world attack scenarios. in order to explain the general network vulnerability to global cascades of local and external node failures / inactivations caused by attack consequences, we model the node dynamics by a cascade contagion model with a stochastic internal and external activation / inactivation process, inspired by the watts threshold model, where each node has a probability to internally fail / inactivate independently of other nodes.
at the same time, each node has a probability to externally fail / inactivate if the fraction of its active intra-network and inter-network neighbors is less than or equal to a specified fractional threshold. this fractional threshold can be interpreted as the network resilience to failures due to interdependencies: the larger the value of the fractional threshold, the less resilient the network. we assume that the attacker network has a higher resilience than its opponent network and that it is willing to partially weaken its own resilience in order to more severely damage the less resilient competitor. the attack concept, and its influence on the resilience dynamics of the attacker, is based on pushing nodes of the opponent network below their critical fraction of active neighbors, and on applying takeover and substitution mechanisms to those nodes after a long period of inactivity. however, this concept is highly context-dependent and its mechanisms have to be properly mapped onto the different real-world scenarios. since the attacker network has a higher resilience than the opponent network, it can mount an implicit, indirect attack by increasing the probability of internal failures / inactivations of its own nodes, thereby pushing nodes of the opponent network below their critical fraction of active neighbors. in addition, the takeover and substitution mechanisms define a direct attack on the opponent network. the takeover mechanism corresponds to an explicit attack on an opponent network with the same type of nodes as the attacker network, together with an explicit reduction of the attacker network resilience through an increase of its fractional threshold. the substitution mechanism, on the other hand, represents an explicit attack on the opponent network with an implicit reduction of the attacker network resilience, obtained by having nodes whose state depends more strongly on nodes from the opponent network. although we focus on the most simplified scenario of two interacting competing networks, in which only the more resilient network can attack the less resilient opponent, the introduced framework provides the basis for more general scenarios of alliances of several networks with different interaction types between them (e.g. competing intra-specific / intra-network, antagonistic, mutualistic) and with different strategies for attack as well as defense. how many new parameters have to be introduced for such a scenario depends strongly on the context of the study of interest. we can observe the collective state and resilience dynamics of each network separately, or grouped by affiliation, while a specific attack or defense strategy is taking place. how successful our network model might be when applied in practice depends first on how capable we are of estimating the model parameters and mapping them onto a real-world context.
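to make the contagion dynamics described above concrete, here is a minimal stochastic simulation of two coupled random networks. it is a sketch rather than the authors' code: the synchronous update rule, the erdos-renyi-like intra- and inter-network architecture, the pooling of intra- and inter-neighbors into a single active fraction, and the parameter names (p for the internal failure probability, th_s and th_w for the fractional thresholds, tau for the recovery time) are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_adjacency(n, k_avg):
    """symmetric erdos-renyi-like adjacency matrix with mean degree k_avg."""
    a = np.triu(rng.random((n, n)) < k_avg / (n - 1), 1)
    return (a | a.T).astype(int)

def simulate(n=500, k_intra=8, k_inter=4, p=0.01, th_s=0.3, th_w=0.6,
             tau=5, steps=2000):
    """two coupled networks s and w with internal failures (probability p),
    threshold-driven external failures and recovery after tau steps.
    a node is inactive while its down-counter is positive."""
    a_s = random_adjacency(n, k_intra)
    a_w = random_adjacency(n, k_intra)
    a_x = (rng.random((n, n)) < k_inter / n).astype(int)   # s-to-w inter-links
    down_s = np.zeros(n, dtype=int)
    down_w = np.zeros(n, dtype=int)
    history = []
    for _ in range(steps):
        act_s = (down_s == 0).astype(float)
        act_w = (down_w == 0).astype(float)
        deg_s = a_s.sum(1) + a_x.sum(1)
        deg_w = a_w.sum(1) + a_x.sum(0)
        # fraction of active neighbors, intra and inter pooled together
        frac_s = (a_s @ act_s + a_x @ act_w) / np.maximum(deg_s, 1)
        frac_w = (a_w @ act_w + a_x.T @ act_s) / np.maximum(deg_w, 1)
        # a node fails internally (prob. p) or externally (threshold rule)
        fail_s = ((rng.random(n) < p) | (frac_s <= th_s)) & (down_s == 0)
        fail_w = ((rng.random(n) < p) | (frac_w <= th_w)) & (down_w == 0)
        down_s = np.maximum(down_s - 1, 0)
        down_w = np.maximum(down_w - 1, 0)
        down_s[fail_s] = tau
        down_w[fail_w] = tau
        history.append((act_s.mean(), act_w.mean()))
    return np.array(history)

activity = simulate()
print("mean active fraction  s: %.2f  w: %.2f" % tuple(activity[-500:].mean(axis=0)))
```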
as described before, our model is structured by the architecture of intra-network and inter-network connections, which defines the networks' structure and the constraints for an interaction, and by the parameters of the cascade dynamics, including the internal and external failure probabilities, the fractional threshold and the limit on the inactivity period. we show how these parameters can be mapped onto two real-world scenarios from the ecology and political socio-economy application domains.

*socio-economic system : competing inter-firm network of countries under economic sanctions*

here we demonstrate how our modeling framework is mapped onto a specific political-socio-economic scenario in which one country imposes economic sanctions, or a similar political-economic attack / action, against another country. in this regard we explain the interpretation of the model parameters, the intra / inter-network architecture and the activation / inactivation dynamics. a critical aspect of a political-economic attack / action is the economic cost endured not only by the targeted country, but also by the sender country (the one carrying out the action). although we focus here on the simplest case of bilateral economic interdependence between just two countries, such actions can also bring economic consequences to third parties, such as neighbouring countries of the target or its trading partners, as well as to the sender. economic sanctions are a deliberate, government-inspired withdrawal, or threat of withdrawal, of customary trade or financial relations, and they are most effective when aimed against friends and close trading partners, as these countries have more to lose than countries with which the sender has limited or adversarial relations. interdependent links established between countries during prosperous times can facilitate sanctions (intentional fluctuations) that are used as a weapon when more resilient countries try to overcome less resilient countries. they can also facilitate the global propagation of economic recessions (spontaneous fluctuations). during long economic crises these interdependent links can become fatal for less resilient countries. similarly, when economic sanctions are lifted, the weaker economy is much harder to restore than the stronger economy, which recovers after suffering little damage. in order to explain the economic cost for the country imposing the sanctions, we first specify the state of a country's economic system by the collective activity of the individual firms that make up its economy. firms are connected to each other directly or indirectly through their business transactions (i.e.
obtaining materials from suppliers, delivering goods / products, providing services, or cooperating in r&d). the economic network of each country is represented by an inter-firm network, assuming that intra-network connections specify cooperative interactions between firms of the same country, and inter-network connections specify interactions between firms in different countries that are subject to the bilateral economic relations between the two countries. as many firms borrow from and lend to each other, and in particular when these firms are speculative and dependent on the credit flow, shocks to the liquidity of some firms may cause other firms to also experience financial difficulties. the way a geographically localized shock propagates through such inter-firm networks determines a country's resilience to the shocks experienced by its firms. these dynamic vulnerabilities of firms can be related to the fractional threshold parameter, which can be controlled by the firm's asset-debt ratio, where the activity / inactivity of a firm is determined by its solvency (see s1. resilience dynamics in finance). the attack concept here corresponds to the implicit, indirect attack: the sender country increases the probability of internal failures / inactivations of its own firms, causing firms of the target country to become insolvent. in addition, the takeover mechanism defines a direct attack on the target country's economy, carried out by taking over its firms that remain insolvent for some critical time. the extension of our modeling framework from the bilateral economic interdependence between just two inter-firm networks of competing countries to more general scenarios of network alliances, where some countries unite in order to attack some other alliance, is especially interesting when we introduce resilience heterogeneity among the allied attacker countries. the resilience heterogeneity of allied attackers can induce opposing interests and attack strategies between partners, and it can also describe scenarios in which the economically most dominant countries increase their dominance relative to other countries, and at the expense of their partner countries in the alliance, or in which, on the other hand, economically strong countries depend on the alliance's weakest country. for example, suppose two countries, i and ii, impose economic sanctions on a third country, iii, where i is more resilient than ii and both i and ii are more resilient than iii. the resilience of the two sender countries can differ, for instance because ii is more strongly linked with iii than i is. when the sanctions are lifted, the relative resilience of i with respect to ii can increase, making i even more dominant than before the sanctions. on the other hand, due to the interdependencies between the allied countries, i.e.
countries i and ii, the attack strategy of i also depends on the state of ii.

*ecological system : competing animal social networks under predator-prey interactions*

here we demonstrate how our modeling framework is mapped onto two competing animal social networks under predator-prey interactions. we explain possible choices and an interpretation of the intra / inter-network structures and of the activation / inactivation dynamics for a specific scenario in which individuals from one predator species attack individuals from one prey species. a predator is an animal that hunts and kills other animals (its prey) for food. the interpretation of the internal or external activity of predators and prey depends on the given context of the study of interest and on the interpretation of their intra / inter-network connections. the intra-network structure of the attacking (predator) and attacked (prey) networks can be represented by the corresponding socio-spatial networks of their individuals, while the inter-network connections, i.e. connections between predator and prey individuals, determine which predator individuals can come into close contact with which prey individuals to hunt them, based on spatial and temporal factors. interactions between animal individuals are highly dependent on their spatial proximity; determining the structure of an animal social network therefore also depends on the given spatial context. recent technological innovations in tracking devices and reality-mining approaches are starting to enable remote monitoring and the collection of detailed information on the behavior of individual animals at high spatial and temporal resolution. the structure of the social network between animal individuals influences a diversity of social behaviors, such as finding and choosing a sexual partner, making movement decisions, and engaging in foraging or anti-predator behavior, which is manifested at the population level in the form of habitat use, mating systems, and information or disease transmission. social interactions between individuals of the same species can differ in their type (competitive, cooperative, sexual), frequency and duration. animals modify their social interactions in response to changes in external conditions such as climate, predation pressure, and the social environment. although interactions among animals are dynamic, many animal social network studies examine static structures. for a simplified predator-prey interaction scenario we can assume static cooperative interactions and no competing interactions between individuals of the same species, i.e. intraspecific competition is not taken into account. cooperative interactions between individuals play an important role in the spreading of information within the network, affecting access to resources and the probability of predation of an individual. for example, individuals that have a large number of intra-network connections can be better at discovering new food patches. with this interpretation of the intra-network and inter-network connections, we can model the predators' coordinated attack through their activation dynamics. in this context it is reasonable to assume that a node in the attacked (prey) network will externally fail (be killed by predators) if there are enough activated predators (i.e.
a fraction of active inter-neighbors larger than a given threshold) that are in close range of the prey (i.e. those connected to it by inter-network connections). as usual, prey can also externally fail (die or migrate) if its fraction of active intra-neighbors is equal to or smaller than the fractional threshold. in this context we can interpret the internal and external failure / activation probabilities and the limit on the inactivity period for predators as their opportunity, need and willingness to hunt prey together within a certain time, while for prey these parameters can be interpreted as the way prey withstands predator attacks or keeps a certain habitat patch occupied by its own species through reproduction. prey organisms that are difficult to find, catch, kill or consume will survive and reproduce. the attack concept explained here is in contrast to the indirect attack concept, in which inactive / failed states of attacker nodes cause the external failure of attacked-network nodes. that assumption is reasonable in a scenario where inter-network connections represent prey dependence on its predators, i.e. where each prey requires a certain fraction of active predators in order to survive, because the active predators balance the size of the prey population (see also section s3. coupled ba interdependent networks with equal connectivity but different threshold). in that scenario, predators and prey depend on each other in that the predators rely on prey as a food source, but in turn they also protect the prey population against over-population, which could cause a decrease of their food source. besides that, in some cases a prey species can even facilitate its own predator. a possible context that relies on predator and prey interdependence is the explanation of how a disease or other internal disturbance in the predator population can affect the prey population dynamics, and vice versa. in contrast to the socio-economic scenario of competing networks of countries' firms, the takeover mechanism is not a suitable attack concept for the predator-prey scenario, where we model the interaction between social networks of different node types, but the substitution mechanism is suitable for representing an additional level of predation influence on the prey population. in general, predation influences organisms at two ecological levels. at the level of the individual, the prey organism suffers an abrupt decline of its lifetime reproductive success, because it will never reproduce again, and at the level of the community, predation reduces the number of individuals in the prey population. accordingly, the direct cost for a strong predator that easily hunts and kills its prey is generally negligible, but an indirect cost can be manifested through the fact that each reduction in prey abundance makes finding the next prey harder. on the other hand, the direct cost is noticeable for intraguild predators, i.e. predators that kill and eat other predators of different species at the same trophic level. classical models of predator-prey population dynamics, often considered as the basic building blocks of larger food-web models, explain how trophic interactions lead to oscillatory population cycles. these models assume a well-mixed population in which all pairs of individuals have equal probabilities of interacting with each other, which corresponds to a complete-graph structure. this means that the encounter rate between predators and prey is expressed in a mass-action fashion, i.e.
as a product of the prey and predator landscape densities. our model can describe a mechanistic approach to predator-prey interactions within a structured population of individuals of two animal species. animal social networks are harder to determine and observe than human social networks, since animal network data must be collected by direct observation of the interactions between individual animals. for some large herding species it may be possible to count every individual, but for many species this is not possible. in cases when it is too time-consuming to collect animal network data at the individual level, owing to the difficulty of capturing and identifying individuals, it can be useful to observe categories of individuals and consider the interactions between them. since spatial proximity is a significant factor for establishing and maintaining cooperation between individuals, we can observe interactions between predators and prey through the patches they occupy. while in social animal networks one node represents a single individual of a given species and each edge represents some form of interaction between two individuals, in food webs and ecological network studies a node typically represents a species, while connections between nodes represent different types of interactions between species. species may interact with each other through antagonism (prey-predator, host-parasitoid, or host-parasite interaction), competition, or mutualism. food webs can provide initial blueprints of the inter-connections between animal socio-spatial networks when extending our modeling framework from the interaction between just two species networks to the more general scenario of network alliances, in which several predator and/or prey networks cooperate or compete with each other. for example, when two prey species have a common predator, one prey species can lead to the indirect exclusion of the other; and when two predators compete for a single prey species, one predator species is always excluded by the other, even in the presence of a top predator.
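for contrast with the structured, network-based description developed here, the classical well-mixed (mass-action) predator-prey model mentioned above can be integrated in a few lines; the parameter values are arbitrary and purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, y, a=1.0, b=0.5, c=0.2, d=0.4):
    """classical mass-action model: prey grows at rate a and is consumed at a
    rate proportional to prey*predator; predators convert consumption into
    growth with efficiency c and die at rate d (all values illustrative)."""
    prey, pred = y
    return [a * prey - b * prey * pred,
            c * b * prey * pred - d * pred]

sol = solve_ivp(lotka_volterra, (0.0, 100.0), [2.0, 1.0], max_step=0.01)
print("prey oscillates between %.2f and %.2f" % (sol.y[0].min(), sol.y[0].max()))
```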
|
real - world attacks can be interpreted as the result of competitive interactions between networks , ranging from predator - prey networks to networks of countries under economic sanctions . although the purpose of an attack is to damage a target network , it also curtails the ability of the attacker , which must choose the duration and magnitude of an attack to avoid negative impacts on its own functioning . nevertheless , despite the large number of studies on interconnected networks , the consequences of initiating an attack have never been studied . here , we address this issue by introducing a model of network competition where a resilient network is willing to partially weaken its own resilience in order to more severely damage a less resilient competitor . the attacking network can take over the competitor nodes after their long inactivity . however , due to a feedback mechanism the takeovers weaken the resilience of the attacking network . we define a conservation law that relates the feedback mechanism to the resilience dynamics for two competing networks . within this formalism , we determine the cost and optimal duration of an attack , allowing a network to evaluate the risk of initiating hostilities .
|
flying animals have since long inspired admiration and fueled the imagination of scientists and engineers . alongside biologists studying form and function of flapping flyers in nature ,the last decade has seen an impressive quantity of studies driven by engineering groups using new techniques to develop and study artificial biomimetic flapping flyers .the widespread availability of high - speed video and in particular the merging of experimental methods borrowed from fluid mechanics into the toolbox of the experimental biologist have permitted to elucidate various key mechanisms involved in the complex dynamics of flapping flight ( see e.g. ) .a recent field of investigation concerns the efficiency of flapping flyers , the major interrogation being about how natural systems optimize energy saving together with performance enhancement .in particular , the passive role of wing flexibility to increase flight efficiency through the bending of the wings while flapping has attracted a lot of attention .it is commonly agreed that this efficiency enhancement comes from the particular shape of the bent wing , which leads to a more favorable repartition of the aerodynamic forces ( see and for an extensive review ) . for flying animals in air , such as insects, it has been proposed that wing inertia should play a major role in competing with the elastic restoring force , compared to the fluid loading .the mechanism governing the propulsive performance of the flapping flyer can therefore be seen at leading order as a two - step process , where the instantaneous shape of the wings is determined by a structural mechanics problem which then sets the moving boundaries for the aerodynamic problem . from a dynamical point of view , if we consider chordwise bending of a wing with a given flapping signal imposed at the leading edge , the instantaneous shape of the structure is strongly dependent on the phase lag between the forcing and the response of the wing ( respectively the leading and trailing edges ) .recent works by and using a simplified model of a flexible wing as a combination of heaving and passive pitching have shown that a transition from enhanced thrust to underperformance occurs for a critical phase value close to the resonant frequency of the system .this sustains the commonly invoked argument suggesting that flapping flyers could take advantage of a structural property to save energy by matching the relaxation frequency of their compliant wings to the wingbeat frequency . in naturethis has been observed in particular for undulatory swimming fish or other swimmers that use deforming propulsive structures , such as jellyfish or scallops ( see and references therein ) . in the case of insects ,however , the few available observations ( especially for large species ) report wingbeat frequencies far below the natural relaxation frequencies .recent experiments using a self - propelled model with large - flapping - amplitude elastic wings are consistent with the latter , since the propulsive efficiency of the model peaks for a flapping frequency lower than the primary linear resonance of the wings . fully predicting the wing beat rate as the undamped resonant frequency of a linear oscillator ( see e.g. 
) should be therefore taken with reserve .super - harmonic nonlinear resonances have been invoked , suggesting that flying animals may effectively flap their wings far below the primary resonance while increasing their performance .this is probably one mechanism among others governing the dynamics of flapping flyers , but it is clear that the details of the underlying fluid - structure interaction problem are poorly understood . more specifically , the underlying phase dynamics that set the instantaneous wing shape and lead first to an increase and then a loss of the thrust power ( and even a reversal of the propulsive force as in the case of ) remain unexplained . in this paperwe address these questions using the experimental self - propelled flapping - wing model with elastic wings described in . exploring a wide range of bending rigiditieswe show that , in the simplified context of chordwise - compliant wings , the performance optima of the system are far from being set by a simple resonant condition .we develop a nonlinear one - dimensional beam model for the bending wing which is reduced to a forced oscillator model suitable to study different nonlinear effects . in particular , a set of cubic nonlinearities coming from the clamped - free beam equation and a quadratic damping term representing the fluid drag associated to the fast flapping motion permit to account for the observed behavior .we show that the nonlinear nature of the fluid damping is an essential feature to determine the phase lag that leads to an increase / decrease of the efficiency . as a whole fluid - solid interaction process leading to propulsion , we provide evidence that flapping flyers may optimize their performance not by especially looking for resonance but by using passive deformation to streamline the instantaneous shape of the wing with the surrounding flow .the experimental setup is the same described by thiria & godoy - diana , inspired from the pioneer century experiment by marey : a flapping wing device is attached to a mast that is ball bearing mounted to a central shaft in such a way that the thrust force produced by the wings makes the flyer turn around this shaft .a particular attention has been paid to reduce friction losses in the whole system .wings are made of mylar^^ semicircles of diameter cm .the experimental parameters are the forcing frequency ( ) , the flapping amplitude ( ) and the chordwise rigidity of the wings ( ) governed by their thickness .in contrast with the first study reported with this setup , the set of wings used here covers a larger range of bending rigidities , from near - rigid to very soft materials .six pairs of wings have been tested . their structural properties (thickness , mass , and rigidity ) are summarized in table 1 .this specific setup allows to measure various averaged quantities ( see for details ) : the cruising speed when the device is allowed to turn around , and the thrust force when it is held at a fixed station ( see fig .[ powers ] ( a ) and ( b ) ) which gives the averaged aerodynamic thrust power , being the product . in both cases , the power consumption is measured . 
on the other hand, we performed a precise dynamical study of the flapping wing .for each set of parameters ( ) , the phase and amplitude of the trailing edge , with respect to the forcing flapping motion , has been measured using a fast cadenced camera ( 1000 fps ) in both air and vacuum .+ ) and ( d ) input ( ) powers as a function of .the gray area represents the optimum region , the dashed line indicates the location of the reduced natural frequency of the wing ( linear resonance ) . ]it is important to recall that for this setup , and more generally for flapping flyers in air , the main bending motor of the flexible wings is wing inertia .the competition between the wing inertia and the elastic restoring force is captured by the scaled elasto - inertial number : the first expression is a direct comparison between both the moments of inertial and elastic forces .interestingly , this number can also be expressed as a function of the ratio between the forcing and relaxation frequencies times the non - dimensional forcing amplitude of the driving motion , which allows to express directly the bending rate as function of a non - dimensional oscillator forcing term .the second expression is therefore useful to explore the nearness of the resonance and will be used to analyze the experimental data in this paper .results will therefore displayed as a function of the reduced frequency , where is the reduced flapping amplitude . in order to compare the aerodynamic performance in all the experiments , both the thrust force and cruising speedwere rendered non - dimensional using the appropriate scalings and . the non - dimensional powers ( displayed in fig . [ powers ] ( c ) and ( d ) ) then read and .mm and ) .as can be seen , the main deformation is mainly performed on the first mode . in this casethe phase lag is quite large , leading to a strong increase of flight performance .b ) : typical time series tracking the motion of the leading ( black curve ) and trailing ( red curve ) edges of the wing at mid - span , obtained from video recordings at 1000 fps .c ) : same as b ) but with a forcing near , exhibiting super - harmonic resonance typical from dynamical systems containing cubic nonlinearities . ] * table 1 . *wing properties & & & & & & wing thickness , ( mm ) & & & & & & mass per unit area ( kg.m ) & & & & & & rigidity ( n.m ) & & & & & & relaxation frequency ( hz ) & & & & & & color label in figures & & & & & & black in both the thrust force and cruising speed curves , it is clear that increasing wing flexibility brings out two distinct regimes : up to a certain flapping frequency , the more flexible wings outperform the rigid linear relationship ( see also ) .the measurements for the two most flexible wings evidence the appearance of an underperformance regime in which both and lie below the rigid wing case .looking now at the nondimensional thrust power , the data from all wings collapse on a single curve with a clear performance peak , which agrees with what has been observed by for heaving / pitching systems .an important point is that the maximum in performance does not take place at the resonant frequency , but much below ( around , represented by the gray shaded area ) .moreover , the nondimensional thrust power at ( see dahsed line in fig .[ powers ] ( c ) ) is even more than 4 times lower than the optimum value . 
at last, we remark that there is also no sign of a resonant behavior in the consumed power curve ( fig .[ powers ] ( d ) ) .we proceed now to study the behavior of the wings considered as a forced oscillator , assuming the oscillation of the leading edge to be the forcing and that of the trailing edge to be the response ( which means to assume that the wings bend following only the first deformation mode ) . as said before , the amplitude and phase shift of the responsecan thus be measured by following the two wing edges on a high cadenced camera recording ( as seen on fig .[ signalampphaz ] ( a ) ) . figs .[ signalampphaz ] ( b ) and ( c ) display two characteristic time evolutions of the driving oscillation ( the imposed wing beat , shown as black dots ) and the wing elastic response ( the motion of the trailing edge , red dots ) in the moving frame .the first case shows a typical response , at , mainly sinusoidal at the driving frequency , which supports the assumption that the oscillations of the wing follow a single mode . in the second case ,the driving frequency is near one third of the resonant frequency . as can be observed in fig .[ signalampphaz ] ( c ) , the response is then a combination between and , giving evidence of a super - harmonic resonance , pointing out the fact that the system integrates cubic nonlinearities .the non dimensional amplitude ( i.e. scaled by the length of the wing ) and phase have therefore been extracted from those signals for each pair of wings as a function of the reduced driving frequency for two different amplitudes .results are displayed in fig .[ ampphaz ] . in parallel , the same experiments have been conducted in a vacuum chamber at 10 % of the ambient pressure .results are also displayed in fig .[ ampphaz ] for comparison .as can be seen , the evolution of the amplitude shows a fast increase from very low flapping frequencies .this is the expected behavior owing to the inertial character of the forcing .a slight but rather broad peak can be observed in the nearness of in the amplitude curve , confirming the occurrence of the super - harmonic resonance hinted above and strengthening the fact that this type of mechanism may play a role as a strategy for performance enhancement in nature .two more points have to be underlined : first , measurements in air and vacuum are approximately the same , in accordance with the hypothesis that inertia is the main bending factor for flapping flyers .the second point is that no clear resonance is observed around ( only a barely visible peak in the case of the lowest forcing amplitude shown in the insert in fig .[ ampphaz](a ) ) . 
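the amplitude and phase lag of the trailing edge with respect to the imposed leading-edge motion can be estimated by projecting each tracked time series onto the forcing frequency. the routine below is a generic lock-in-style sketch, not the processing actually used for the figures; the sampling rate, the sign convention for the lag and the synthetic test signal are assumptions.

```python
import numpy as np

def fourier_coefficient(signal, f0, fs):
    """complex fourier coefficient of `signal` at frequency f0 (sampling rate fs)."""
    t = np.arange(signal.size) / fs
    return 2.0 / signal.size * np.sum(signal * np.exp(-2j * np.pi * f0 * t))

def amplitude_and_phase_lag(leading, trailing, f0, fs=1000.0):
    """response amplitude of the trailing edge and its phase lag in radians
    (negative = trailing edge lags the forcing), both at the drive frequency."""
    c_lead = fourier_coefficient(leading, f0, fs)
    c_trail = fourier_coefficient(trailing, f0, fs)
    lag = np.angle(c_trail) - np.angle(c_lead)
    lag = (lag + np.pi) % (2.0 * np.pi) - np.pi       # wrap to [-pi, pi)
    return np.abs(c_trail), lag

# synthetic check: a response of amplitude 0.8 lagging the forcing by 60 degrees
fs, f0 = 1000.0, 25.0
t = np.arange(0.0, 2.0, 1.0 / fs)
leading = np.cos(2 * np.pi * f0 * t)
trailing = 0.8 * np.cos(2 * np.pi * f0 * t - np.pi / 3)
print(amplitude_and_phase_lag(leading, trailing, f0, fs))   # ~ (0.8, -1.047)
```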
concerning the phase , the present results recover the trend of what has been observed recently : increases monotonically with .considering the experiments in air at normal conditions , this observation together with the performance increase shown in the first part of the ( fig .[ powers ] ( c ) ) , brings the following conclusion : the increasing phase shift , which corresponds to a situation where the wing experiences a larger bending at the maximal flapping velocity , leads to a more favorable repartition of the aerodynamic forces ( as discussed in ) .+ a simple argument widely shared in the community connecting the phase dynamics to the propulsive performance is : the larger the phase lag is , the best the thrust power would be , until the point where the wing experiences its largest bending at .however , while the argument reasonably agrees with the observations in the range of forcing frequencies where performance increases with , the maximum performance does not actually match with the maximum of bending that occurs at , but relatively far below this expected optimum ( which lies actually around ) .+ one last important remark to be made concerns the phase evolution in vacuum .it is clearly observed that decreases more slowly in the low density environment within the whole range of flapping frequencies studied .in contrast with the amplitude measurements , where the data from the experiments in vacuum follow roughly the same curve of those in air at atmospheric pressure , the large difference in the curves between both cases points out unequivocally the importance of the surrounding fluid in determining the phase dynamics .this point will be discussed later . at this stage , we have shown that , as observed in the pitching / heaving systems of , the increase in performance of elastic wings undergoing large oscillations is essentially governed by a fast growing phase evolution . however , the physical mechanisms governing the propulsive performance remain unclear . in particular , the mechanisms leading to the useful evolution of as well as the link between resonance and performance are still looking for a definitive answer . and ( filled symbols correspond to measurements in air , open symbols in vacuum ) .those results are compared to nonlinear predictions from eq .[ amp ] with ( gray line ) and without ( black line ) nonlinear air drag ( discussed further in the text ) . ]in order to understand those crucial points , one can consider the elastic wing as a clamped - free beam under base harmonic forcing .for simplicity , the beam is considered as one - dimensional taken at mid length in the spanwise direction of the wing .we assume here , according to the experiment , that only flexural displacements ( i.e. perpendicular to the direction of the flight motion ) are allowed .the structural properties of the beam are determined by measuring experimentally the relaxation frequency .thus , the equation governing the motion of the nonlinear flexural oscillations of clamped - free beam writes : dx \bigg ] ' \label{beam0}\end{aligned}\ ] ] where is the transversal local displacement , the young modulus , the second moment of inertia and the mass per unit of length . 
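as a consistency check on the relaxation frequencies quoted in table 1, the natural frequencies of a uniform clamped-free (cantilever) beam follow from the standard characteristic equation 1 + cos(x) cosh(x) = 0. the sketch below solves it numerically; the material values in the example are placeholders of a plausible order of magnitude, not the measured wing properties.

```python
import numpy as np
from scipy.optimize import brentq

def cantilever_frequencies(EI, mu, L, n_modes=3):
    """natural frequencies (hz) of a uniform clamped-free beam with bending
    stiffness EI, mass per unit length mu and length L."""
    def char_eq(x):                        # roots give beta_n * L
        return 1.0 + np.cos(x) * np.cosh(x)
    roots, x = [], 1.0
    while len(roots) < n_modes:            # bracket and refine each root
        if char_eq(x) * char_eq(x + 0.1) < 0.0:
            roots.append(brentq(char_eq, x, x + 0.1))
        x += 0.1
    beta_l = np.array(roots)               # 1.875, 4.694, 7.855, ...
    omega = beta_l**2 * np.sqrt(EI / (mu * L**4))   # rad/s
    return omega / (2.0 * np.pi)

# placeholder values of the order of a thin mylar wing chord (not table 1 data)
print(cantilever_frequencies(EI=3.0e-5, mu=0.01, L=0.06))
```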
writing as , where is the driving motion defined by , and using the non - dimensional quantities for space and time ; ; ; with , equation [ beam0 ] reads : d\tilde{x } \bigg ] ' -\bar{a_{\omega } } \ddot{\tilde{w_{0 } } } \label{beam}\end{aligned}\ ] ] which has to satisfy the clamped - free boundary conditions .the last term on the right hand side in eq .[ beam ] , , is a forcing term due to the wing inertia whose amplitude is given by the elasto - inertial number and which is dependent on the square of the driving frequency as seen before .+ the next step is to set apart the spatial dependence by projection of eq .[ beam ] onto the complete set of eigenfunctons defined by the linear part .the displacement is expended as ( see ) where are the non - dimensional _ linear _ modes for clamped - free beams which are not recalled here for the sake of brevity .the problem then writes ( the have been removed for simplicity ) : where and are determined by : ' \phi_{p}dx \label{f}\ ] ] the projection of the forcing term on the mode , , writes at the trailing edge : as the propulsive regimes observed in this work lie below the first relaxation frequency of the wing , we assume that the response of the wing is mainly governed by the first eigenmode . hence , equation [ x ] can be considerably simplified and reduces for the only mode 1 to : a crucial feature is now to choose a damping term to this dynamical system . during a stroke cycle , the wing follows very fast motions involving high local reynolds numbers , which prompt us to include a nonlinear quadratic fluid drag term in addition to the classical linear viscous friction law . the damping is then chosen as a combination of linear and nonlinear terms as follows : the linear and nonlinear coefficients and are estimated studying the impulse response for each wing .the solution of eq .[ x1 ] including damping is determined by using a classical multiple scale method at first order ( see ) . to this end, we introduce a small parameter and a detuning parameter .the problem to be solved reads . according to the multiple scales theory , we express the solution in terms of different time scales as where and are respectively short ( relative to the oscillation of the wing ) and long times scales .the system at order is an gives the straightforward solution where and are complex functions . at order , we obtain : using the expression of found at order into eq .[ eqq ] , an equation for is obtained by elimination of the secular terms : where the pre - factor in front of the nonlinear damping coefficient is obtained during the special integration over one period of the fourier expansion of the function ( see ).as can be seen , eq . [ amp ] is a characteristic equation of a forced damped oscillator with cubic nonlinearities . at last , substituting the polar form , separating into real and imaginary parts and looking only to the steady - state solutions , we find two relations for the amplitude and phase . corresponds to structural damping ( and viscous fluid damping relative to very small displacements ) and is therefore mainly dependent on the only displacement ( i.e. in the wing frame ) .in contrast , is strongly dependent on the global motion of the wing and has therefore to be estimated in the laboratory frame .thus , at first order , a reasonable corrected approximation for this term is . 
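the reduced single-mode equation, with its cubic stiffness, combined linear and quadratic damping, and an inertial forcing that grows as the square of the driving frequency, can also be integrated directly to illustrate the amplitude behaviour discussed below. the coefficient names (xi for the cubic term, lam1 and lam2 for the two damping contributions, a_f for the forcing amplitude) and their values are placeholders; the actual coefficients in the text follow from the modal projections and the measured impulse responses.

```python
import numpy as np
from scipy.integrate import solve_ivp

def single_mode(t, y, om, xi=1.0, lam1=0.02, lam2=0.5, a_f=0.05):
    """non-dimensional single-mode model (all coefficients are placeholders):
    x'' + lam1*x' + lam2*x'*|x'| + x + xi*x**3 = a_f * om**2 * cos(om*t)."""
    x, v = y
    acc = (a_f * om**2 * np.cos(om * t)
           - lam1 * v - lam2 * v * abs(v) - x - xi * x**3)
    return [v, acc]

def steady_amplitude(om, cycles=80):
    """half peak-to-peak response once transients have decayed."""
    t_end = cycles * 2.0 * np.pi / om
    sol = solve_ivp(single_mode, (0.0, t_end), [0.0, 0.0],
                    args=(om,), max_step=0.05, rtol=1e-8)
    tail = sol.y[0][sol.t > 0.75 * t_end]
    return 0.5 * (tail.max() - tail.min())

for om in (0.3, 0.6, 0.9):
    print("reduced frequency %.1f -> amplitude %.3f" % (om, steady_amplitude(om)))
```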
] where is the nonlinear cubic term coefficient , which is computed from eq .[ h ] and [ f ] .[ gamma ] closely resembles a classic nonlinear duffing oscillator except that the forcing amplitude is frequency dependent and that a nonlinear damping term is present . and phase with the reduced forcing frequency for the first mode of a clamped - free beam forced by inertia for two different ( high and medium ) amplitudes ( chosen arbitrarily for clarity ) .the blue line corresponds to the linear prediction , the black line to the non - linear model from eq [ amp ] with _ linear _ damping ( ) , the red line to the non - linear model with _ nonlinear _ damping ( ) .as can be seen only cases with relatively small flapping amplitude and _ linear _ damping can exhibit a slight resonance peak .greater amplitudes and/or presence of _ nonlinear _ damping behave as a non - resonant system in the domain of flyers capabilities . concerning the phase , models including only linear damping do not produce `` useful '' phase lag except in the nearness of the phase jump .in contrast , the presence of a nonlinear damping produces a fast and helpful evolution . ]predictions of the above model for the parameters of the experiments are plotted in fig .[ ampphaz ] for both cases in air and vacuum .in addition , for a clear understanding of the underlying dynamics described by eqs . [ a ] and [ gamma ] , a comparison between predictions from a linear model , a _ nonlinear _ with _ linear _ damping and a _ nonlinear _ with _ nonlinear _ damping is displayed in fig .[ oscillator ] for two flapping amplitudes .it can be seen that the model based on a single mode is capable of reproducing all the observations made from the experiments both in normal and low density environments .the good agreement between experiments and model allows us to pinpoint some mechanisms underlying the complex mechanisms of flapping flight .+ the first concerns the question of resonance : from fig .[ oscillator ] , it can be observed that the only case ( apart from the linear case ) exhibiting a slight resonance peak corresponds to relatively small flapping amplitude and damping coefficient [ i.e. only linear damping term , see fig .[ oscillator ] ( a ) ] .cases for higher amplitude and/or presence of nonlinear damping behave as a non - resonant like system in the range of flapping frequencies studied . in nonlinear oscillators , it is known that the main effect of the nonlinear term is to distort the resonance curve and shift the resonance peak to higher frequencies ( for a hardening coefficient , as in the present study ) .an important feature of such nonlinear systems is that the distortion of the shape of the resonance curve is directly dependent on the amplitude of the excitation . in the present case where the forcing is inertial , the response depends on the square of the forcing frequency ( or on the elasto - inertial number ) , which provides an increase of the amplitude plotted in fig .[ ampphaz ] independent of an intrinsic resonance mechanism .hence , we can expect the actual resonance curve of the system to be all the more distorted that the flapping frequency increases .another feature that makes it difficult for the flapping flyer to benefit from a resonance mechanism is the presence of a geometric saturation due to the finite length of the wing . always due to the inertia effects, this geometrical saturation will be reached all the more soon that the demand for larger amplitude ( i.e. 
better performances ) is increased .coming back to the distorted resonance curve , the visible consequence is that the wing , even for a small nonlinear cubic coefficient , behaves as a system never reaching a peak in the range of frequencies commonly used by flapping flyers .additionally , the presence of strong damping accentuates this behavior by smoothing the value of a possible resonance peak .this last observation is consistent with the fact that birds or insects may not especially look for structural resonance to improve their performance .the second point is the crucial role of fluid damping in triggering the phase lag that is useful for thrust enhancement . for the phase , shifting the resonance peak as a result of the nonlinear spring in the oscillator model means shifting the phase jump at to higher frequencies as well .thus , without air drag , as can be seen in fig .[ oscillator ] ( c ) and ( d ) , the nonlinear evolution of the phase would be even slower than in the linear case for which the phase evolution is already not especially favorable except in the nearness of the resonance .this is exactly what is observed for the vacuum measurements where the nonlinear damping due to fluid drag is negligible . on the contrary, the presence of a quadratic fluid damping determines a fast increase of the phase lag ( and a so a thrust improvement ) even from the very first flapping frequencies .this implies of course that strong flapping velocities are a necessary condition for the bending to become efficient ( i.e. elasticity will play a minor role if the flapping beat amplitude is not strong enough ) .+ summarizing , the instantaneous wing shape is given by the two following ingredients : inertia provokes the bending ( gives the amplitude ) and damping , by controlling the phase lag , allows this bending to be usefully exploited .large phase lags will provide largest bending of the wing at maximum flapping speed , leading to a more favorable repartition of aerodynamics forces .since classic resonance mechanisms can not answer it , the question of the performance optimum ( or the transition to underperformance ) remains unclear .we therefore proceeded to study the kinematics of the wing in the laboratory frame .in particular , we have compared both characteristic angles relative to the global wing motion .the first characteristic angle is dependent on the ratio between the maximal vertical flapping velocity and the cruising velocity and reads : . is considered as the instantaneous angle of attack of the wing and as can be seen , is directly related to the strouhal number which determines as well the performance of flapping flyers .we define a second characteristic angle as the trailing - edge angle taken at the maximum flapping velocity .this angle is directly related to the phase lag , and thus determines to what extent the bending of the wing will be useful in terms of performance .[ phitheta ] shows the evolution of the ratio , which is naturally a growing function of because both an increase in or a decrease in lead to an enhancement of the propulsive performance . and as a function of the reduced driving frequency .two regimes can be distinguished : ( i ) : corresponding to the performances increasing stage due to a useful phase lag .( ii ) : corresponding to the transition to under - performances due to a loss of the effective wing area .the optimum occurs therefore when and point at the same direction ( best phase lag ) . 
]the interesting point is that the location of the performances / under performances transition takes place at ( i.e. when both angles point instantaneously at the same direction ) .thus , the optimum value of does not corresponds to the maximum bending experienced by the wing ( which would be the optimal solution ) but to the moment when the deflection angle matches the angle of attack as sketched in fig .[ phitheta ] . for a rigid wing , because is fixed ( ) , the optimization problem is here nonexistent and thrust only depends on the driving frequency ( for a given amplitude ) . with flexibility and according to what has been previously observed , starts increasing and tends to align the wing trailing - edge with the flow .as discussed earlier , this leads to a more favorable repartition of the aerodynamics forces as sketched in fig .[ phitheta ] .+ however , this argument is only valid if the surrounding flow is totally attached to the wing ( i.e. separation occurs only at the trailing - edge ) .a situation where is strongly subjected to flow separation before the wing trailing edge . in this casethe effective surface relative to the aerodynamic load can be expected to be drastically reduced leading to a loss of aerodynamic performance .it has to be noticed that the value of , or more generally values of phases greater than observed in this experiments should be , theoretically , more optimal ( i.e. should give more optimal bending shapes for useful projection of forces ) .however , if a separation occurs , the corresponding loss of thrust force ( and so cruising speed ) will accelerate the decoherence of both angles and hence , will provoke the subsidence of the performance , as has been observed on fig .[ powers ] .the more economic strategy to fly is therefore to set which corresponds to the optimum way to transfer useful momentum .in this work , we aimed at describing the dynamics governing the performance of flapping flyers . considering large flapping amplitude and relatively large wings ( as for big insect species ), we have shown that nonlinear and inertia effects , together with geometric limitation , question the prevailing idea that energy - saving strategies in flapping flight must be related to resonance mechanisms . in search of improving performances, animals may actually stay below the resonance point .besides , the nonlinear nature of air drag ( which implies sufficiently strong flapping amplitudes ) seems to be a fundamental ingredient to create the phase lag between the leading and trailing edges of the flapping wing that allows the elasticity energy to be used at its best .one last comment is that the presence of structure resonances for flyers in nature is not invalidated by the mechanism described here .for instance , small insects may not use much elasticity and bending because either their wings are too small or the local reynolds number is not sufficiently high to produce enough damping , and thus a useful phase lag .however , studies containing a large bank of comparative resonant frequencies and wingbeats of insects or birds being rare in the literature , it is consequently hard to draw any conclusion about the existence of two distinct strategies at this state . 
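to make the single - mode model discussed above concrete , the sketch below integrates a forced oscillator with a hardening cubic stiffness , a linear damping term and a quadratic ( fluid ) damping term , driven inertially so that the forcing amplitude grows as the square of the driving frequency . this is only an illustrative numerical sketch : the parameter values ( xi , mu , gamma , f0 ) are arbitrary and are not the coefficients fitted to the experiments , and the integration is not the authors ' code . sweeping the frequency reproduces qualitatively the two observations made above : the absence of a sharp resonance peak , and the fast growth of the phase lag once the quadratic damping is active ( setting mu = 0 recovers the linear - damping behaviour ) .

```python
import numpy as np
from scipy.integrate import solve_ivp

def response(omega, xi=0.02, mu=0.4, gamma=1.0, f0=0.3, n_periods=80):
    """steady-state amplitude and phase lag of
    x'' + 2*xi*x' + mu*|x'|*x' + x + gamma*x**3 = f0 * omega**2 * cos(omega*t)."""
    rhs = lambda t, y: [y[1],
                        f0 * omega**2 * np.cos(omega * t)
                        - 2.0 * xi * y[1] - mu * abs(y[1]) * y[1]
                        - y[0] - gamma * y[0]**3]
    t_end = n_periods * 2.0 * np.pi / omega
    t = np.linspace(0.8 * t_end, t_end, 4000)              # keep only the steady state
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], t_eval=t,
                    max_step=2.0 * np.pi / (40.0 * omega))
    x = sol.y[0]
    amplitude = 0.5 * (x.max() - x.min())
    # phase of the first harmonic of the response relative to the forcing
    ref = np.exp(-1j * omega * sol.t)
    phase = np.angle(np.sum(x * ref)) - np.angle(np.sum(np.cos(omega * sol.t) * ref))
    phase = (phase + np.pi) % (2.0 * np.pi) - np.pi        # wrap to (-pi, pi]
    return amplitude, phase

for w in np.linspace(0.3, 1.5, 7):
    a, ph = response(w)                 # setting mu=0 recovers the linear-damping case
    print(f"omega = {w:.2f}   amplitude = {a:.3f}   phase = {np.degrees(ph):7.1f} deg")
```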
according to biologists , resonant mechanisms lie at the muscle level more than in the wing structure itself ( see and references therein ) , which supports the view that there is no reason , _ a priori _ , for flapping flyers to look for a structural resonance of the wing . further analysis along these lines would certainly help to discern whether or not universal characteristics exist for flapping flyers . the authors are grateful to daniel pradal for his help concerning the experimental setup , cyril touz for having shared his knowledge of nonlinear systems and sarah tardy for her careful reading of the manuscript . this work was supported by the french research agency through project anr-08-blan-0099 . w. shyy , h. aono , s. k. chimakurthi , p. trizila , c .- k . kang , c. e. s. cesnik , and h. liu . recent progress in flapping wing aerodynamics and aeroelasticity . progress in aerospace sciences , 2010 . in press , corrected proof . doi : 10.1016/j.paerosci.2010.01.001 .
|
saving energy and enhancing performance are secular preoccupations shared by both nature and human beings . in animal locomotion , flapping flyers or swimmers rely on the flexibility of their wings or body to passively increase their efficiency using an appropriate cycle of storing and releasing elastic energy . despite the convergence of many observations pointing out this feature , the underlying mechanisms explaining how the elastic nature of the wings is related to propulsive efficiency remain unclear . here we use an experiment with a self - propelled simplified insect model allowing to show how wing compliance governs the performance of flapping flyers . reducing the description of the flapping wing to a forced oscillator model , we pinpoint different nonlinear effects that can account for the observed behavior in particular a set of cubic nonlinearities coming from the clamped - free beam equation used to model the wing and a quadratic damping term representing the fluid drag associated to the fast flapping motion . in contrast to what has been repeatedly suggested in the literature , we show that flapping flyers optimize their performance not by especially looking for resonance to achieve larger flapping amplitudes with less effort , but by tuning the temporal evolution of the wing shape ( i.e. the phase dynamics in the oscillator model ) to optimize the aerodynamics .
|
the authors thank m. genovese and i. p. degiovanni for useful discussions . this work has been supported by miur ( firb `` lichis '' - rbfr10yq3h ) , mae ( inquest ) , and the university of trieste ( fra 2009 ) .
bipm , iec , ifcc , ilac , iso , iupac , iupap and oiml 2008 _ evaluation of measurement data supplement 1 to the guide to the expression of uncertainty in measurement propagation of distributions using a monte carlo method _ joint committee for guides in metrology , jcgm 101 , http://www.bipm.org/utils/common/documents/jcgm/jcgm_101_2008_e.pdf
jaynes e t 1957 phys . rev . * 106 * 620 ; jaynes e t 1957 phys . rev . * 108 * 171
wöger w 1987 ieee trans . measurement * im-36 * 655 - 658
lira i 2009 metrologia * 46 * l27
kullback s _ information theory and statistics _ ( wiley , new york , 1959 )
jaynes e t 1968 ieee trans . systems science and cybernetics * ssc-4 * 227
olivares s and paris m g a 2007 phys . rev . a * 76 * 042120
hyvarinen a 1998 adv . neural inf . process . syst . * 10 * 273 ; hyvarinen a and oja e 2000 neural networks * 13 * 411
|
supplement 1 to gum ( gum - s1 ) recommends the use of maximum entropy principle ( maxent ) in determining the probability distribution of a quantity having specified properties , e.g. , specified central moments . when we only know the mean value and the variance of a variable , gum - s1 prescribes a gaussian probability distribution for that variable . when further information is available , in the form of a finite interval in which the variable is known to lie , we indicate how the distribution for the variable in this case can be obtained . a gaussian distribution should only be used in this case when the standard deviation is small compared to the range of variation ( the length of the interval ) . in general , when the interval is finite , the parameters of the distribution should be evaluated numerically , as suggested by i. lira [ _ metrologia _ , 2009 , * 46 * , l27 ] . here we note that the knowledge of the range of variation is equivalent to a bias of the distribution toward a flat distribution in that range , and the principle of minimum kullback entropy ( mke ) should be used in the derivation of the probability distribution rather than the maxent , thus leading to an exponential distribution with non gaussian features . furthermore , up to evaluating the distribution negentropy , we quantify the deviation of mke distributions from maxent ones and , thus , we rigorously justify the use of gum - s1 recommendation also if we have further information on the range of variation of a quantity , namely , provided that its standard uncertainty is sufficiently small compared to the range . supplement 1 to gum ( gum - s1 ) provides assignments of probability density functions for some common circumstances . in particular , it is stated that if we know only the mean value and the variance of a certain quantity , we should assign a gaussian probability distribution to that quantity , according to the principle of maximum entropy ( maxent ) . the derivation is quite simple , as one has to look for the distribution maximizing the shannon entropy : = -\int_{\mathbbm r } \!\ ! dx\ , p(x ) \log p(x)\ , , \end{aligned}\ ] ] which is given by : where the values of the coefficients should be determined to satisfy the constraints : with : however , sometimes we also know the range of the possible values of the quantity . two relevant examples are given by the phase - shift in interferometry , which is topologically confined in a -window , and by the displacement amplitude of a harmonic oscillator , whose range of variation is dictated by energy constraints . in this case , it has been noticed by i. lira in that a gaussian probability distribution with support on the real axis can be rigorously justified only if the standard uncertainty is sufficiently small with respect to the range of variation of the quantity . more in details , if we have any information about the range of variation , then this information should be employed in deriving the distribution maximizing the entropy as well as in evaluating the values of the coefficients of the distribution . let us denote the range of the quantity , i.e. , the subset of the real line where the values of have nonzero probability to occur . the functional form of the distribution is still given by the exponential function in eq . ( [ expf ] ) , however with nonzero support only in , whereas the coefficients are to be determined by formulas like those in eq . ( [ constraints ] ) , again with replaced by . it then follows , e.g. 
, that for a variable which is known _ a priori _ to lie in a given interval , the maximum entropy distribution is not gaussian , and the gaussian approximation may be employed only if the standard deviation is small compared to range of the possible values of the quantity . here we point out that having information about the range of variation may be expressed as a bias of the distribution toward a flat distribution in that range and the reasoning presented in may be subsumed by the minimum kullback entropy principle ( mke ) . the kullback entropy , or relative entropy , or kullback - leibler divergence , of two distributions and reads : = \int_{\mathbbm r } \!\ ! dx\ , p(x ) \log\left [ p(x)/q(x)\right].\ ] ] according to the mke , in order to find the distribution given a bias toward , we should minimize the function : = k[p|q ] + \sum_{k=0}^{2 } \lambda_{k}\left [ \int_{\mathbbm r } \!\ ! dx\ , p(x)\,x^k - m_k \right],\ ] ] with respect to the function , obtaining : where the parameters can be still ( numerically ) computed by using eq . ( [ constraints ] ) . eq . ( [ mke : sol ] ) represents the probability distribution satisfying the given constraints , but with a bias toward the distribution , which , for instance , may contains the information about the range of the variable . this information , which in the case of the maxent is not explicitly taken into account , now it is naturally considered from the beginning . remarkably , this is a different scenario from that covered in gum - s1 , i.e. , when further information on the quantity is available , namely , the interval of values within which the quantity is known to lie is finite . indeed , as mentioned above , if the standard uncertainty is sufficiently small with respect to the range of variation of the quantity , we can adopt a gaussian probability distribution over the whole real axis and , thus , use the gum - s1 recommendation . in order to rigorously justify this statement , which has been qualitatively addressed in , we assess quantitatively how the knowledge of the range of variation influences the assignment of a probability distribution by considering the deviation of the mke distribution from a gaussian distribution , which would represents the maxent solution in the absence of any information about the range of variation . the deviation from normality of the mke distribution ( [ mke : sol ] ) may be quantified by its negentropy : = \mbox{ } \left[1+\log \left(2\pi \sigma^2_{{\scriptstyle x}}\right)\right ] -s[p]\,,\ ] ] where ] , , that corresponds to a bias of toward the flat distribution : \frac12 ] , the dependence of the coefficients and is such that we have a scaling law for negentropy , which depends only on the ratio . this is illustrated in fig . [ f : f1 ] , where we report the negentropy as a function of for different values of . $ ] . we report the negentropy of the distribution as a function of the ratio for different values of the variance : ( green squares ) , ( red circles ) , ( blue triangles ) . [ f : f1],width=264 ] in conclusion , we have shown that the determination of the probability distribution of a variable for which we know the first two moments and its range of variation may be effectively pursued by using the mke . furthermore , the negentropy of the distribution may be used to quantify how much the mke solution differs from the maxent one , i.e. to assess how the knowledge of the range of variation influences the assignment of a probability distribution . 
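for concreteness , the numerical evaluation described above can be sketched as follows . the code below is only an illustration , not the implementation used in gum - s1 or in the cited works : it assumes the mke / maxent form p(x) proportional to exp( - l1 x - l2 x^2 ) on a finite interval [ a , b ] and zero outside , solves the two moment constraints for the multipliers l1 and l2 , and then evaluates the negentropy with respect to a gaussian of the same variance .

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def mke_distribution(a, b, m, s2):
    """distribution p(x) ~ exp(-l1*x - l2*x**2) on [a, b] with mean m and variance s2."""
    def moment_equations(l):
        l1, l2 = l
        w = lambda x: np.exp(-l1 * x - l2 * x**2)
        z = quad(w, a, b)[0]                                   # normalization
        m1 = quad(lambda x: x * w(x), a, b)[0] / z
        m2 = quad(lambda x: x**2 * w(x), a, b)[0] / z
        return [m1 - m, (m2 - m1**2) - s2]
    # start from the multipliers of the untruncated gaussian
    l1, l2 = fsolve(moment_equations, x0=[-m / s2, 0.5 / s2])
    z = quad(lambda x: np.exp(-l1 * x - l2 * x**2), a, b)[0]
    p = lambda x: np.exp(-l1 * x - l2 * x**2) / z
    entropy = quad(lambda x: -p(x) * np.log(p(x)), a, b)[0]
    negentropy = 0.5 * (1.0 + np.log(2.0 * np.pi * s2)) - entropy
    return p, negentropy

# example: a zero-mean variable confined to [-1, 1]
for s2 in (0.01, 0.04, 0.09):
    _, j = mke_distribution(-1.0, 1.0, 0.0, s2)
    print(s2, j)   # negentropy grows as sigma becomes comparable to the range
```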
our analysis quantitatively supports the conclusions of ref . and rigorously justifies the use of the gum - s1 recommendation even when further information on the range of variation of a quantity is available , namely , provided that its standard uncertainty is sufficiently small compared to the range .
|
the number of mobile phones actively in use worldwide today is about 5 billion , with millions of new subscribers every day .mobile phones allow for unobtrusive and cost - effective access to previously inaccessible sources of behavioral data such as location , communications ( calls and text messages ) , photos , videos , apps and internet access .hence , a result of the ever - increasing adoption of these devices is the availability of large amounts of _ personal data _ related to habits , routines , social interactions and interests .however , the ubiquitous collection of personal data raises unprecedented privacy challenges .users typically have to make decisions concerning the disclosure of their personal information on the basis of a difficult tradeoff between data protection and the advantages stemming from data sharing .perhaps more importantly , people are typically not involved in the life - cycle of their own personal data as it is collected by websites and mobile phone apps , which results in a lack of understanding of who uses their data and for what .several researchers have proposed and investigated new user - centric models for personal data management , which enable individuals to have more control of their own data s life - cycle . to this end, researchers and companies are developing repositories which implement medium - grained access control to different kinds of personally identifiable information ( pii ) , such as _ e.g. _ passwords , social security numbers and health info , and more recently location and personal data collected online by means of smartphones or wearable devices .previous work has introduced the concept of _ personal data markets _ in which individuals sell their own personal data to entities interested in buying it .buyers are likely to be companies and researchers , while sellers are individuals who receive compensation for sharing their own data .et al . _ have recently proposed a mechanism called _ transactional _ privacy , devised to maximize both the user s control of their own pii and the utility of a data - driven market . in the context of prospective personal data markets that offer increased transparency and control , it is of great importance to understand the value that users put to their own pii .recently , carrascal _et al . _ used a refined experience sampling method ( resm ) and a reverse second price auction to assess the monetary value that people assign to their pii shared online via websites _keywords used in a search engine , photos shared in a social network , etc .however , the authors focus only on web - browsing behaviors without taking into account behaviors and personal information that can be captured by mobile phones . taking carrascal _et al . _ as an inspiration , in this paper we investigate the monetary value that people assign to different kinds of pii as collected by their mobile phone , including location and communication information .we carried out a comprehensive 6-week long study in a living lab environment with 60 participants and adopted a day reconstruction method along with a reverse second price auction mechanism in order to poll and collect honest monetary valuations .the main contributions of this paper are : 1 .quantitative valuations of mobile pii as collected by a 6-week long study conducted in the wild ; 2 . qualitative feedback on the valuations provided by each participant as gathered by an end of study ( eos ) survey ; 3 . 
a segmentation of pii valuations and findings based on 4 categories of mobile pii ( communications , location , media and apps ) , 3 levels of complexity ( individual , processed , aggregated ) , and one level of temporal granularity ( daily ) ; 4 . a set of key insights about people s sensitivities and valuations of mobile pii and implications for the design of mobile services that leverage mobile pii .in recent years , researchers have analyzed the factors that can influence a person s disclosure behavior and economic valuation of personal information .demographic characteristics , such as gender and age , have been found to affect disclosure attitudes and behavior .several studies have identified gender differences concerning privacy concerns and consequent information disclosure behaviors : for example , women are generally more protective of their online privacy .age also plays a role in information disclosure behaviors : in a study on facebook usage , christofides __ found that adolescents disclose more information .prior work has also emphasized the role of an individual s stable psychological attributes - _e.g. _ personality traits - to explain information disclosure behavior . explored the role of the big5 personality traits and found that agreeableness defined as being sympathetic , straightforward and selfless , has a significant influence on individual concerns for information privacy ._ and amichai - hamburger and vinitzky also used the big5 personality traits and found that agreeableness , conscientiousness , and openness affect a person s concerns for privacy .however , other studies targeting the influence of personality traits did not find significant correlations . more recently ,quercia _ et al ._ found weak correlations among openness to experience and , to a lesser extent , extraversion and the disclosure attitudes on facebook . in 2010 , lo suggested that locus of control could affect an individual s perception of risk when disclosing personal information : internals are more likely than externals to feel that they can control the risk of becoming privacy victims , hence they are more willing to disclose their personal information .individual differences are also found when providing economic valuations of personal data .for instance , some individuals may not be concerned about privacy and would allow access to their data in exchange for a few cents , whereas others may only consent if well paid .recently , aperjis and huberman proposed to introduce a realistic market for personal data that pays individuals for their data while taking into account their own privacy and risk attitudes .previous research has shown that disclosure and valuation depend on the kind of information to be released . 
reported that the valuation of some types of personal information , such as the subject s weight and the subject s age depends on the desirability of these types of information in a social context .some empirical studies have attempted to quantify subjective privacy valuations of personal information in different contexts , such as personal information revealed online , access to location data , or removal from marketers call lists .these studies can be classified into two groups .the first and larger group includes studies that explicitly or implicitly measure the amount of money or benefit that a person considers to be enough to share her / his personal data , namely their _ willingness to accept _ ( wta ) giving away his / her own data ( see for example ) .the second and smaller group includes studies about tangible prices or intangible costs consumers are _ willing to pay _ ( wtp ) to protect their privacy ( see for example , ) . in our paper , we do not deal with wta vs wtp , but we focus on wta for pii captured by mobile phones ( communications , apps and media usage , locations ) . a growing body of studies in the fields of ubiquitous and pervasive computing and human - computer interaction focuses on location sharing behavior and has highlighted the role played by the recipient of sharing ( who can access the information ) , the purpose , the context , how the information is going to be used and the level of granularity of the information shared .finally , studies have suggested the importance of analyzing people s actual behavior rather than attitudes expressed through questionnaires because often the actual behavior of people deviates from what they state .building upon previous work , in this paper we investigate the monetary value that people assign to different kinds of pii as collected by their mobile phone , including location and communication patterns .in particular , we carry out a comprehensive 6-week long study in a living lab environment with 60 participants and adopt a day reconstruction method and a reverse second price auction mechanism in order to poll and collect honest monetary valuations from our sample .next , we describe the methodology followed during our 6-week study . the living laboratory where we carried out our study was launched in november of 2012 and it is a joint effort between industrial and academic research institutions .it consists of a group of more than 100 volunteers who carry an instrumented smartphone in exchange for a monthly credit bonus of voice , sms and data access . the sensing system installed on the smartphonesis based on the funf framework and logs communication events , location , apps usage and photos shot . in addition , the members of the living lab participate in user - studies carried out by researchers .the goals of this living lab are to foster research on real - life behavioral analysis obtained by means of mobile devices , and to deploy and test prototype applications in a real - life scenario .one of the most important features of such a lab is its ecological validity , given that the participants behaviors and attitudes are sensed in the real world , as people live their everyday life , and not under artificial laboratory conditions .all volunteers were recruited within the target group of young families with children , using a snowball sampling approach where existing study subjects recruit future subjects from among their acquaintances . 
upon agreeing to the terms of participation , the volunteers granted researchers legal access to their behavioral data as it is collected by their smartphones .volunteers retain full rights over their personal data such that they can order deletion of personal information from the secure storage servers .moreover , participants have the choice to participate or not in a given study . upon joining the living lab, each participant fills out an initial questionnaire which collects their demographics , individual traits and dispositions ( _ e.g. _ big five personality traits , trust disposition , locus of control , etc . ) information .a total of 60 volunteers from the living lab chose to participate in our mobile personal data monetization study .participants age ranged from 28 to 44 years old ( , ) .they held a variety of occupations and education levels , ranging from high school diplomas to phd degrees .all were savvy android users who had used the smartphones provided by the living lab since november 2012 .regarding their socio - economic status , the average personal net income amounted to 21169 per year ( ) ; while the average family net income amounted to 36915 per year ( ) .all participants lived in italy and the vast majority were of italian nationality .our study ran for six weeks from october 28th , 2013 to december 11th , 2013 . at the beginning of the study , participants were explained that the study consisted of three phases : 1 . an initial questionnaire , which focused on their general perception of privacy and personal data ; 2 .a daily data collection phase that lasted 6 weeks where participants answered daily surveys to valuate their mobile personal data ; 3 . a final survey that aimed to clarify the results obtained and to collect qualitative feedback from participants .[ [ daily - surveys ] ] daily surveys + + + + + + + + + + + + + ad - hoc ` java ` code was developed and scheduled to run on a secure server each night in order to automatically generate personalized daily surveys for each participant .the survey questions were generated based on the mobile data collected during the previous day .everyday , at 12 pm , participants received an sms reminding them to fill out their survey via a personalized url ( through a unique hash ) . in order to test the live system and identify bugs , we ran a pilot for 10 days with a small set of volunteers who were not participants in the study .in addition , we allocated a _ training _week prior to starting the actual study so participants would get accustomed to the survey / auction scheme .next we describe the data that we collected during the study .we collected 4 categories of mobile personal data : ( 1 ) _ communications _ , in the form of calls made / received ; ( 2 ) _ locations _ , collected by the device gps sensor every minutes ; ( 3 ) _ running applications _ , sampled every 25 minutes ; and ( 4 ) _ media _ , _ i.e. _ number and timestamp of pictures taken and obtained by monitoring the device file system .the sampling rates for the different categories of data were empirically determined in order to have good resolution without significantly impacting the device s battery life . moving from finer to coarser granularity ,we probed participants about the following three levels of complexity for each category of data : ( 1 ) _ individual _ , encompassing individual data points ( _ e.g. 
_ a call made / received , a picture taken , a specific gps location ) ; ( 2 ) _ processed _ , depicting higher level information derived from the sensed events ( _ e.g. _ a given application has been running for n minutes , total distance traveled ) ; and ( 3 ) _ aggregated _ , portraying cumulative event information ( _ e.g. _ number of places visited , number of calls made / received ) . for each data category and level of complexity , participants were asked to fill out daily surveys that asked them about data from the previous day for each category and for a specific level of complexity ( up to 4 questions per day ) . for each question in the surveys , participants always had the option to opt - out and not sell that particular piece of information .next , we describe in detail the 4 categories and the 3 levels of complexity of mobile personal data that we collected in this study , which are summarized in table [ tab : features ] .[ cols="<,>,>,>",options="header " , ] _ individual _ communication data was restricted to voice calls made / received ; missed calls were discarded .the _ processed _ communication variable referred to the total duration of calls in the previous day , resulting in questions such as ` yesterday , you spoke on the phone for a total of 52 minutes ` . with respect to _ aggregated _ communications data , we alternated between two different aggregated variables on a weekly basis : on even weeks subjects were asked to monetize information about the total number of calls made / received during the previous day , while on odd weeks they were asked about call diversity , _ i.e. _ the number of different people that they talked to on the phone during the previous day .examples of questions related to aggregate communications are ` yesterday , you made / received 8 phone calls ` , or ` yesterday , you spoke on the phone with 3 different persons ` ._ individual _ location referred to a specific place visited by the participant in the previous day .semantic information associated to gps locations was derived via reverse geo - coding using yahoo query language . for individual locations , details on street , neighborhood and townwere included in the question .for example , ` yesterday , at 23:56 you were in via degli orbi 4 , trento ` .the _ processed _ location variable referred to the total distance traveled in the previous day , resulting in questions such as ` yesterday you covered a total distance of 13 km ` .finally , location data was spatially clustered over the reference time - range using a threshold of 100 meters to generate the _ aggregated _ location question ( _ e.g. _ ` yesterday you have been in 23 different places ` ) . 
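as an illustration of how the daily number of different places can be derived from the sampled gps fixes , a minimal sketch follows . the 100 m threshold is the one quoted above ; the greedy single - pass clustering , the haversine distance and the toy coordinates are assumptions made for the example and are not necessarily the exact procedure used in the study .

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """great-circle distance in meters between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def count_places(fixes, threshold_m=100.0):
    """greedy clustering: a fix joins the first seed closer than the threshold, otherwise it opens a new place."""
    seeds = []
    for fix in fixes:
        if not any(haversine_m(fix, s) < threshold_m for s in seeds):
            seeds.append(fix)
    return len(seeds)

day_fixes = [(46.0679, 11.1211), (46.0681, 11.1214), (46.0702, 11.1350)]  # toy data
print(count_places(day_fixes))   # -> 2 distinct places
```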
with respect to running apps , the _ individual _ variable included the timestamp and the name of the app running in the foreground . _ processed _ app information referred to the total number of minutes that a particular app was running over a specific time in the previous day , whereas _ aggregated _ app variables referred to the total number of different apps that the participant ran the previous day . examples of questions on app - related information for each level of complexity are ` yesterday , at 10:23 you were using the firefox browser application ` , ` yesterday night , the google talk application ran on your device for 82 minutes ` , and ` yesterday 9 applications were running on your device ` , respectively . _ individual _ media asked participants about the fact that they shot a photo at a specific time ( ` yesterday , at 14:23 , you shot one picture ` ) . for legal privacy reasons , the questions referring to individual media data could not include the actual picture they referred to . _ processed _ media probed participants about their photo - taking activity during specific times of the day ( _ e.g. _ ` yesterday morning you took 4 pictures ` ) . finally , the _ aggregated _ media variable referred to the total number of pictures shot the previous day ( _ e.g. _ ` yesterday you took 9 pictures ` ) . as previously mentioned , upon joining the lab each participant filled out 4 questionnaires to collect information about their personality , locus of control , dispositional trust and self - disclosure behaviors . the big five personality traits were measured by means of the bfms questionnaire , which is validated for the italian language and covers the traditional dimensions of extraversion , neuroticism , agreeableness , conscientiousness , and openness . participants also provided information about their _ locus of control _ ( loc ) , a psychological construct measuring whether causal attribution for subject behavior or beliefs is made to oneself or to external events and circumstances . the loc measures whether the outcomes of a set of beliefs are dependent upon what the subject does ( internal orientation ) or upon events outside of her / his control ( external orientation ) . loc was measured by the italian version of craig s locus of control scale . moreover , we collected information about the participants _ dispositional trust _ . rotter was among the first to discuss trust as a form of personality trait , defining interpersonal trust as a generalized expectancy that the words or promises of others can be relied on . in our study , we resort to mayer and davis s trust propensity scale .
finally , we targeted the _ self - disclosure _ attitudes of our subjects .self - disclosure has been defined as any message about the self that an individual communicates to another one .we used wheeless scale measuring five dimensions of emphself disclosure , namely ( i ) amount of disclosure , ( ii ) positive - negative nature of disclosure , ( iii ) consciously intended disclosure , ( iv ) honesty and accuracy of disclosure , and ( v ) general depth or intimacy of disclosure .wheeless scale has been utilized to measure self - disclosure in online communication and in interpersonal relationships .the personalized daily survey asked each participant to place a bid to sell one piece of their mobile personal information for each of the four categories of study ( communications , location , apps and media ) , for a specific level of complexity ( individual , processed , or aggregated ) and for the previous day .the winner of each auction won the monetary value associated with that auction . in exchange , (s)he sold that particular piece of information to the living lab which could use it for whatever purpose it wanted . in order to ensure a balanced sample ,surveys were generated by rotating the different levels of complexity described above , such that each day participants placed bids in up to 4 auctions : one for each category of personal information and for a particular level of complexity ( individual , aggregated or processed ) .note that in the case a participant did not generate any data for a particular category , s(he ) was still asked to provide a valuation to the fact that there was no data in that category , _e.g. _ ` yesterday you did not make any phone call ` . the participants bids entered a reverse second - price auction strategy , _i.e. _ , the winner was the participant(s ) who bid the lowest , and the prize was the second lowest bid .the choice of this auction mechanism was due to the following reasons : ( 1 ) the mechanism is truth telling given that the best strategy for the auction participants is to be honest about their valuation , ( 2 ) it is easy to explain and understand , and ( 3 ) it has successfully been used before to evaluate location information in and web - browsing information in .interventions , _i.e. _ individual communications of auction outcomes to participants , took the form of e - mails sent every thursday . in order to evaluate possible effects of winning frequency on bidding behavior , we employed two different auction strategies for the first and second halves of the study . during the first 3 weeks ( phase 1 ), we carried out weekly auctions on wednesday , taking into account all bids that had been entered during the previous 7 days for each category .therefore , in this phase , 12 weekly auctions took place with the daily bids for each category and level of complexity ( 4 categories x 3 levels of complexity ) . during the last 3 weeks of the study ( phase 2 ), we switched to daily auctions ; furthermore , the sample of bidding participants was split into 3 random subsets in order to increase their chances of winning , resulting in a total of 12 auctions per day .email interventions were always on thursdays and therefore this change was transparent to participants .interventions were sent to all participants , whether they had won auctions or not . 
in the case of winners ,the intervention email included the specific piece of information that the participant had sold , the corresponding winning bid , and the amount won .in the case of losers , the intervention email simply communicated the participant that s(he ) did not win any of their auctions .all emails were kept neutral for both winners and losers . in total, 596 auctions were run during the entire study ( 36 in the first three weeks , 560 afterwards ) . as previously explained , at the beginning and at the end of the data collection participants were required to fill out initial and end - of - study ( eos ) questionnaires .the initial questionnaire consisted of 5 questions ( see table [ tab : initsurvey ] ) and was used to gather information about the participants perception of privacy issues related to mobile personal data . from the responses provided to this survey , we notice that participants are concerned about mobile pii protection ( q1 ) but do not tend to read the terms of service ( q4 ) nor are aware of current legislation on data protection ( q5 ) .moreover , they do not seem to trust how neither application providers ( q2 ) nor telecom operators ( q3 ) use their data .the eos survey was designed to gather additional quantitative and qualitative information from our participants after the data collection was complete . in particular , we asked participants to put a value ( under the same auction game constraints ) on category - specific _ bulk information _ _ i.e. _ all the data gathered in the study for each category .for instance , in the case of location information , a visualization of a participant s mobility data collected over the 6-weeks period was shown in the web questionnaire ( as depicted in figure [ fig : bulk - screen ] ) and the participant was asked to assign it a monetary value .furthermore , for each category , we asked participants about the minimum / maximum valuations given during the study , in order to understand the reasons why they gave these valuations .table [ tab : finalsurvey ] contains all the questions of the eos survey .the eos questionnaire was administered through a slightly modified version of the same web application used for the daily surveys .the main difference are the visualizations of the collected data .the data used throughout this paper was collected from october 28th and december 11th 2013 , inclusive .data was not collected for the first 3 days of november , due to the all saints festivities in italy ; hence , our data - set encompasses 43 days .a total of 2838 daily surveys were administered during this period .statistics on bidding data and participation follow .table [ tab : bid - values ] summarizes the bidding values for each personal data category and level of complexity .figure [ fig : medianbids ] depicts median bid values each day for each category and level of complexity .the total amount won by participants in the form of auction awards was 262 which was paid in amazon vouchers . 
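the reverse second - price rule used in the daily auctions is simple enough to be written down explicitly . the sketch below is not the study 's actual implementation ; it only illustrates the rule stated above : the lowest bidder ( or bidders , in case of ties ) win and receive the second lowest bid .

```python
def reverse_second_price(bids):
    """bids: dict participant -> bid in euros (participants who opted out simply do not appear)."""
    if len(bids) < 2:
        return [], None
    values = sorted(bids.values())
    lowest, second_lowest = values[0], values[1]
    winners = [p for p, b in bids.items() if b == lowest]
    return winners, second_lowest

bids = {"p01": 2.0, "p02": 1.0, "p03": 5.0, "p04": 10.0}
winners, prize = reverse_second_price(bids)
print(winners, prize)   # ['p02'] 2.0 -> p02 sells the item and receives 2 euros
```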
additionally , we selected the ten subjects with the highest response rate and ran a raffle to select the winner of a final prize of 100 .a total of 29 subjects won at least one auction during the study ; the cardinality of the winning set ramped from 5 to 29 as an effect of the increased number of auctions run in the second phase of the study .the participation rates for daily surveys is 79% .as mentioned earlier , users were granted opt - out options for each survey question by ticking a check - box which portrayed ` i do not want to sell this information ` . table [ tab : rr - stats ] reports statistics of opt - out and distributions of valid responses ( _ i.e. _ survey items for which participants did not opt - out and entered their bid ) for each category .the bidding data that was collected in the study is not normally distributed .hence , we applied non - parametric analysis to test whether significant differences exist in the value distributions of different types of personal data .thus , we report results using the kruskal - wallis test with a level of significance of . furthermore , we carried out correlation analyses to investigate whether associations between mobile phone usage patterns , demographics , subjects predispositions , traits and auction behavior exist .for these analyses we employed the non - parametric spearman s rho method with a level of significance of . we investigate first daily bids and specifically whether significant differences exist between ( 1 ) the categories and ( 2 ) levels of complexity within each category of mobile personal data we collected .significant differences in bid distributions were found between all data categories , with the only exception of communications and apps . the lack of statistically significant differences between communications and apps could be partially explained by the fact that most of the apps installed and used by participants in the study are communication apps . in terms of both running time and installations , % ofthe top 20 apps are messaging apps ( whatsapp and similar ) , email ( gmail , hotmail , y!mail ) , voice - over - ip clients ( skype , viber ) and social networking clients ( facebook ) .we thus hypothesize that the distinction between communication and apps might be blurred .we leave the validation of such a hypothesis to future work .nonetheless , the finding that participants seem to perceive , and consequently valuate , communications provided by a telco company and those provided by mobile apps in a similar manner , is intriguing and worth investigating .next we analyzed the differences in the distribution of bids within the different levels of complexity of mobile personal data .in other words , we looked if bid distributions within a given mobile data category showed significant differences for individual , aggregated , and processed information .[ [ applications . ] ] applications .+ + + + + + + + + + + + + significant differences emerged between individual and aggregated information ( ) , and between aggregated and processed information ( ) . in particular ,aggregated information about running applications ( e.g. _ yesterday 7 applications were running on your device _ ) was valued less ( 1 ) than individual ( e.g. _ yesterday the gmail application was running on your device _ ) or processed ( e.g. 
_ yesterday the gmail application ran for 120 minutes on your device _ ) information ( 2 ) .no significant difference was found between monetary valuations of individual and processed information on running applications ( ) .[ [ media . ] ] media .+ + + + + + within the media category , a significant difference in bid distributions was found ( ) between aggregated ( e.g. _ yesterday you shot 8 pictures _ ) and processed ( e.g. _ yesterday night you shot 3 pictures _ ) information . while for both information types the median bid value is 1 , a significant difference exists in terms of dispersion : the quartile coefficient of dispersion ( _ i.e. _ the ratio between difference and sum of the 3rd and 1st quartiles ) is , respectively , and .[ [ communications - and - locations . ] ] communications and locations .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + no significant differences were found in within - category analyses for communications and locations . in other words , participants valued similarly the communication and location data with each of the 3 levels of complexity . as described earlier , in the middle of the study we increased the frequency of auctions from weekly ( phase 1 ) to daily ( phase 2 ) .this change was transparent to participants and the frequency of email interventions was kept constant every thursday .we designed these two phases to assess if the probability of winning had an effect on bidding behavior .indeed , we observe significant differences in bid distributions between the two phases for all categories : locations ( ) , communications ( ) , apps ( ) and media ( ) .moreover , we find that mobile pii valuations drop for all categories in the second phase , as more participants won the auctions to monetize their data .the monetary valuations gathered in the final questionnaire for bulk information ( _ i.e. _ all the data collected in the 6-weeks presented in aggregated / anonymized form ) are summarized in table [ tab : bulk - bids - stats ] . since participants could opt - out , we also report opt - out percentages for bulk information . comparing with daily bids ( see table [ tab : bid - values ] ) , the median bids for bulk information are one order of magnitude larger than the median individual bids , except for the media category .mean opt - out percentages are similar except for the apps category . the value ranking obtained from daily bids ( location communications apps media ) is different from that obtained in bulk bids ( location apps communications media ) . in particular ,application - related bulk data is valued significantly higher than communications - related bulk information . in order to assess whether significant effects exist between mobile phone usage patterns and bidding behavior , we first computed daily behavioral variables from the sensed data .table [ tab : usage - feats ] depicts the variables that we extracted with a daily granularity and for each participant . 
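before turning to the individual variables listed in table [ tab : usage - feats ] , the two nonparametric tools used in this section can be illustrated with standard library calls . the sketch below uses scipy with synthetic placeholder data ( the real bid and behavioral samples are not reproduced here ) : a kruskal - wallis comparison of bid distributions across categories , and a spearman correlation between a daily behavioral feature and per - participant bids .

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(0)
# placeholder bid samples per category, one value per auctioned item
location_bids = rng.exponential(4.0, 200)
communication_bids = rng.exponential(2.0, 200)
app_bids = rng.exponential(2.0, 200)
media_bids = rng.exponential(1.0, 200)

h, p = kruskal(location_bids, communication_bids, app_bids, media_bids)
print(f"kruskal-wallis: H = {h:.2f}, p = {p:.4g}")       # significant if p < 0.05

# association between a daily behavioral feature and per-participant mean bids
median_distance_km = rng.uniform(1.0, 40.0, 60)           # one value per participant
mean_location_bid = 0.1 * median_distance_km + rng.normal(0.0, 1.0, 60)
rho, p = spearmanr(median_distance_km, mean_location_bid)
print(f"spearman: rho = {rho:.2f}, p = {p:.4g}")
```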
with respect to _ location _ data , information about the number of places visited was derived under the assumption that two locations would correspond to different places if the distance between them was larger than a threshold set to 100 meters .the radius of gyration corresponds to the radius of the smallest circle encompassing all location points registered each day .for all these behavioral variables , we computed higher - order features corresponding to their statistical behavior over the 6-weeks period : mean , median , standard deviation , coefficient of variation ( ratio of the standard deviation to the mean ) and the quartile coefficient of dispersion .the last two features capture dispersion effects . furthermore , for each participant and data category , we computed mean , median , and standard deviation of their bids .we studied all correlations found between daily behavioral variables and bids in each category .we found a positive correlation between the mean location bid value and the median of daily distance traveled ( ) .that is , the larger the daily distance traveled , the higher the valuations of location information . with respect to applications , there are several statistically significant correlations . in particular ,the total app running time is negatively correlated with the median app bid value ( ) , meaning that the more time a participant spent using mobile apps , the lower the median valuations of app information .no significant correlation was found between communication and photo - taking behavioral features and bids on the communications and media categories . there were a number of significant correlations between bids on bulk information and daily behaviors .below we summarize the most notable correlations that we found .mobility information was positively correlated with bids on bulk _ location _ , _ communication _ and _ application _ information .in particular , with the median of the i ) radius of gyration ( for loc . ; for comm . ; for apps ) ; and ii ) daily mean speed ( for loc . ; for comm . ; for apps ) .location and application data was also positively correlated with the median of the daily mean distance traveled ( for loc . ; for apps ) whereas communication bids were also positively correlated with the median of the i ) total distance traveled ( ) and ii ) number of places visited ( ) .we also found statistically significant negative correlations of bulk location , communication and application bids with the coefficients of variation of mobility variables .these correlations imply that the larger the daily distance traveled , the higher the valuation of location , communication and application bulk bids .conversely , the higher the variation in the patterns of mobility of a person , the lower his / her valuation of location , communication and app bulk information .note that bulk communication bids were not correlated with communication variables .in addition , bulk application bids are negatively correlated with the cumulative sum of daily unique total apps ( ) and with the median ( ) and mean ( ) of total apps running daily .finally , bulk media bids are correlated with the cumulative sum of daily unique total apps ( ) . 
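the two dispersion measures used for the higher - order features are easy to compute ; a small sketch with invented sample values follows , matching the definitions given above ( coefficient of variation as the ratio of the standard deviation to the mean , and quartile coefficient of dispersion as the ratio between the difference and the sum of the 3rd and 1st quartiles ) .

```python
import numpy as np

def coefficient_of_variation(x):
    x = np.asarray(x, dtype=float)
    return np.std(x) / np.mean(x)

def quartile_coefficient_of_dispersion(x):
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

daily_distance_km = [3.2, 5.1, 44.0, 4.8, 6.0, 5.5, 30.2]   # one participant, one week (toy data)
print(coefficient_of_variation(daily_distance_km))
print(quartile_coefficient_of_dispersion(daily_distance_km))
```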
in the case of daily bids, we did not find any meaningful statistically significant correlation between bids and our participants demographics or personality .there were statistically significant correlations with _ self - disclosure _ variables that could be explained by the relevance of privacy aspects for all types of self - disclosure .in particular , the intentional / unintentional factor in self - disclosure is positively correlated with bids in three categories ( communication , applications and media ) : ( 1 ) mean ( ) , median ( ) and standard deviation ( ) in communication bids , ( 2 ) median application bid value ( ) , and ( 3 ) median ( ) , mean ( ) , and standard deviation ( ) of media bids .bulk location bids are found to be negatively correlated with creativity ( ) , while having positive correlations with the intentional / unintentional factor in self - disclosure ( ) and agreeableness ( ) .interestingly , a positive correlation exists between bulk location bids and personal income ( ) .furthermore , bulk communication information positively correlates with agreeableness ( ) , and with the intentional / unintentional factor in self - disclosure ( ) .in the final survey , we asked our participants about particular bids they made during the 6-week data collection phase , and gave them the opportunity to express their views and concerns in free - form text ( see table [ tab : finalsurvey ] for details ) .as seen in table [ tab : finalsurvey ] , q4 asked our participants about their trust preferences with respect to 5 different entities who could be the safekeepers of their personal data : themselves , banks , telcos , governments and insurance companies . from the trust rankings provided by our participants, we computed a _ trust score _ for each entity by assigning a 1 to 5 value according to its rank and subsequently normalizing by the number of respondents .the final ranking that we obtained was : _ yourself _ ( .997 ) , _ banks _ ( .537 ) , _ telcos _ ( .513 ) , _ government _ ( .49 ) , and _ insurance companies _ ( .46 ) .this result is aligned with the initial survey answers ( q2 and q3 in table [ tab : initsurvey ] ) where participants conveyed that they do not trust telco operators or app providers with how they use their data . in sum , overwhelmingly our participants trust themselves with their personal data more than any other entity , followed by banks and telcos .insurance companies were the least trusted party .a similar question was also asked by carrascal _ obtaining similar results : the most trusted entity for a subject was the subject himself and the least trusted entities were the insurance companies .interestingly , in our study , conducted in italy , government was the second _ least _ trusted entity while in carrascal_ et al . _ , conducted in spain , the government was the second _ most _ trusted entity . when analyzing the lowest / highest bids per category , we found that 70% of the highest bids for all categories took place in the first phase of the study ( during the first three weeks ) . 
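returning to the trust rankings reported above , the trust score computation can be made concrete as follows . the exact rank - to - value mapping is not specified in the text , so mapping rank r to ( 6 - r ) / 5 and averaging over respondents is an assumption chosen only to reproduce scores in the observed 0 - 1 range .

```python
from collections import defaultdict

def trust_scores(rankings):
    """rankings: list of lists, each an ordered ranking of entities (most trusted first)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        for rank, entity in enumerate(ranking, start=1):
            totals[entity] += (6 - rank) / 5.0
            counts[entity] += 1
    return {e: totals[e] / counts[e] for e in totals}

rankings = [
    ["yourself", "banks", "telcos", "government", "insurance"],
    ["yourself", "telcos", "banks", "insurance", "government"],
]
print(trust_scores(rankings))   # 'yourself' -> 1.0 when always ranked first
```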
adding more auctions ( as happened in the second phase of the study ) led to lower bids . in the communications category , 61% of the time participants entered a low bid to win and sell the associated communications information . this was significantly higher than for any other category . for all other categories , the most common reason reported for entering the low bid was that the information was not important . this finding suggests that participants found communication data to be the most desirable to sell . conversely , location was the most sensitive category of information , as 25% of the time participants entered a high location bid in order to avoid selling the information . this was significantly higher than for the other categories ( 5% for communications , 3% for apps and 6% for media ) . location was the category of data for which subjects opted - out the most ( 56% ) , followed by media ( 24% ) , apps ( 18% ) and communications ( 2% ) . in the free - text explanations provided by our subjects it is clear that location is deemed to be the most sensitive category of information , _ e.g. _ : `` i don't like the idea of being geo - localized . '' `` this kind of information is too detailed and too personal . '' interesting explanations were also provided to justify the choice of not selling apps information , including that from apps usage it is possible to infer information related to interests , opinions ( especially political opinions ) , and tastes : `` from the usage of some applications it is possible to infer information such as political orientation and other opinions and interests . ''
from the valuations obtained in and our study , it seems that individual pieces of pii are not as valuable when queried _ out - of - context _ such as in our study than _ in context _ such as in .conversely , bulk mobile pii was valued higher in our study than in and significantly higher than individual pii .as shown in tables [ tab : bid - values ] and [ tab : bulk - bids - stats ] , bulk information was valued an order of magnitude higher than individual data except for information in the media category .this finding is probably due to the power of the visualizations in the eos survey , particularly for location and apps data .one hypothesis for this higher valuation is that participants realized how bulk data conveyed information about their life - style and habits and therefore considered it to be more valuable than daily items .recently , tang _et al . _ have shown the impact of different visualization types ( text- , map- , and time - based ) on social sharing of location data .this result has a direct consequence for the design of trading mobile pii and highlights an asymmetry between buyers and sellers : for buyers , it would be more profitable to implement mechanisms to trade single pieces of information that they could later aggregate . for sellers , however , it would be more advantageous to sell bulks of information . as shown in tables [tab : bid - values ] and [ tab : bulk - bids - stats ] , location information received the highest valuation for all levels of complexity and was the most opted - out category of mobile pii .bulk location information was very highly valued , probably due to the powerful effect of the map visualization in the eos survey .several participants also expressed that they did not want to be geolocalized and considered location information to be highly sensitive and personal .moreover , we found statistically significant correlations between mobility behaviors ( _ e.g. _ mean daily distance traveled , daily radius of gyration , etc . ) and valuations of personal data .not all users value their personal data equally : the more someone travels on a daily basis , the more s / he values not only her / his location information but also her / his communication and application information . regarding this relation , previous works who focused on location information have presented contrasting results ; as we probe participants daily about fine - grained personal data they have just produced , our approach substantially differs from these survey - based studies , and it is thus difficult to directly compare with these works . generally , our results seem to support the findings presented in .these insights may have an impact on the design of commercial location - sharing applications . while users of such applications might consent at install time to share their location with the app, our work suggests that when explicitly asked about either individual or bulk location data , of users decide not to share their location information .in addition , mobility behaviors will influence the valuations of pii .tsai _ et al . _ conducted an online survey with more than 500 american subjects to evaluate their own perceptions of the likelihood of several location - sharing scenarios along with the magnitude of the benefit or harm of each scenario ( _ e.g. 
_ being stalked or finding people in an emergency ) . the majority of the participants found the risks of using location - sharing technologies to be higher than the benefits . however , today a significant number of very popular mobile apps such as foursquare and facebook places make use of location data . these popular commercial location - sharing apps seem to mitigate users ' privacy concerns by allowing them to selectively report their location using check - in functionalities instead of tracking them automatically . based on our findings , and given our participants ' concerns and high valuations of bulk location information , we believe that further user - centric studies on sharing and monetary valuation of location data are needed . when we correlated bid values against socio - demographic characteristics , we did not find significant correlations . this result is in contrast to previous work that found socio - demographic ( mainly sex and age ) differences in privacy concerns and consequent information disclosure behaviors . however , these previous studies focused mainly on online information and on disclosure attitudes and privacy concerns rather than on the monetary valuation of personal data . , instead , found results in line with ours ( no significant correlations ) , except for a surprisingly low valuation of online information from older users . on the other hand , we found statistically significant correlations between behavior ( particularly mobility and app usage ) and valuations of bids . from our findings it seems that personal differences in valuations of mobile pii are associated with behavioral differences rather than demographic differences . in particular , the larger the daily distance traveled and radius of gyration , the higher the valuation of pii . conversely , the more apps a person used , the lower the valuation of pii . a potential reason for this correlation is that savvy app users have accepted that mobile apps collect their mobile pii in order to provide their service and hence value their mobile pii less . we found a positive correlation between the intentional / unintentional dimension of self - disclosure and the median values of the bids . this result could be explained by the fact that people with more intentional control over disclosing their own personal information may be more aware of their personal data and hence also value it more from a monetary point of view . interestingly , we did not find significant correlations between bid values and other traits , with the exception of agreeableness ( with bulk location and communication bids ) . previous studies on the influence played by individual traits ( usually personality traits and loc ) on privacy dispositions and privacy - related behaviors have provided contrasting evidence : some of them found small correlations , while schrammel _ et al . _ found no correlations . hence , our results require additional investigations in order to clarify which dispositions and individual characteristics , if any , should be taken into account when a buyer makes a monetary offer for personal data . from our study and from carrascal _ et al .
_ , it clearly emerges that individuals mainly trust themselves to handle their own personal data .this result suggests the adoption of a decentralized and _ user - centric _architecture for personal data management .recently , several research groups have started to design and build personal data repositories which enable people to control , collect , delete , share , and sell personal data , and whose value to users is supported by our findings . during our studythere were two unusual days : december 8th ( immaculate conception holiday ) and november 11th ( a day with extremely strong winds which caused multiple road blocks and accidents ) .as can be seen in figure [ fig : medianbids ] , the median bids for all categories in these two days were significantly higher than for the rest of the days in the study .perhaps not surprisingly , participants in our study value their pii higher in days that are unusual when compared to typical days .this result suggests that not all pii even within the same category and level of complexity is valued equally by our participants , which has a direct implication for personal data markets and for services that monetize mobile personal data .we have investigated the monetary value that people assign to their pii as it is collected by their mobile phone .in particular , we have taken into account four categories of pii ( location , communication , apps and media ) with three levels of complexity ( individual , aggregated and processed ) .we have carried out a comprehensive 6-week long study in a living lab environment with 60 participants adopting a day reconstruction method along with a reverse second price auction mechanism to collect honest monetary valuations .we have found that location is the most valued category of pii and that bulk information is valued much higher than individual information ( except for the media category ) .we have identified individual differences in bidding behaviors which are not correlated with socio - demographic traits , but are correlated with behavior ( mobility and app usage ) and intentional self - disclosure .l. barkhuus , b. brown , m. bell , s. sherwood , m. hall , and m. chalmers . from awareness to repartee : sharing location within social groups . in _ proceedings of the sigchi conference on human factors in computing systems _ , pages 497506 .acm , 2008 .j. p. carrascal , c. riederer , v. erramilli , m. cherubini , and r. de oliveira . your browsing behavior for a big mac : economics of personal information online . in _ proceedings of the 22nd international conference on world wide web _ ,pages 189200 , 2013 .s. consolvo , i. e. smith , t. matthews , a. lamarca , j. tabert , and p. powledge .location disclosure to social relations : why , when , & what people want to share . in _ proceedings of the sigchi conference on human factors in computing systems _ ,pages 8190 .acm , 2005 .j. i. hong and j. a. landay .an architecture for privacy - sensitive ubiquitous computing . in _ proceedings of the 2nd international conference on mobile systems , applications , and services_ , mobisys 04 , pages 177189 .acm , 2004 .j. lin , g. xiang , j. i. hong , and n. sadeh .modeling people s place naming preferences in location sharing . in _ proceedings of the 12th acm international conference on ubiquitous computing _ ,pages 7584 .acm , 2010 .j. lindqvist , j. cranshaw , j. wiese , j. hong , and j. zimmerman .i m the mayor of my house : examining why people use foursquare - a social - driven location sharing application . 
in _ proceedings of the sigchi conference on human factors in computing systems _ ,pages 24092418 .acm , 2011 .m. mun , s. hao , n. mishra , k. shilton , j. burke , d. estrin , m. hansen , and r. govindan .personal data vaults : a locus of control for personal data streams . in _ proceedings of the 6th international conference _ , co - next 10 , pages 112 , 2010 .m. perugini and l. di blas .the big five marker scales ( bfms ) and the italina ab5c taxonomy : analyses from an emic - etic perspective . in b.de raad b. and m. perugini , editors , _ big five assessment_. gottingen : hogrefe and huber publishers , 2002 .d. quercia , r. lambiotte , d. stillwell , m. kosinski , and j. crowcroft .the personality of popular facebook users . in _ proceedings of the acm 2012 conference on computer supported cooperative work _ , pages 955964 .acm , 2012 . c. riederer , v. erramilli , a. chaintreau , b. krishnamurthy , and p. rodriguez . for sale : your data : by : you . in_ proceedings of the 10th acm workshop on hot topics in networks _ , hotnets - x , pages 13:113:6 , new york , ny , usa , 2011 .acm .j. schrammel , c. kffel , and m. tscheligi .personality traits , usage patterns and information disclosure in online communities . in _ proceedings of the 23rd british hci groupannual conference on people and computers : celebrating people and technology _ , pages 169174 .british computer society , 2009 .k. p. tang , j. i. hong , and d. p. siewiorek . understanding how visual representations of location feeds affect end - user privacy concerns . in _ proceedings of the 13th international conference on ubiquitous computing _ ,pages 207216 .acm , 2011 .e. toch and i. levi .locality and privacy in people - nearby applications . in _ proceedings of the 2013 acm international joint conference on pervasive and ubiquitous computing _ , pages 539548 .acm , 2013 .r. want , t. pering , g. danneels , m. kumar , m. sundar , and j. light . the personal server : changing the way we think about ubiquitous computing . in _ in proceedings of 4th international conference on ubiquitous computing _ , pages 194209 , 2002 .j. wiese , p. g. kelley , l. f. cranor , l. dabbish , j. i. hong , and j. zimmerman .are you close with me ?are you nearby ? : investigating social groups , closeness , and willingness to share . in _ proceedings of the 13th international conference on ubiquitous computing _ , pages 197206 .acm , 2011 .a. l. young and a. quan - haase .information revelation and internet privacy concerns on social network sites : a case study of facebook . in _ proceedings of the fourth international conference on communities and technologies _ ,pages 265274 .acm , 2009 .
|
in the context of a myriad of mobile apps which collect personally identifiable information ( pii ) and a prospective market place of personal data , we investigate a user - centric monetary valuation of mobile pii . during a 6-week long user study in a living lab deployment with 60 participants , we collected their daily valuations of 4 categories of mobile pii ( communication , _ e.g. _ phonecalls made / received , applications , _ e.g. _ time spent on different apps , location and media , _ e.g. _ photos taken ) at three levels of complexity ( individual data points , aggregated statistics and processed , _ i.e. _ meaningful interpretations of the data ) . in order to obtain honest valuations , we employ a reverse second price auction mechanism . our findings show that the most sensitive and valued category of personal information is location . we report statistically significant associations between actual mobile usage , personal dispositions , and bidding behavior . finally , we outline key implications for the design of mobile services and future markets of personal data . 2
|
the unavoidable interaction of a quantum system with its environment generally causes decoherence and a loss of quantumness . on the other hand ,the possibility to perform quantum operations within the coherence time of a quantum system lies at the heart of quantum information processing . a deep understanding of the decoherence mechanisms in quantum systems , together with the capability to engineer the environment in order to reduce its detrimental effects , are thus essential steps toward the development of quantum technologies .the interaction of a quantum system with its environment may be described using either a classical or a quantum mechanical picture of the environment .understanding whether and in which conditions the two descriptions are equivalent is still a debated topic .when the environment has many degrees of freedom and/or a structured noise spectrum , a quantum description may be challenging , and the approximations may be crude enough to prevent a reliable description of the dynamics . in these situations ,a classical description may be convenient and also more accurate .several systems of interest belong to these categories and many efforts have been devoted to study situations where quantum systems are affected by classical noise .examples include the dynamics of quantum correlations , the simulation of motional averaging , or decoherence in solid state qubits and the characterization of the environment using quantum probes .when the environment affecting the quantum system may be described as collection of fluctuators , a gaussian statistics for the noise can be assumed .moreover , the gaussian approximation is valid even in the presence of non - gaussian noise , as far as the coupling with the environment is weak . in this paper, we address the dynamics of entanglement for a two - qubit system subject to a classical noise induced by a gaussian stochastic process .specifically , we consider the case where the typical frequencies of the system are larger compared to those of the environment , so that the system dynamics can be described as a pure dephasing . dephasing induced by classical noise has been studied previously , and it is known to induce a monotonic decay of entanglement , including the phenomenon of sudden death ( esd ) i.e. the transition from an entangled to a separable state after a finite interaction time . herewe quantitatively compare the degrading effects of different kinds of environments by defining the _ entanglement - preserving time _ and the _ entanglement - survival time _ and by studying their dependence on the nature and on the parameters of the stochastic process that models the environment .we focus on two paradigmatic examples of gaussian processes describing normal and anomalous diffusion processes : the ornstein - uhlenbeck process and the fractional gaussian noise .this paper is organized as follows : in sec .[ sec : model ] we describe the physical model that accounts for the system - environment interaction and introduce the gaussian processes that drive the noise . 
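as a small numerical aside , the two driving processes can be made concrete by sampling a zero - mean gaussian process directly from its autocorrelation kernel and by evaluating the dephasing integral beta(t) , the double time integral of the kernel that enters the characteristic function for a constant test function . the sketch below does this with a cholesky factorization and a double trapezoid rule ; the exponential ornstein - uhlenbeck kernel written in the code , and all parameter values , are illustrative assumptions ( the fractional gaussian noise case is handled identically once its covariance is supplied ) .

```python
# sketch: (i) sample zero-mean gaussian noise on a time grid by a cholesky
# factorization of an assumed autocorrelation kernel, and (ii) evaluate
# beta(t) = int_0^t int_0^t K(s,s') ds ds' with a double trapezoid rule.
# the OU kernel K(s,s') = (g/2) exp(-g |s-s'|) and the parameter values are
# illustrative assumptions; the fGn case only needs its covariance swapped in.
import numpy as np

def ou_kernel(s1, s2, g=1.0):
    return 0.5 * g * np.exp(-g * np.abs(s1 - s2))

def sample_paths(kernel, tmax=5.0, npts=200, nsamples=3, seed=1):
    t = np.linspace(0.0, tmax, npts)
    K = kernel(t[:, None], t[None, :])
    L = np.linalg.cholesky(K + 1e-10 * np.eye(npts))   # tiny jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal((npts, nsamples))

def beta(kernel, t, npts=400):
    s = np.linspace(0.0, t, npts)
    K = kernel(s[:, None], s[None, :])
    return np.trapz(np.trapz(K, s, axis=1), s)

t, b = sample_paths(ou_kernel)                          # three OU realizations on [0, 5]
print("beta(2.0) for the assumed OU kernel:", beta(ou_kernel, 2.0))
```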
in sec .[ sec : results ] we look at the dynamics of the system and analyze in some detail the dependence of the entanglement - preserving time and the entanglement - survival time on the nature of the gaussian process and the initial state of the system .[ sec : conclusions ] closes the paper with some concluding remarks .we consider a system of two non - interacting , identical qubits , characterized by the same energy splitting and coupled to two external classical fluctuating fields .the effective hamiltonian is thus of the form where the local hamiltonians are \sigma_z.\ ] ] here , is a coupling constant and is an external classical field acting on each qubit , which we describe by means of a zero - mean gaussian stochastic process .we consider both the case in which the two qubits are interacting with two independent environments , i.e. and are totally uncorrelated , and the case in which the two qubits are subject to a common environment , .the hamiltonian models an effective interaction between a quantum system and a noisy environment having characteristic frequencies much smaller than the typical frequencies of the system .the hamiltonian in eqs . andcan also describe a two - level quantum degree of freedom coupled to a classical degree of freedom , for example the spin of a spin- particle undergoing a diffusion process in an external field .a gaussian process can be described completely by its second order statistics , i.e. by its mean and its autocorrelation function , in formula : } = 0 \\ k(t , t ' ) & = { \mathbb{e}\left[b(t)b(t')\right]}\end{aligned}\ ] ] where }$ ] denotes the average over all possible realizations of the process .the characteristic function of a gaussian process is defined as } = \notag \\ & \quad \exp\left[- \frac 12 \int_0^t\int_0^t\!\ !ds\ , ds'\ , f(s ) k(s , s ' ) f(s ' ) \right],\end{aligned}\ ] ] where is an arbitrary function of time .if is constant with respect to time , eq . rewrites as } = \exp \left[- \frac12 \kappa^2 \beta(t ) \right]\ ] ] where in this work , we focus on two paradigmatic gaussian processes : the ornstein - uhlenbeck ( ou ) process and the fractional gaussian noise ( fgn ) .the ou process describes a diffusion process with friction and it is characterized by the autocorrelation function where plays the role of a memory parameter and is the correlation time of the process . for increasing noise spectrum becomes broader and in the limit one achieves white noise .the fractional gaussian noise describes anomalous diffusion processes , with a diffusion coefficient proportional to , where is known as the hurst parameter .the covariance function may be written when we have and the fgn reduces to the wiener process ( i.e. brownian motion ) .when , the increments of the process have positive correlation and the regime is called super - diffusive ; when , we are in the sub - diffusive regime and the increments are negatively correlated .the functions for the ou and fgn processes are given by : the evolution operator for a given realization of the process , is expressed as : = \notag \\ = & \exp\{-i[\omega_0 t + \lambda \varphi_1(t)]\sigma_z \}\notag \\ & \otimes \exp\{-i[\omega_0t+\lambda \varphi_2(t)]\sigma_z\}\end{aligned}\ ] ] where we defined the phase noise .if the system is initially prepared in the state , the density matrix at a time is given by the expected value of the evolved density matrix over all possible realizations of the stochastic processes , i.e. 
}.\ ] ] as initial state , we consider a system prepared in a bell - state mixture : where , , and the are the three pauli matrices .the coefficients satisfy the condition , and are related to the through the equalities : we evaluate the entanglement by means of the negativity where are the negative eigenvalues of the partial transpose of the system density matrix .negativity is zero for separable states and one for maximally entangled states , such as pure bell states .here , we consider the case of independent environments , i.e. each qubit is coupled to its own environment , described by the stochastic field . in order to obtain the evolved density matrix of the system , we calculate the expectation value in eq . over all possible realizations of the two uncorrelated processes and .the evolved density matrix for the two qubits can be written explicitly by using eq . .we find that is , a pure dephasing map . by applying the local unitary transformation , we can write in the diagonal bloch form where and are the components of the initial state .since the density matrix depends on time only through the function , the system will reach the separable steady state for .the trajectories of the evolved states in the -parameter space are shown in fig .[ fig : trajectories ] ( left ) .we notice that , with the exception of initial bell states , the trajectories of the system actually enter the set of separable states at a finite time , thus showing a sudden death of entanglement .the negativity as a function of time , for an initial arbitrary bell - state mixture , is given by : as we can see from eq ., the evolution of negativity does nt depend on the energy splitting of the two qubits .if the two qubits interact with the same environment , we can assume that and thus \sigma_z \ } \otimes \exp\{-i[\omega_0t+\lambda \varphi(t)]\sigma_z\}.\ ] ] the evolved density matrix at time is given by and the bloch - diagonal form of the state ( after a local unitary transformation ) is \sigma_x \otimes \sigma_x + \notag \right .\\ & \left .+ \frac{1}{2 } \left[e^{-8 \lambda^2 \beta(t ) } ( a_2-a_1)+a_1+a_2\right ] \sigma_y \otimes \sigma_y + a_3 \sigma_z \otimes\sigma_z \right \}.\end{aligned}\ ] ] in this case , the negativity as a function of time for an initial arbitrary mixture of bell states , is \label{eq : negativity_ce}\end{aligned}\ ] ] the trajectories in the bell - state tetrahedron are shown in fig .[ fig : trajectories ] ( right ) .they run orthogonally to the plane . by looking at the figure, we notice that the system experiences esd when the initial state has , except for mixtures of and , for which only for . for those bell - state mixtures that are entangled and for which , the trajectory runs parallel to the surface of the octahedron and hence negativity is constant over time .this set also includes the two bell states which are stable states for the dephasing dynamics . , for two independent environments ( left ) and for a common environment ( right ) .the bell - state mixtures , eq . , form a tetrahedron .the set of separable states is the dark - blue octahedron .the initial states are bell - state mixtures that lie on the surface of the tetrahedron . for independent environments ,the trajectories converge to the green line . for a common environment ,the trajectories are directed orthogonally to the plane , shown in green . 
in both cases, remains constant.,title="fig : " ] , for two independent environments ( left ) and for a common environment ( right ) .the bell - state mixtures , eq . , form a tetrahedron .the set of separable states is the dark - blue octahedron .the initial states are bell - state mixtures that lie on the surface of the tetrahedron . for independent environments ,the trajectories converge to the green line . for a common environment ,the trajectories are directed orthogonally to the plane , shown in green . in both cases, remains constant.,title="fig : " ] the effect of the longitudinal field is to induce decoherence in the form of a dephasing .the entanglement , computed by the negativity , decays monotonically in time , as shown in fig .[ fig : neg_vs_t ] . in particular ,depending on the initial state different behaviors of quantum correlations appear : for initial bell states , the negativity goes asymptotically to zero , as a smooth function of time ; on the contrary , if the initial state is a mixture of bell states , entanglement displays sudden death , reaching zero abruptly . for a fixed initial state ,the robustness of quantum correlations depends on the nature of the considered stochastic process : different expressions of the function give different decaying velocities for entanglement .( right ) interacting with independent environments driven by different stochastic processes : white noise ( solid blue ) , ou with ( red dashed ) , wiener ( green dotted ) , fgn with ( dot - dashed black ) . for pure bell states ,the negativity decreases smoothly to zero , while for mixtures of bell states esd appears.,title="fig : " ] ( right ) interacting with independent environments driven by different stochastic processes : white noise ( solid blue ) , ou with ( red dashed ) , wiener ( green dotted ) , fgn with ( dot - dashed black ) .for pure bell states , the negativity decreases smoothly to zero , while for mixtures of bell states esd appears.,title="fig : " ] we now investigate the role of the different considered processes in enhancing the system s ability to retain its coherence . to be quantitative ,we define the _ entanglement - preserving time _ as the time at which the negativity of the system falls below a certain threshold , that we fix at the ratio of the initial negativity .we first consider the case in which the initial state is a bell state . in this case , the negativity as a function of time is easily found to be \\n_{\text{ce}}(t ) & = \exp [ -8 \lambda^2 \beta(t)]\end{aligned}\ ] ] for the independent - environment and common - environment case , respectively . upon introducing the quantity , we may write the entanglement - preserving time as in table [ tab : tstar ] , where we show the dependencies of on the parameters of the processes , i.e. the inverse of the correlation time for the ornstein - uhlenbeck process and the hurst parameter for the fractional noise .we also report the results for white noise ( i.e. ou for ) and the wiener process ( i.e. fgn with ) ..the entanglement - preserving time for different environments and for an initial pure bell state .the quantity is given by and is the lambert function , i.e. the principal solution of . [cols="^,^,^",options="header " , ] [ tab : tstar ] the entanglement - preserving time for ou and fgn is shown in fig .[ fig : tstar_ou_fbm ] as a function of the characteristic parameters and . for the ornstein - uhlenbeck process , in the limit of a quasi - static field ,i.e. 
, the entanglement - preserving time diverges , , such that system retains its coherence indefinitely , while in the markovian limit , , , recovering the behavior typical of the white noise . in the case of fgn ,the dependence of on is well approximated by a linear relation and the higher the diffusion coefficient , the longer the entanglement - preserving time .we also notice that , for vanishing , is comparable to the ou process with . indeed, we have that for small and for vanishing . for general mixtures of bell states, is always smaller than the case of pure bell states . as a function of the characteristic parameter of the external field .we show results for ornstein - uhlenbeck ( left ) and fractional gaussian noise ( right ) and for the case of independent ( solid blue ) and common ( red dashed ) environments.,title="fig : " ] as a function of the characteristic parameter of the external field .we show results for ornstein - uhlenbeck ( left ) and fractional gaussian noise ( right ) and for the case of independent ( solid blue ) and common ( red dashed ) environments.,title="fig : " ] in fig .[ fig : tstar_vs_neg ] we show as a function of the initial negativity for a set of randomly generated initial bell - mixed states interacting with ou and fgn external fields ( blue and red points respectively ) either independently ( left panel ) or as a common environment ( right panel ) .as it is apparent from the plots , the larger is the initial entanglement , the longer is the preserving time .this is true both in the case of independent and common environments . in the former case , the entanglement - preserving time is longer than in case of a common bath , for a fixed value of the initial negativity . in both scenarios ,the entanglement is more robust in the case of fgn , rather than the ou process , with longer values of the preserving time . by looking at fig .[ fig : tstar_vs_neg ] we see that the values of are not much dispersed .rather , they concentrate around typical values which strongly depend on the kind of environment and only slightly on the initial negativity itself .besides , the value of is bounded from below by an increasing function of the initial negativity , the analytical expression of which can be obtained by determining the entanglement - preserving time for mixtures of a and a bell state . in this case , for a given ratio to the initial negativity , satisfies the equation .\ ] ] where for independent environments and for a common environment . from eq .( [ eq : tstar_lower_bound ] ) we obtain lower bounds to as a function of , which are shown ( solid and dashed black lines ) in fig .[ fig : tstar_vs_neg ] .( for a ratio to the initial negativity ) as a function of the initial negativity for randomly chosen initial bell - state mixtures .we show results for the ornstein - uhlenbeck process with ( blue points ) and the wiener process , i.e. fractional gaussian noise with ( red points ) .the solid and dashed black lines are the lower bounds for for the ou and wiener process respectively , obtained from eq .[ eq : tstar_lower_bound ] . left : independent environments .right : common environment.,title="fig : " ] ( for a ratio to the initial negativity ) as a function of the initial negativity for randomly chosen initial bell - state mixtures .we show results for the ornstein - uhlenbeck process with ( blue points ) and the wiener process , i.e. 
fractional gaussian noise with ( red points ) .the solid and dashed black lines are the lower bounds for for the ou and wiener process respectively , obtained from eq .[ eq : tstar_lower_bound ] . left : independent environments .right : common environment.,title="fig : " ] as previously discussed , the interaction of the two - qubit system with the external classical field induces a sudden death of entanglement for most of the bell - state mixtures . in this sectionwe study how the nature of the stochastic gaussian process affects the _ entanglement survival time _, , i.e. the time at which the state becomes separable and its negativity goes to zero . in fig .[ fig : t_esd_vs_neg ] we show versus the initial negativity for randomly generated bell - state mixtures for the ou process and the fgn with .we can see that is bounded from below by a monotonically increasing function of negativity , which itself diverges for , i.e. as the initial state gets closer to a pure bell state .the analytical expression of this function is obtained by considering initial states belonging to a face of the bell - state tetrahedron , and thus easily follows from eq .( [ eq : tstar_lower_bound ] ) by substituting .we have where for the independent - environments case and for the common - environment case .survival time is thus longer for larger values of the initial entanglement . in the case of independent environmentsthe lower bound is larger than in the case of a common environment , confirming the tendency of entanglement to be more robust in the case of independent noises affecting the two qubits . as opposed to the entanglement - preserving time , the behavior of is comparable for the two considered processes . as a function of the initial negativity for randomly chosen ( initial ) bell - state mixtures , for the ornstein - uhlenbeck process with ( blue ) and the wiener process , i.e. fractional gaussian noise with ( red ) .left : independent environments .the solid and dashed lines are the lower bounds for for the ou and wiener process respectively , obtained from eq .[ eq : tes_lower_bound ] .right : common environment.,title="fig : " ] as a function of the initial negativity for randomly chosen ( initial ) bell - state mixtures , for the ornstein - uhlenbeck process with ( blue ) and the wiener process , i.e. fractional gaussian noise with ( red ) . left : independent environments .the solid and dashed lines are the lower bounds for for the ou and wiener process respectively , obtained from eq .[ eq : tes_lower_bound ] .right : common environment.,title="fig : " ]the decoherence caused by the interaction of a quantum system with the external environment is one of the main obstacle to the large scale deployment of quantum communication protocols and quantum information processing .a deep understanding of the decoherence mechanisms and the ability to engineer the environment are thus in order to obtain more robust quantum correlations and to design robust implementations of quantum technologies . in this paper, we have addressed the dynamics of a two - qubit system interacting with classical noise generated by a stochastic gaussian process and leading to a dephasing time evolution . 
in particular , we considered two diffusion processes : the ornstein - uhlenbeck process , characterized by a decoherence time and the fractional gaussian noise , characterized by the hurst parameter .we computed the time evolved density matrix of the two - qubit system by performing the average over the stochastic processes , both in the case of independent and common environments .we have characterized the trajectories of the system inside the set of mixtures of bell - states and shown the occurence of sudden death of entanglement for certain sets of initial quantum states .we introduced the entanglement - preserving time and the entanglement - survival time in order to analyze the effects of the nature of noise on the decoherence mechanism .we found that is larger for fgn than ou process and that a larger initial entanglement corresponds to a longer preserving time .we also found that is bounded from below by an increasing function of the initial negativity and that independent environments degrade quantum correlations more weakly than a common one . also the survival time is bounded from below by a ( different ) increasing function of the initial negativity but , contrarily to the preserving time , has comparable values for the two considered processes .overall , our results indicate that engineering the environment has only a slight influence over the entanglement - survival time , i.e. the occurence of entanglement sudden - death , while it represents a valuable resource to increase the entanglement - preserving time , i.e. to maintain entanglement closer to the initial level for a longer interaction time .this work has been supported by the miur project firb - lichis - rbfr10yq3h .10 j. helm and w. t. strunz , _ phys . rev .a _ * 80 * ( 2009 ) 042108 . j. helm , w. t. strunz , s. rietzler , and l. e. wrflinger , _ phys .a _ * 83 * ( 2011 ) 042103 .d. crow and r. joynt , _ phys .a _ * 89 * ( 2014 ) 042123 . w. m. witzel , k. young , s. das sarma , arxiv:1307.2597v1 . t. yu and h. h. eberly , _ opt ._ * 283 * ( 2010 ) 676 .b. bellomo , g. compagno , a. darrigo , g. falci , r. lo franco , and e. paladino , phys .a 81 , 062309 ( 2010 ) j .- q . li and j .- q .liang , _ phys .a _ * 375 * ( 2011 ) 1496 . c. benedetti , f. buscemi , p. bordone , m. g. a. paris , _ int .j. quantum inf . _* 10 * ( 2012 ) 1241005 .p. bordone , f. buscemi , c. benedetti , _ fluct .noise lett . _* 11 * ( 2012 ) 1242003 .r. lo franco , b. bellomo , e. andersson , and g. compagno , phys .rev . a * 85 * ( 2012 ) 032318 r. lo franco , b. bellomo , s. maniscalco , and g. compagno , int . j. mod . phys .b * 27 * ( 2012 ) 1345053 j .- s .xu , k. sun , c .- f .li , x .- y .xu , g .- c .guo , e. andersson , r. lo franco , nat . commun . 4 ( 2013 ) 2851 .a. darrigo , r. lo franco , g. benenti , e. paladino , and g. falci , ann . phys .* 350 * , ( 2014 ) 211 .j. li , m. p. silvestri , k. s. kumar , j .-pirkkalainen , a .vepslinen , w. c. chien , j. tuorila , m. a. sillanp , p. j. hakonen , e. v. thuneberg , and g. s. paraoanu , _ nat .* 4 * ( 2013 ) 1420 .k. kakuyanagi , t. meno , s. saito , h. nakano , k. semba , h. takayanagi , f. deppe , a. shnirman , _ phys .* 98 * ( 2007 ) 047004 f. yoshihara , k. harrabi , a. o. niskanen , y. nakamura , and j. s. tsai , _ phys .* 97 * ( 2006 ) 167001 e. paladino , l. faoro , g. falci , and r. fazio , _ phys .* 88 * ( 2002 ) 228304 ; g. falci , a. darrigo , a. mastellone , and e. paladino , _ phys . rev .* 94 * ( 2005 ) 167002 g. bukard , _ phys .b _ * 79 * 125317 ( 2009 ) . h.j. 
wold , h. brox , y. m. galperin , and j. bergli _ phys .b _ * 86 * ( 2012 ) 205404 . r. lo franco , a. darrigo , g. falci , g. compagno , and e. paladino , phys .scripta * t147 * ( 2012 ) 014019 c. benedetti , m. g. a. paris , and s. maniscalco , _ phys .a _ * 89 * ( 2014 ) 012114 . c. benedetti , f. buscemi , p. bordone and m. g. a. paris , _ physa _ * 87 * ( 2013 ) 052328 . m. mannone , r. lo franco and g. compagno , _ phys ._ * t153 * ( 2013 ) 014047 .e. paladino , m. galperin , y. , g. falci and b. l. altshuler , _ rev .* 86 * ( 2014 ) 361 . c. benedetti ,m. g. a. paris , _ int .j. quantum inform ._ * 12 * ( 2014 ) 1461004 .c. benedetti , m. g. a. paris , _ phys .a _ * 378 * ( 2014 ) 2495 .m. g. a. paris , _ physica a _ * 413 * ( 2014 ) 256 .j. bergli , y. m. galperin and b. l. altshuler , _ new j. phys . _ * 11 * ( 2009 ) 0250022 .o. astafiev , y. a. pashkin , y. nakamura , t. yamamoto and j. s. tsai , _ phys .* 93 * ( 2004 ) 267007. y. m. galperin , b. l. altshuler , j. bergli and d. v. shantsev , _ phys ._ , * 96 * ( 2006 ) 097009. b. abel and f. marquardt , _ phys .b _ * 78 * ( 2008 ) 201302(r ) k. rabenstein , v. a. sverdlov and d. v. averin , _ jetp lett . _* 79 * ( 2004 ) 646 .y. makhlin and a. shnirma,_phys ._ * 92 * ( 2004 ) 178301 .m. bana , s. kitajimac , f. shibata , _ phys .a _ * 349 * ( 2006 ) 415 .cywinski , r. m. lutchyn , c. p. nave , s. das sarma , _ phys .b _ * 77 * ( 2008 ) 174509 . j .-. hung , l. cywinski , x. hu , s. das sarma , phys . rev .b * 88 * ( 2013 ) 085314 . t. yu and j. h. eberly , _ opt .* 264 * ( 2006 ) 393 .t. yu and j. h. eberly , _ science _ * 323 * ( 2009 ) 598 .g. de chiara and g. m. palma , _ phys .* 91 * ( 2003 ) 090404 .a. flasconaro , b. spagnolo , _ phys .* 80 * ( 2009 ) 041110 .b. b. mandelbrot and j. w. van ness , _ siam rev . _ * 10 * ( 1968 ) 422 .r. r. puri , _ mathematical methods of quantum optics _( springer , berlin , 2001 ) .
|
we address the dynamics of a two - qubit system interacting with a classical dephasing environment driven by a gaussian stochastic process . upon introducing the concept of _ entanglement - preserving time _ , we compare the degrading effects of different environments , e.g. those described by ornstein - uhlenbeck or fractional noise . in particular , we consider pure bell states and mixtures of bell states and study the typical values of the entanglement - preserving time for both independent and common environments . we found that engineering environments towards fractional gaussian noise is useful to preserve entanglement as well as to improve its robustness against noise . we also address entanglement sudden death by studying the _ entanglement - survival time _ as a function of the initial negativity . we found that : i ) the survival time is bounded from below by an increasing function of the initial negativity , ii ) the survival time depends only slightly on the process used to describe the environment and exhibits typicality . overall , our results show that engineering the environment has only a slight influence over the entanglement - survival time , i.e. the occurence of entanglement sudden - death , while it represents a valuable resource to increase the entanglement - preserving time , i.e. to maintain entanglement closer to the initial level for a longer interaction time .
|
the field of complex networks has grown in recent years with applications across many scientific and engineering disciplines .network science has generally focused on how topological characteristics of a network affect its structure or performance . unlike purely topological networks , spatial networks like roadways , pipelines , and the power grid must take physical distance into consideration .topology offers indicators of the network state , but ignoring the spatial component may neglect a large part of how the network functions . for spatial networks in particular , links of different lengths may have different costs affecting their navigability and construction .percolation provides a theoretical framework to study how robust networks are to failure . in traditional bond percolation , each link in the network is independently removed with a constant probability , and it is asked whether or not the network became disconnected .theoretical studies of percolation generally assume very large networks that are locally treelike , often requiring millions of nodes before finite - size effects are negligible . yetmany physical networks are far from this size ; even large power grids may contain only a few thousand elements .there is a need to study the robustness of small spatial networks .microgrids are one example .microgrids are small - area ( 3050 km ) , standalone power grids that have been proposed as a new model for towns and residential neighborhoods in light of the increased penetration of renewable energy sources .creating small robust networks that are cost - effective will enable easier introduction of the microgrid philosophy to the residential community .due to their much smaller geographic extent , an entire microgrid can be severely affected by a single powerful storm , such as a blizzard or hurricane , something that is unlikely to happen to a single , continent - wide power grid .thus building on previous work , we consider how robustness will be affected by spatial and financial constraints .the goal is to create model networks that are both cost - effective , small in size , and at the same time to understand how robust these small networks are to failures .the rest of this paper is organized as follows . in sec .[ sec : infrastructuremodel ] a previous model of spatial networks is summarized .section [ sec : robustphysinf ] contains a brief summary of percolation on networks , and applies these predictions to the spatial networks . 
in sec .[ sec : modelinfrob ] we introduce and study a new model of percolation for spatial networks as an important tool for infrastructure robustness .section [ sec : discuss ] contains a discussion of these results and future work .in this work we consider a spatial network model introduced by gastner & newman , summarized as follows .a network consists of nodes represented as points distributed uniformly at random within the unit square .links are then placed between these nodes according to associated construction costs and travel distances .the construction cost is the total euclidean length of all edges in the network , , where is the euclidean distance between nodes and and is the set of undirected links in the network .this sum represents the capital outlay required to build and maintain the network .when building the network , the construction cost must be under a specified budget .meanwhile , the travel distance encapsulates how easy it is on average to navigate the network and serves as an idealized proxy for the functionality of the network .the degree to which spatial distance influences this functionality is tuned by a parameter via an `` effective '' distance tuning toward 1 represents networks where the cost of moving along a link is strongly * spatial * ( for example , a road network ) while choosing closer to 0 leads to more * non - spatial * networks ( for example , air transportation where the convenience of traveling a route depends more on the number of hops or legs than on the total spatial distance ) . to illustrate the effect of , we draw two example networks in fig .[ samecoords ] .finally , the travel distance is defined as the mean shortest effective path length between all pairs of nodes in the network .taken together , we seek to build networks that minimize travel distance while remaining under a fixed construction budget , i.e. given fixed node positions , links are added according to the constrained optimization problem where is the set of links in the shortest effective path between nodes and , according to the effective distances .this optimization was solved using simulated annealing ( see app . [ app : optnetdeets ] for details ) with a budget of 10 ( as in ) and a size of = 50 nodes .we focus on such a small number of nodes to better mimic realistic microgrid scales . in this work , to average results , 100 individual network realizations were constructed for each .influences network topology .the * non - spatial * case shows long - range hubs due to the lack of restriction on edge distance ; the * spatial * case lacks expensive long distance links leading to a more geometric graph .as examples , the non - spatial case may correspond to air travel where minimizing the number of flights a traveler takes on a journey is more important than minimizing the total distance flown , while the spatial case may represent a road network where the overall travel distance is more important than the number of roads taken to reach a destination .[ samecoords ] , title="fig : " ] an important quantity to understand in these networks is the distribution of euclidean link lengths .if edges were placed randomly between pairs of nodes , the lengths would follow the square line picking distribution with mean distance . instead, the optimized network construction makes long links costly and we observe ( fig . 
[ gammadistru ] ) that the probability distribution of euclidean link length after optimization is well explained by a gamma distribution , meaning the probability that a randomly chosen edge has length is with shape and scale parameters and , respectively .a gamma distribution is plausible for the distribution of link lengths because it consists of two terms , a power law and an exponential cutoff .this product contains the antagonism between the minimization and the constraint in eq . :since longer links are generally desirable for reducing the travel distance , a power law term with positive exponent is reasonable , while the exponential cutoff captures the need to keep links short to satisfy the construction budget and the fact that these nodes are bounded by the unit square .see fig .[ gammadistru ] .the network parameters were chosen under conditions that were general enough to apply to any small network , for instance a microgrid in a small residential neighborhood .the choices of 50 nodes and a budget of 10 were also made in line with previous studies of this network model to balance small network size with a budget that shows the competition between travel distance minimization and construction cost constraint . between nodes and are well explained for all by gamma distributions , i.e. ( * a * ) maximum likelihood estimates of for multiple .two distributions are * shifted vertically * for clarity .( * b * ) the gamma parameters as a function of .quadratic fits provide a guide for the eye .[ gammadistru],width=321 ]percolation theory on networks studies how networks fall apart as they are sampled .for example , in traditional bond percolation each link in the network is independently retained with probability ( equivalently , each link is deleted with probability ) .this process represents random errors in the network .the percolation threshold is the value of where the giant component , the connected component containing the majority of nodes , first appears .infinite systems exhibit a phase transition at , which becomes a critical point . in this work we focus on small * micronetworks* , a regime under - explored in percolation theory and far from the thermodynamic limit invoked by most analyses . in our finite graphs , we estimate as the value of that corresponds to the largest , where is the fraction of nodes in the largest connected component ( fig .[ s1s2 ] ) . in finite systems the second largest component peaks at the percolation threshold ; for the network is highly disconnected and all components are small , while for a giant component almost surely encompasses most nodes and is forced to be small .note that it is also common to measure the average component size excluding the giant component . and , respectively , as a function of link deletion probability .in finite systems the percolation threshold can be estimated from the maximum of ( dashed line ) .this example used optimized networks with .[ s1s2],title="fig:",width=302 ] for the case of uniformly random link removals ( bond percolation ) it was shown that the critical point occurs when is such that , where and are the first and second moments of the percolated graph s degree distribution , respectively .we denote this theoretical threshold as to distinguish this value from the estimated via . 
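both threshold estimates discussed here are straightforward to reproduce on a small graph : delete each link independently with probability q , record the sizes of the largest and second - largest components , and take the q at which the second - largest component peaks ; then compare with the criterion based on the first two moments of the percolated degree distribution . the sketch below does this with networkx , using a random geometric graph as a stand - in for the optimized networks ( whose construction requires the simulated annealing of sec . [ sec : infrastructuremodel ] ) ; the graph , the number of runs and the grid of q values are illustrative choices .

```python
# sketch: estimate the percolation threshold of a small network from the peak of
# the second-largest component and compare with the criterion <k^2>/<k> = 2
# applied to the percolated degree distribution. the random geometric graph is a
# stand-in for the optimized spatial networks; all parameters are illustrative.
import numpy as np
import networkx as nx

def component_sizes(G):
    sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True) + [0, 0]
    return sizes[0], sizes[1]

def percolate_once(G, q, rng):
    H = G.copy()
    H.remove_edges_from([e for e in G.edges() if rng.random() < q])
    return component_sizes(H)

def estimate_qc(G, qs, runs=200, seed=0):
    rng = np.random.default_rng(seed)
    s2 = [np.mean([percolate_once(G, q, rng)[1] for _ in range(runs)]) for q in qs]
    return qs[int(np.argmax(s2))]

G = nx.random_geometric_graph(50, radius=0.25, seed=2)
qs = np.linspace(0.0, 1.0, 41)
print("empirical q_c from the S2 peak:", estimate_qc(G, qs))

# theoretical estimate: retaining each edge with probability p = 1 - q gives
# <k>_p = p<k> and <k^2>_p = p^2<k^2> + p(1-p)<k>; setting <k^2>_p = 2<k>_p
# yields p_c = <k>/(<k^2> - <k>).
k = np.array([d for _, d in G.degree()], dtype=float)
p_c = k.mean() / ((k ** 2).mean() - k.mean())
print("theoretical q_c:", 1.0 - p_c)
```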
computing this theoretical prediction for the optimized networks ( sec . [ sec : infrastructuremodel ] ) we found between 0.66 and 0.71 for the full range of ( fig . [ theoryqcpred ] ) . it is important to note that the derivation of this condition for makes two related assumptions that are a poor fit for these optimized spatial networks . first , the theoretical model studies networks whose nodes are connected at random . this assumption does not hold for the constrained optimization ( eq . ) we study . second , this calculation neglects loops by assuming the network is very large and at least locally treelike . for the small , optimized networks we build this is certainly not the case . these predictions for the critical point do provide a useful baseline to compare to the empirical estimates of via . we computed this predicted critical point for each , finding values between 0.66 and 0.72 ; the quadratic fit in fig . [ theoryqcpred ] provides a guide for the eye . the work by gastner and newman showed the importance of incorporating spatial distances into the construction of an infrastructure network model . with physical infrastructure we argue that it is important to also consider spatial distances when estimating how robust a network is to random failures . for example , consider a series of power lines built in a rural area where trees are scattered at random . in a storm trees may fall and damage these lines , and one would expect , all else being equal , that one line built twice as long as another would have twice the chance of a tree falling on it and thus failing . motivated by this example , an intuitive model for how links fail would require an increasing chance of failure with length . the simplest model supposes that the failure of a link is directly proportional to its length , i.e. , that each unit length is equally likely to fail . with this in mind we now introduce the following generalization of bond percolation : each link independently fails with a probability proportional to its euclidean length .
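the length - dependent failure rule can be simulated in the same way . in the sketch below each link fails with probability min(1 , q d / d_mean ) , where d is the euclidean length of the link and d_mean the mean link length , so that failures are proportional to length while q plays the role of the usual deletion probability ; the normalization by the mean link length is an assumption made for illustration and is not claimed to be the exact definition used for the results quoted here .

```python
# sketch of length-dependent bond percolation: a link of euclidean length d
# fails with probability min(1, q * d / d_mean). only the proportionality to
# length is taken from the text; the normalization by the mean link length
# d_mean is an illustrative assumption.
import numpy as np
import networkx as nx

def spatial_percolate_once(G, q, rng):
    d = {e: np.hypot(*np.subtract(G.nodes[e[0]]["pos"], G.nodes[e[1]]["pos"]))
         for e in G.edges()}
    d_mean = np.mean(list(d.values()))
    H = G.copy()
    H.remove_edges_from([e for e in G.edges()
                         if rng.random() < min(1.0, q * d[e] / d_mean)])
    sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True) + [0, 0]
    return sizes[0], sizes[1]

G = nx.random_geometric_graph(50, radius=0.25, seed=2)   # nodes carry a "pos" attribute
rng = np.random.default_rng(1)
for q in (0.2, 0.5, 0.8):
    print(q, spatial_percolate_once(G, q, rng))
```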
|
power lines , roadways , pipelines and other physical infrastructure are critical to modern society . these structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links . traditionally , studies of network robustness have primarily considered the connectedness of large , random networks . yet for spatial infrastructure physical distances must also play a role in network robustness . understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids , small - area distributed power grids that are well suited to using renewable energy resources . we study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness . by introducing a percolation model where the failure of each link is proportional to its spatial length , we find that , when failures depend on spatial distances , networks are more fragile than expected . accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure .
|
precision predictions for observables in elementary particle physics require the calculation of the corresponding feynman diagrams , the number of which grows fast with the order in the coupling constant being considered . according to the relevant number of different ratios of lorentz invariants or scales involved one may group these observables into 0-scale , 1-scale , 2-scale etc processes . in renormalizable quantum field theories the radiative corrections to the couplings , masses and external fields are examples for 0-scale quantities .anomalous dimensions and hard scattering cross sections , as the wilson coefficients for light and heavy flavors ( for ) in deeply inelastic scattering , are single scale quantities , cf .also the sub - system cross sections for the drell - yan process and the cross section for hadronic higgs - boson production in the heavy mass limit for the top quark belong to this class .mellin moments for single scale quantities , (n ) = \int_0 ^ 1~dx~x^{n}~f(x)\end{aligned}\ ] ] are 0-scale quantities again for .here usually denotes a fraction of lorentz - invariants the support of which is or can be extended to ] can be found as the solution of a linear recurrence equation with polynomial coefficients .there is no general proof that the -loop contributions to a 1-scale observable have to obey such a recurrence . on the other hand, it is known that all single scale processes having been calculated so far do , cf .this is due to the fact that the corresponding observables are found as linear combinations of nested harmonic sums .the single harmonic sums obey exploiting holonomic closure properties one obtains higher order difference equations for polynomial expressions in terms of nested harmonic sums .if a suitably large number of moments (n) ] for nonnegative integers .finally , we activate the summation package and solve the recurrence in terms of generalized harmonic sums .in particular , using the underlying summation theory of -difference fields or exploiting the algebraic relations , a closed form for (n)$ ] in terms of an algebraically independent basis of harmonic sums can be computed .we emphasize that eq .( [ eqdi ] ) covers a much wider class in which more general recurrent quantities can represent the corresponding observables .in particular , our general recurrence solver for dalembertian solutions finds any solution that can be expressed in terms of indefinite nested sums and products . in even higher order or massive calculationsfurther functions may contribute , which could be only found in this way .the mellin - moments of the unpolarized 3loop splitting functions and wilson coefficients for deep inelastic scattering are more easily calculated than the complete expressions , cf . . in the present paperwe investigate whether the exact formulae up to the unpolarized 3loop anomalous dimensions and wilson coefficients can be found establishing and solving difference equations for the mellin moments of these quantities , without further assumptions .we consider the various color contributions to these quantities separately and try to find the complete result from a minimal number of moments . as inputwe apply the moments calculated from the exact solution .the paper is organized as follows . in section [ sec:2 ]we describe how the difference equations of the form are found by just using a finite number of starting points of . 
in section [ sec:3 ]the algorithms are outlined that can solve these recurrences in the setting of difference fields .they lead directly to the corresponding mathematical structures .these are nested harmonic sums in the present case .in course of the solution we compactify the results applying the algebraic relations to the harmonic sums . the results are discussed in section [ sec:4 ] .our method applies in the same way to all other single scale processes of similar complexity , cf .section [ sec:5 ] contains the conclusions . in the appendix we present a compactified form of the non - singlet 3loop anomalous dimensions , which is automatically provided in the formalism by .the corresponding expressions for the other anomalous dimensions and wilson coefficients to 3-loop order are presented in and codes attached .suppose we are given a finite array of rational numbers , which are the first terms of a certain infinite sequence , i.e. , , , etc .let us assume that satisfies a recurrence of type which we would like to deduce from the given numbers ( ) . in a strict sense , this is not possible without knowing how the sequence continues for .one thing we can do is to determine the recurrence equations satisfied by the data we are given .any recurrence for must certainly be among those . to find the recurrence equations of valid for the first terms ,the simplest way to proceed is by making an ansatz with undetermined coefficients .let us fix an order and a degree and consider the generic recurrence ( [ eq : rec ] ) , where the are indeterminates .for each specific choice , we can evaluate the ansatz , because we know all the values of in this range , and we obtain a system of homogeneous linear equations for unknowns . if , this system is under - determined and is thus guaranteed to have nontrivial solutions .all these solutions will be valid recurrences for for , but they will most typically fail to hold beyond .if , on the other hand , , then the system is overdetermined and nontrivial solutions are not to be expected .but at least recurrence equations valid for all , if there are any , must appear among the solutions .we therefore expect in this case that the solution set will precisely consist of the recurrences of of order and degree valid for all . as an example , let us consider the contribution to the gluon splitting function at leading order , .the first 20 terms , starting with , of the sequence are making an ansatz for a recurrence of order 3 with polynomial coefficients of degree 3 leads to an overdetermined homogeneous linear system with 16 unknowns and 17 equations . despite of being overdetermined and dense, this system has two linearly independent solutions . using bounds for the absolute value of determinants depending on the size of a matrix and the bit size of its coefficients, one can very roughly estimate the probability for this to happen `` by coincidence '' to about . and in fact, it did not happen by coincidence .the solutions to the system correspond to the two recurrence equations and which both are valid for all .if we had found that the linear system did not have a nontrivial solution , then we could have concluded that the sequence would _ definitely _( i.e. 
without any uncertainty ) not satisfy a recurrence of order 3 and degree 3 .it might then still have satisfied recurrences with larger order or degree , but more terms of the sequence had to be known for detecting those .the method of determining ( potential ) recurrence equations for sequences as just described is not new .it is known to the experimental mathematics community as _ automated guessing _ and is frequently applied in the study of combinatorial sequences .standard software packages for generating functions such as for or for provide functions which take as input a finite array of numbers , thought of as the first terms of some infinite sequence , and produce as output recurrence equations that are , with high probability , satisfied by the infinite sequence .these packages apply the method described above more or less literally , and this is perfectly sufficient for small examples .but if thousands of terms of a sequence are needed , there is no way to get the linear systems solved using rational number arithmetic .even worse , already for medium sized problems from our collection , the size of the linear system exceeds by far typical memory capacities of 1664 gb . for the big problem , it would require approximately 11 tb of memory to represent the corresponding linear system explicitly .it is thus evident that computations with rational numbers are not feasible .instead , we use arithmetic in finite fields together with chinese remaindering and rational reconstruction .modulo a word size prime , the size of the biggest systems reduces to a few gb , a size which easily fits on our architecture . and modulo a word size prime , such a system can be solved within no more than a few hours of computation time by. the modular results for several distinct primes can be combined by chinese remaindering to a modular result whose coefficients are correct modulo the product . if the bit size of this product exceeds twice the maximum bit size appearing in the rational solution , then the exact rational number coefficients can be recovered from the modular images by rational reconstruction .the number of primes needed ( and thus the overall runtime ) is therefore proportional to the bit size of the coefficients in the final output .the final output is a recurrence equation for .but the recurrence equation satisfied by a sequence is not unique : if a sequence satisfies a recurrence equation at all , then it satisfies a variety of linearly independent recurrence equations .the bit size of the rational number coefficients in these recurrence equations may vary dramatically . in order to minimize the number of primes needed for the computation of the rational numbers in the recurrence , it seems preferable to compute on a recurrence whose coefficients are as small as possible in terms of bit size . according to our experience, this recurrence happens to be the ( unique ) recurrence whose order is minimal among all the recurrence equations satisfied by .we have no explanation for this , but it seems to be a general phenomenon , as it can also be observed in certain combinatorial applications . 
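to make the guessing step concrete , the following is a minimal python sketch ( using sympy ) of the ansatz just described ; it is not the code used for the actual computations , which rely on modular arithmetic as explained below . since the moments of the splitting functions are not reproduced here , the sketch is run on a toy sequence , the harmonic numbers , whose minimal recurrence has order 2 and degree 1 .

```python
from sympy import Matrix, Rational

def guess_recurrence(data, r, d):
    """return a basis of all recurrences
       sum_{i=0..r} sum_{j=0..d} c[i,j] * n**j * a(n+i) = 0
       that hold for the given finite data a(0), a(1), ..."""
    rows = [[n**j * data[n + i] for i in range(r + 1) for j in range(d + 1)]
            for n in range(len(data) - r)]
    return Matrix(rows).nullspace()

# toy data: harmonic numbers a(n) = S_1(n+1) = 1 + 1/2 + ... + 1/(n+1)
data, s = [], Rational(0)
for k in range(1, 21):
    s += Rational(1, k)
    data.append(s)

# ansatz of order r=2 and degree d=1: 6 unknowns, 18 equations (overdetermined)
sols = guess_recurrence(data, r=2, d=1)
print(len(sols))   # expect a one-dimensional solution space
# the surviving vector encodes, up to normalisation,
#   (n+2)*a(n) - (2*n+5)*a(n+1) + (n+3)*a(n+2) = 0
print(sols[0].T)
```

if the nullspace is empty , the bounds and are enlarged and the system is set up again ; for realistic problem sizes the matrix is built and solved modulo a word size prime instead , as described next .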
also the number of unknowns for the linear system may vary dramatically among the possible recurrence equations for , and it seems preferable to compute on a recurrence where the number of unknowns is as small as possible .small linear systems are not only preferable because of efficiency , but also because the number of unknowns in the linear system determines the number of initial terms that have to be known a priori in order to detect the recurrence . according to our experience, the size of the linear system is minimized when the order and the degree are approximately balanced .unfortunately , it seems that the recurrence with _ minimal _ ( in terms of bit size ) rational number coefficients has the _ maximal _ number of unknowns in the corresponding linear system , and vice versa .but there is a way to combine the advantages of both at a reasonable computational cost .consider the two recurrence equations and from the example of the gluon gluon splitting function at leading order , , quoted above. a recurrence of smaller order can be obtained from these by multiplying by and by , and then subtracting the results .the choice of the multipliers is such that the coefficient of in the difference cancels : we obtain the calculation just performed can be recognized as the first step in a difference operator version of the euclidean algorithm .applied to two recurrence equations satisfied by a sequence , this algorithm yields their `` greatest common ( right ) divisor '' , which is , with high probability , the minimal order recurrence satisfied by . in our example , the algorithm terminates in the next step , and indeed the sequence of does not satisfy a recurrence of order less than two .note that the linear system for finding the second order recurrence directly would have involved unknowns instead of the 16 unknowns we needed for finding the third order recurrences . for the big problem , a direct computation would require 33804 unknowns instead of the 5022 we actually used .we combine the advantage of a small linear system with the advantage of small coefficients in the output as follows .we first compute for several word size primes the solutions of a small linear system , but then instead of applying rational reconstruction to those , we compute , for each prime independently , their greatest common right divisor modulo this prime .we then apply rational reconstruction to recover the rational number coefficients of those . in summary, we used the following procedure for finding the recurrence equations . 1 . [ step:1 ] choose a word size prime .[ step:2 ] choose some bounds and and make an ansatz for a recurrence of order and degree .the linear system is constructed and solved modulo only .[ step:3 ] if there are no solutions , repeat step [ step:2 ] with increased bounds and .[ step:4 ] if there are solutions modulo , compute their greatest common right divisor modulo by the euclidean algorithm for difference operators .[ step:5 ] repeat steps [ step:1][step:4 ] until chinese remaindering and rational reconstruction applied to the greatest common right divisors for the various primes yields a recurrence that matches the given data .[ step:6 ] return the reconstructed recurrence as the final result . for the big problem , most of the computation time ( about 53% )was spent in step [ step:4 ] . 
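the two number - theoretic ingredients of steps [ step:1 ] and [ step:5 ] , chinese remaindering and rational reconstruction , can be illustrated by the following rough python sketch ; the function names and the toy coefficient are ours and do not correspond to the actual implementation .

```python
from math import isqrt, gcd
from fractions import Fraction

def crt(res1, m1, res2, m2):
    """combine x = res1 (mod m1) and x = res2 (mod m2) for coprime moduli"""
    k = (res2 - res1) * pow(m1, -1, m2) % m2
    return (res1 + m1 * k) % (m1 * m2), m1 * m2

def rational_reconstruct(a, m):
    """recover p/q with |p|, |q| <= sqrt(m/2) and p = q*a (mod m), if it exists,
       via the extended euclidean algorithm on (m, a)"""
    bound = isqrt(m // 2)
    r0, r1, t0, t1 = m, a % m, 0, 1
    while r1 > bound:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    if t1 == 0 or abs(t1) > bound or gcd(r1, abs(t1)) != 1:
        return None
    return Fraction(r1, t1)

# example: the unknown coefficient is -22/7; we only see its images mod two primes
target = Fraction(-22, 7)
p1, p2 = 10**9 + 7, 10**9 + 9
img1 = target.numerator * pow(target.denominator, -1, p1) % p1
img2 = target.numerator * pow(target.denominator, -1, p2) % p2
x, m = crt(img1, p1, img2, p2)
print(rational_reconstruct(x, m))   # recovers -22/7
```

the number of primes needed in practice is dictated , as stated above , by the bit size of the rational coefficients in the final recurrence .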
solving the modular linear systems consumed about28% of the time , and chinese remaindering and rational reconstruction took about 18% of the time .the memory bottleneck is in step [ step:2 ] where the linear system is constructed .the memory requirements for the other steps , if implemented well , are negligible . for problems that are even bigger than those we considered , further improvements to the procedureare conceivable .first , there are asymptotically fast special purpose algorithms for step [ step:2 ] available .these algorithms outperform the naive linear system approach we are taking for problem sizes where fast polynomial multiplication algorithms outperform classical algorithms .it is likely that their use would have already been beneficial for some of our problems .second , a gain in efficiency might result from running the procedure on a different platform .we have done all our computations within 6 , but we expect that in particular step [ step:4 ] might considerably benefit from a reimplementation in a computer algebra system providing high - performance polynomial arithmetic .s modular arithmetic , on the other hand , appears to be quite competitive .third , it might be worthwhile to run parts of the procedure in parallel .in particular , computations for distinct primes are completely independent from each other and can be done on different processors without any communication overhead .observe that these steps dominate the runtime .after having obtained difference equations of high order and degree we will now discuss general , efficient algorithms by which these equations can be solved . _ given _ a recurrence relation of order , _ find _ all its solutions that can be expressed in terms of indefinite nested sums and products .such solutions are also called dalembertian solutions , they form a subclass of liouvillian solutions .note that such solutions cover as special cases , e.g. , harmonic sums or generalized nested harmonic sums .the solution to this problem consists of two parts . 1 .first , compute all dalembertian solutions by factoring the recurrence as much as possible into linear right factors .then each linear factor contributes to one extra solution . to be more precise , the factor yields a nested sum expression of depth .second , simplify these nested sum solutions to closed form expressions , e.g. , in terms of harmonic sums , that can be processed further in practical problem solving . in general, the package can solve these problems in the setting of -difference fields .this means that the coefficients and the inhomogeneous part of can be given as polynomial expressions in terms of indefinite nested sums and products . for simplicity ,we restrict ourself to the situation that the given coefficients are polynomials in and that the inhomogeneous part is zero . in other words , we assume that we are given a recurrence of the form or that is produced , e.g. , by the method described in the previous section .subsequently , we present algorithms that find all dalembertian solutions of .equivalently , we can say that we look for all dalembertian sequences which are annihilated by the linear operator which is understood to act on a sequence via we start as follows ._ step 1 : finding a product solution ._ first , we look for a solution of which is of the form for some rational function in . 
in this taskcan be carried out by executing a generalized version of algorithm that works in general -difference fields ; for an alternative algorithm to find such hypergeometric terms we refer to .if there does not exist such a product solution , then there is no dalembertian solution at all ; see , e.g. ( * ? ? ?* theorem 4.5.5 ) . in this case, we just stop .otherwise , we look for additional solutions as follows ._ step 2 : splitting off a linear right factor ._ by dividing the operator from the right with the operator we arrive at an operator of order such that i.e. , is a linear right factor of ._ step 3 : recursion ._ now we continue by recursion and look for all dalembertian solutions for the operator with order .note that after at most steps we end up at a recurrence of order whose dalembertian solution can be read off immediately ._ step 4 : combining the solutions ._ if we do not find any dalembertian solution for , we just return the solution for .+ otherwise , let with be the solutions of that we obtained after the recursion step . to this end , for define for some properly chosen ( i.e. , is nonzero for all with ) .then the final output of our algorithm is the following remarks are in place . by constructionall the elements from are solutions of : for each , and hence but even more holds .the derived solutions are linearly independent . in particular, any solution of in terms of indefinite nested sums and products can be expressed as a linear combination of ; see ( * ? ? ?* theorem 5.1 ) or ( * ? ? ?* proposition 4.5.2 ) . summarizing , with the algorithm sketched above we can produce all dalembertian solutions of , i.e. , all solutions that are expressible in terms of indefinite nested sums and products .we emphasize that the expensive part of the sketched method is the computation of the product solutions .the following improvements were crucial in order to solve the recurrences under consideration ._ improvement 1 ._ if one finds several product solutions , say , one can produce immediately a recurrence like in , but with order instead of order .moreover , given all dalembertian solutions of this operator , one gets all the solutions of the recurrence without any further computations ; see ( * ? ? 
?* theorem 4.5.6 ) ._ improvement 2 ._ for the problems under consideration , it turns out that it suffices to search for product solutions that can be written in the form for polynomials and .therefore , we used optimized solvers of which generalize the algorithm presented in .in addition , arithmetic in finite fields is exploited in order to determine the solutions effectively ._ improvement 3 ._ in our applications , rather big factors from and cancel in the summand of ; in particular , the usually irreducible factor from ( substituted with ) cancels .hence it pays off to compute directly the summand expression : namely , instead of the operator we continue with the operator and look for all its dalembertian solutions .then by construction , the solutions of can be given directly in the form as illustrative example we solve the difference equation for the -term of the unpolarized 3-loop splitting function , using the methods from the previous section , we generate the recurrence relation with given this recurrence , we produce its dalembertian solutions as follows .first , computes a rational solution , namely now we can divide from the right by the operator then the resulting recurrence of the operator is with next , we proceed recursively and can compute the rational solution of .thus we divide by the factor which leads to the first order recurrence with here we can read off directly the solution going back , we obtain besides the solution of . hence by we obtain , besides , the solutions for .since all three solutions , and are linearly independent over , say , the complex numbers , any solution of can be described as a linear combination for .the initial values imply for our concrete problems all the recurrences could be factored completely .equivalently , for a recurrence of order we found linearly independent solutions where the solution with can be given in the form where for the and are polynomials and . [ exp:7order ] for the -term of the 3-loop non - singlet splitting function we found a recurrence of order 7 which fills around five pages .the 7 linearly independent solutions can be computed within 10 seconds ; the largest solution fills around three pages and has the form where the irreducible polynomials have the respective degrees 4 , 8 , 16 , 28 , 63 , 69 , and the denominators are of the form [ exp:35order ] the solution of the recurrence for the -contribution to the unpolarized 3-loop wilson coefficient for deeply inelastic scattering , , constituted the hardest problem to solve .we obtained a recurrence of order 35 .then our solver ran 25 hours and used 3 gb of memory to derive the 35 linearly independent solutions . in total, we needed only 478 instead of summation quantifiers in order to represent those solutions .this is possible due to the improvement 1 . for each of the summands around 20 mb of memorywere used . in particular , in the summandsthe denominators have irreducible factors up to degree 1000 ; the integer coefficients of the polynomials were up to 700 decimal digits long .we consider the following problem : _ given _ indefinite nested sum and product expressions , e.g. , expressions of the form , _ find _ an alternative sum representation with the following properties : 1 .all the involved sums are algebraically independent with each other .the nested depth of the sum expressions is minimal .3 . 
in the summandsthe degree of the denominators is minimal .the sums should be tuned in such a way that algorithms can perform this simplification as efficiently as possible . in principal , this problem can be solved with karr s summation algorithm based on -difference fields , if one knows explicitly the sum elements in which , e.g. , the expression should be expressed .for small examples such optimal sums with properties 13 from above might be guessed . in particular ,if one has additional knowledge about the objects under consideration , a good sum representation might be known a priory .but if such additional knowledge is not available , karr s algorithm is not applicable . in order to overcome this restriction ,the fourth named author has refined karr s-theory for symbolic summation . as a consequence, we can determine completely automatically such sum representations with the properties from above ; see .with we find the depth - optimal representation of where the sums are given in .we can read off the harmonic sum representation the sum expression for containing in particular the 7-nested sum can be simplified with to the depth - optimal representation finally , we use j. ablinger s package , which transforms this expression to the harmonic sum notation : we emphasize that the harmonic sums in this expression are algebraically independent . the algebraic independence could be accomplished with the package ; out of convenience and efficiency we used the package which contains among various other features the harmonic sum relations of .the derived sum expression of from example [ exp:35order ] contains sums of the form with depth . in around four days and 20 hoursthis expression could be simplified to an expression in terms of 65 sums that satisfy the properties 13 from above . among themthere are 47 sums with depth two ; typical examples are only one sum of nested depth three has been used , namely we emphasize that these sums are constructed in such a way that the difference field algorithms work most efficiently : the less nested the sums are , the more efficient our algorithms work .e.g. , if we switch to harmonic sum notation with ablinger s package , the first sum in can be rewritten as the involved sums have nested depths up to five . with such representations the algorithms in work much slower , or might even fail for our specific input due to time and memory limitations .in the following we apply the method described in the previous section to unfold all the unpolarized qcd anomalous dimensions and wilson coefficients to 3-loop order from a series of mellin moments .this sequence is calculated using the relations given in for the different quantities per color factor and factors given by -values .we will need rather high mellin moments .the corresponding harmonic sums can not be calculated by summer directly , but have to be evaluated recursively , we used a code for this .the highest moment to be calculated is for the to the 3-loop wilson coefficient .its recursive computation requires roughly 3 gb of memory and 270 min computational time on a 2 ghz processor .it is given by a fraction with 13888 numerator and 13881 denominator digits .the set of moments has a size of 69 mb .the determination of most of the other inputs sets requires far less resources . 
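the kind of algebraic relation that is exploited in this reduction can be made explicit with a small python sketch ; it evaluates nested harmonic sums by their recursive definition and checks one instance of the quasi - shuffle relation . the routine is only meant as an illustration ; the actual reductions are performed with the packages mentioned in the text .

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def S(indices, N):
    """nested harmonic sum S_{a1,...,ak}(N) with signed integer indices"""
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    total = Fraction(0)
    for i in range(1, N + 1):
        sign = -1 if (a < 0 and i % 2) else 1
        total += Fraction(sign, i ** abs(a)) * S(rest, i)
    return total

# quasi-shuffle (algebra) relation:  S_a * S_b = S_{a,b} + S_{b,a} - S_{a+b}
N = 30
lhs = S((2,), N) * S((3,), N)
rhs = S((2, 3), N) + S((3, 2), N) - S((5,), N)
print(lhs == rhs)   # True: relations of this type remove algebraically dependent sums
```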
in tables13 we summarize the run parameters for the individual color- and to the splitting functions and in tables 48 to the wilson coefficients in unpolarized deeply inelastic scattering up to 3loop order .we specify the number of moments needed on input and the order , degree , and length of the recurrence derived . for the solutionwe compare the number of harmonic sums in refs . and in the present calculation .the computation times needed to establish and to solve the recurrences are also given .to give some example for the rise of complexity for different orders in the coupling constant , we compare the contributions to . in case of the anomalous dimensionsthe largest amount of moments needed is for , for , and for .the order and degree of the recurrences found are exactly , resp .nearly , the same for . for the non - singlet anomalous dimensions and the singlet anomalous dimensions and order and degree of the difference equationare larger than in case of . the total computation time needed for all anomalous dimensions amount to less than 18 h.the largest number of harmonic sums contributing is 26 .there are significant reductions in their number comparing to the representation given in the attachment to .it amounts to a factor of two or larger , except in case of the very small recurrences . in the non - singlet case the number reduces from 68 to 26 .a large reduction is obtained for from 130 to 21 harmonic sums . for the wilsoncoefficients and four weeks of computation time is needed in each case requiring gb on a 2 ghz processor .the number of necessary harmonic sums is 60 , reducing from 290 in for .this is the number of all harmonic sums not containing the index up to weight after algebraic reduction , cf . .if one compares the number of harmonic sums obtained in the present calculation after the algebraic reduction yields groups characterized by clusters of 58 - 60 , 26 - 29 , 11 - 15 and cases with a number of sums below 10 , up to very few exceptions .as this pattern is the same for quite different quantities , it may be related rather to the topology , but the color- or field - structure of the respective diagrams .this pattern is not seen counting the harmonic sums in the representation of ref . . in case of the smaller recurrencesthe time needed for their derivation is usually shorter than that for its solution .conversely , for the larger recurrences the time required to establish them and the solution time behave roughly like 4(3):1 .the total computation time amounted to 110.3 cpu days . 
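to get a feeling for why exact rational arithmetic at such high moments is expensive , the following toy python fragment iterates the order-2 recurrence for the harmonic numbers ( the sequence used in the guessing sketch above ) forward to n=5000 ; already for this very simple sum the numerator grows to more than a thousand decimal digits , which puts the 13888 - digit numerator quoted above into perspective .

```python
from fractions import Fraction

# forward recursion  (n+1) a(n+1) = (2n+1) a(n) - n a(n-1)  for a(n) = S_1(n);
# only two initial values are needed.
a_prev, a_curr = Fraction(1), Fraction(3, 2)          # a(1), a(2)
for n in range(2, 5000):
    a_prev, a_curr = a_curr, ((2 * n + 1) * a_curr - n * a_prev) / (n + 1)

# number of decimal digits of the numerator of S_1(5000)
print(len(str(a_curr.numerator)))
```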
concerning the size of the different problems to be dealt with a naive fivefold parallelization was possible .here we did not yet consider parallelization w.r.t .the number of primes chosen , which would significantly reduce the computational time , of the term of , with , discussed above and for other comparably large contributions .in course of solving the recurrences we reduce the harmonic sums appearing algebraically , , and can express all results in terms of the following harmonics sums : the 3-loop wilson coefficients require the complete set of possible functions up to .this representation can be further reduced using the structural relations to : in we applied a slightly different basis referring to instead of and to instead of , which is algebraically equivalent .these 38 functions can be represented by 35 basic mellin transforms .the ab - initio calculation of moments for the quantities considered in the present paper can be performed by codes like and available for physics calculations .both the computational time and memory requests rise drastically going to higher values of . in case of both parametersincrease by a factor of enlarging .comparable , but slightly larger factors are obtained for . in the well known leading order case , enough moments may be provided for our procedure .already for some color projections of the next - to - leading order corrections , this is no longer the case , , since around 150 initial values are needed . for the 3-loop anomalous dimensions and wilson coefficients be reached with computation times of the order of 0.51 cpu year , cf .the codes still may be improved .however , the power growth going to higher moments will basically remain due to the algorithms used . the method presented in this papercan therefore not be applied to whole color - factor contributions for the anomalous dimensions and wilson coefficients at the 3loop level .they may , however , be useful in solving medium - size problems . in view of constructing general methods suitable to evaluate single scale quantities , methods to evaluate the fixed moments for these quantities at far lower expenses have to be developed . to illustrate the results of the present calculation ,the non - singlet anomalous dimensions to are given as an example in the appendix .the relations for all unpolarized anomalous dimensions and wilson coefficients , separated according to the corresponding color- and -value terms , are attached to this paper in - and files .the -codes provide a check of our relations with the moments calculated in ref . 
. we established a general algorithm to calculate the exact expression for single scale quantities from a finite , suitably large number of moments , which are zero - scale quantities . the latter are much more easily calculable than single scale quantities . we applied the method to the anomalous dimensions and wilson coefficients up to 3-loop order . hereby we compactified their representation by exploiting all algebraic relations between the harmonic sums . the 3-loop wilson coefficients require the whole set of basic harmonic sums in the sub - algebra spanned by the index set to without . a further compactification can be obtained using the structural relations between the harmonic sums . after algebraic reduction , the number of contributing harmonic sums clusters in several classes that are mainly determined by the topology of the graphs and are widely independent of the color- and field structure of the respective contributions . the cpu time for the whole problem amounted to about four months on 2 ghz processors ; up to 10 gb of memory were needed . the problem can be naively parallelized fivefold . the real computational time needed to establish the recurrences can be shortened further by running the chinese remaindering for the different primes in parallel . solving 3-loop problems for whole color - factor contributions is not possible at present , since the number of required moments is too large for the methods available . methods to evaluate the fixed moments of these quantities to high order at far lower expense still have to be developed . we established and solved the recurrences for all color resp . -projections at once , which forms a rather voluminous problem . yet we showed that rather large difference equations [ order 35 ; degree 1000 ] , which occur for the most advanced problems in quantum field theory , can be established and solved reliably , fast , and unconditionally . * acknowledgments . * we would like to thank j.
vermaseren for discussions .this work was supported in part by dfg sonderforschungsbereich transregio 9 , computergesttzte theoretische teilchenphysik , projects p19462-n18 , p20162-n18 , and p20347-n18 of the austrian fwf , and studienstiftung des deutschen volkes ..run parameters for the unfolding of the non - singlet anomalous dimensions [ cols= " < , > , > , > , > , > , > , > " , ]the non - singlet anomalous dimensions and are given by \\p_{qq}^{1,-}(n ) & = & c_f^2 \biggl[-\frac{3 n^6 + 9 n^5 + 9 n^4 - 5 n^3 - 24 n^2 - 32 n-24}{2 n^3 ( n+1)^3}-16 s_{-3 } \nonumber\\ & & + s_{-2 } \left(\frac{16}{n ( n+1 ) } -32 s_1\right)+s_1 \left(\frac{8 ( 2 n+1)}{n^2 ( n+1)^2}-16 s_2 \right)+\frac{4 \left(3 n^2 + 3 n+2\right)}{n ( n+1 ) } s_2 \nonumber\\ & & -16 s_3 + 32 s_{-2,1}+\frac{16 ( -1)^n}{(n+1)^3 } \biggr ] \nonumber\\ & + & c_a c_f \biggl[-\frac{51 n^5 + 102 n^4 + 655 n^3 + 484 n^2 + 12 n+144}{18 n^3( n+1)^2 } + 8 s_{-3}+\frac{268}{9 } s_1 \nonumber\\ & & + s_{-2 } \left(16 s_1-\frac{8}{n ( n+1 ) } \right)-\frac{44}{3 } s_2 + 8 s_3 - 16 s_{-2,1}-\frac{8 ( -1)^n}{(n+1)^3}\biggr ] \nonumber\\ & + & c_f n_f \biggl[\frac{3 n^4 + 6 n^3 + 47 n^2 + 20 n-12}{9 n^2 ( n+1)^2 } -\frac{40}{9 } s_1+\frac{8}{3 } s_2 \biggr ] \\p_{qq}^{1,+}(n ) & = & c_f^2 \biggl[-\frac{3 n^6 + 9 n^5 + 9 n^4 + 59 n^3 + 40 n^2 + 32 n+8}{2 n^3 ( n+1)^3 } -16 s_{-3 } \nonumber\\ & & + s_{-2 } \left(\frac{16}{n ( n+1)}-32 s_1\right ) + s_1 \left(\frac{8 ( 2 n+1 ) } { n^2 ( n+1)^2}-16 s_2\right ) \nonumber\\ & & + \frac{4 \left(3 n^2 + 3 n+2\right)}{n ( n+1)}s_2 - 16 s_3 + 32 s_{-2,1}+\frac{16 ( -1)^n}{(n+1)^3 } \biggr ] \nonumber\\ & + & c_a c_f \biggl[-\frac{51 n^5 + 153 n^4 + 757 n^3 + 851 n^2 + 208 n-132}{18 n^2 ( n+1)^3}+8 s_{-3 } + \frac{268}{9 } s_1 \nonumber\\ & & + s_{-2 } \left(16 s_1-\frac{8}{n ( n+1)}\right)-\frac{44}{3 } s_2 + 8 s_3 - 16 s_{-2,1}-\frac{8 ( -1)^n}{(n+1)^3 } \biggr]\nonumber\\ & + & c_f n_f \biggl [ \frac{3 n^4 + 6 n^3 + 47 n^2 + 20 n-12}{9 n^2 ( n+1)^2}-\frac{40}{9 } s_1 + \frac{8}{3 } s_2 \biggr ] \\p_{qq}^{2,-}(n ) & = & c_f^3 \biggl\{\left(\frac{64}{n ( n+1)}-128 s_1\right ) s_{-2}^2 + \biggl(\frac{16 \left(3 n^6 + 9 n^5 + 9 n^4 + 17 n^3 + 6 n^2 + 8 n+2\right ) } { n^3 ( n+1)^3 } \nonumber\\ & + & s_1 \left(\frac{64 \left(3 n^2-n+1\right)}{n^2 ( n+1)^2}-1408 s_2\right)-\frac{64 \left(3 n^2 + 3 n -11\right ) s_2}{n ( n+1)}+1536 s_3 + 128 s_{-2,1 } \nonumber\\ & - & 2304 s_{2,1}\biggr ) s_{-2}-\frac{16 \left(3 n^2 + 3 n+2\right ) s_2 ^ 2}{n ( n+1 ) } -\frac{p_1(n ) } { 2 n^5 ( n+1)^5}-576 s_{-5 } \nonumber \nonumber\end{aligned}\ ] ] \biggr\ } \nonumber\\ & + & c_a c_f^2 \biggl\{\left(256 s_1-\frac{16 \left(3 n^2 + 3 n+8\right)}{n ( n+1)}\right ) s_{-2}^2 \nonumber\\ & + & \biggl[-\frac{8 \left(81 n^6 + 243 n^5 - 229 n^4 - 389 n^3 - 130 n^2 + 228 n+72\right)}{9 n^3 ( n+1)^3}+\frac{32 \left(31 n^2 + 31 n-81\right ) s_2}{3 n ( n+1 ) } \nonumber\\ & + & s_1 \left(1728 s_2-\frac{32 \left(134 n^4 + 268 n^3 + 215 n^2 + 45 n+54\right)}{9 n^2 ( n+1)^2}\right)-1792 s_3 - 192 s_{-2,1}+2688 s_{2,1}\biggr ] s_{-2 } \nonumber\\ & + & \frac{176}{3 } s_2 ^ 2-\frac{p_2(n)}{36 n^5 ( n+1)^5}+672 s_{-5}+s_{-4 } \left(\frac{8 \left(97 n^2 + 97 n-210\right)}{3 n ( n+1)}+1120 s_1\right ) \nonumber\\ & + & s_{-3 } \biggl(-576 s_1 ^ 2+\frac{16 \left(31 n^2 + 31 n+108\right ) s_1}{3 n ( n+1)}-\frac{8 \left(268 n^4 + 536 n^3 + 811 n^2 + 507 n+450\right)}{9 n^2 ( n+1)^2 } \nonumber\\ & + & 480 s_{-2}+2656 s_2\biggr)+(-1)^n \biggl(\frac{8 \left(382 n^2 + 41 n-161\right)}{9 ( n+1)^5}-\frac{256 s_{-2}}{(n+1)^3}-\frac{16 ( 
127 n+121 ) s_1}{3 ( n+1)^4 } \nonumber\\ & + & \frac{32 s_2}{(n+1)^3}\biggr)-\frac{8 \left(385 n^4 + 770 n^3 + 427 n^2 + 6 n-126\right ) s_3}{9 n^2 ( n+1)^2}+\frac{8 \left(151 n^2 + 151 n-30\right ) s_4}{3 n ( n+1 ) } \nonumber\\ & + & 384 s_5 + 864 s_{-4,1}-\frac{960 s_{-3,1}}{n ( n+1)}-1344 s_{-3,2}\nonumber\end{aligned}\ ] ] \nonumber\\ & + & c_a^2 c_f \zeta_3 \biggl[-\frac{12 \left(5 n^4 + 10 n^3 + 9 n^2 - 4 n-4\right)}{n^2 ( n+1)^2}-96 s_{-2 } \biggr ] \nonumber\\ & + & c_f n_f^2 \biggl [ \frac{51 n^6 + 153 n^5 + 57 n^4 + 35 n^3 + 96 n^2 + 16 n-24}{27 n^3 ( n+1)^3}-\frac{16}{27 } s_1-\frac{80}{27 } s_2+\frac{16}{9 } s_3\biggr ] \nonumber\\ & + & c_f^2 n_f \bigg[-\frac{32}{3 } s_2 ^ 2 -\frac{4 \left(15 n^4 + 30 n^3 + 79 n^2 + 16 n-24\right ) s_2}{9 n^2 ( n+1)^2 } \nonumber\\ & + & \frac{207 n^8 + 828 n^7 + 1443 n^6 + 1123 n^5 - 38 n^4 - 779 n^3 - 632 n^2 + 120}{9 n^4 ( n+1)^4}-\frac{128}{3 } s_{-4 } \nonumber\\ & + & s_{-3 } \left(\frac{32\left(10 n^2 + 10 n+3\right)}{9 n ( n+1)}-\frac{64}{3 } s_1\right)+(-1)^n \left(\frac{64 s_1}{3 ( n+1)^3}-\frac{128 ( 4 n+1)}{9 ( n+1)^4}\right ) \nonumber\\ & + & s_{-2 } \left(-\frac{32 \left(16 n^2 + 10 n-3\right)}{9 n^2 ( n+1)^2}+\frac{640}{9 } s_1-\frac{128}{3 } s_2\right)+\frac{16 \left(29 n^2 + 29 n+12\right ) s_3}{9 n ( n+1)}-\frac{128}{3 } s_4 \nonumber\\ & + & s_1 \left(-\frac{2 \left(165 n^5 + 330 n^4 + 165 n^3 + 160 n^2 - 16 n-96\right)}{9 n^3 ( n+1)^2}+\frac{320}{9 } s_2-\frac{128}{3 } s_3-\frac{128}{3 } s_{-2,1}\right ) \nonumber\\ & - & \frac{64 \left(10 n^2 + 10 n-3\right ) s_{-2,1}}{9 n ( n+1)}+\frac{64}{3 } s_{2,-2}+\frac{64}{3 } s_{3,1}+\frac{256}{3 } s_{-2,1,1 } \biggr ] \nonumber\\ & + & \left(c_f^2 - c_f c_a \right ) n_f \zeta_3 \biggl[32 s_1-\frac{8 \left(3 n^2 + 3 n+2\right)}{n ( n+1 ) } \biggr ] \nonumber\\ & + & c_a c_f n_f \biggl [ -\frac{2 \left(270 \ n^7 + 810 n^6 - 463 n^5 - 1392 n^4 - 211 n^3 - 206 n^2 - 156 n+144\right)}{27 n^4 ( n+1)^3 } \nonumber\\ & + & \frac{64}{3 } s_{-4}+s_{-3 } \left(\frac{32}{3 } s_1-\frac{16 \left(10 n^2 + 10 n+3\right)}{9 n ( n+1)}\right)+(-1)^n \left(\frac{64 ( 4 n+1)}{9 ( n+1)^4}-\frac{32 s_1}{3 ( n+1)^3}\right ) \nonumber\\ & + & \frac{1336}{27 } s_2+s_{-2 } \left(\frac{16 \left(16 n^2 + 10 n-3\right)}{9 n^2 ( n+1)^2}-\frac{320}{9 } s_1+\frac{64}{3 }s_2\right)-\frac{8 \left(14 n^2 + 14 n+3\right )s_3}{3 n ( n+1)}+\frac{80}{3 } s_4 \nonumber\\ & + & \frac{32 \left(10 n^2 + 10 n-3\right ) s_{-2,1}}{9 n ( n+1)}+s_1 \biggl(-\frac{4 \left(209 n^6 + 627 n^5 + 627 n^4 + 281 n^3 + 36 n^2 + 36 n+18\right)}{27 n^3 ( n+1)^3 } \nonumber\\ & + & 16 s_3+\frac{64}{3 } s_{-2,1}\biggr)-\frac{32}{3 } s_{2,-2}-\frac{64}{3 } s_{3,1}-\frac{128}{3 } s_{-2,1,1 } \biggr ] \nonumber\\ \\ p_{qq}^{2,+ } & = & c_f^3 \biggl [ \left(\frac{64}{n ( n+1)}-128 s_1\right ) s_{-2}^2+\biggl(\frac{16 \left(3 n^6 + 9 n^5 + 9 n^4+n^3 + 2 n^2 + 4 n+2\right)}{n^3 ( n+1)^3 } \nonumber\\ & + & s_1 \left(-\frac{64 \left(3 n^2 + 7 n+5\right)}{n^2 ( n+1)^2}-1408 s_2\right)-\frac{64 \left(3 n^2 + 3 n-11\right ) s_2}{n ( n+1)}+1536 s_3 + 128 s_{-2,1 } \nonumber\end{aligned}\ ] ] \nonumber\\ & + & c_f^3 \zeta_3 \biggl [ -\frac{24 \left(5 n^4 + 10 n^3+n^2 - 4 n-4\right)}{n^2 ( n+1)^2}-192 s_{-2 } \biggr ] \nonumber\\ & + & c_a c_f^2 \biggl\{\left(256 s_1-\frac{16 \left(3 n^2 + 3 n+8\right)}{n ( n+1)}\right ) s_{-2}^2 \nonumber\\ & + & \biggl(-\frac{8 \left(81 n^5 + 243 n^4 - 337 n^3 - 1181 n^2 - 526 n-60\right)}{9 n^2 ( n+1)^3}+\frac{32 \left(31 n^2 + 31 n-81\right ) s_2}{3 n ( n+1 ) } \nonumber\\ & + & s_1 \left(1728 s_2-\frac{32 
\left(134 n^4 + 268 n^3 + 89 n^2 - 81 n-72\right)}{9 n^2 ( n+1)^2}\right)-1792 s_3 - 192 s_{-2,1}+2688 s_{2,1}\biggr ) s_{-2 } \nonumber\\ & + & \frac{176}{3 } s_2 ^ 2-\frac{p_5(n)}{36 n^4 ( n+1)^4}+672 s_{-5}+s_{-4 } \left(\frac{8 \left(97 n^2 + 97 n-210\right)}{3 n ( n+1)}+1120 s_1\right ) \nonumber\\ & + & s_{-3 } \biggl(-576 s_1 ^ 2+\frac{16 \left(31 n^2 + 31 n+108\right ) s_1}{3 n ( n+1)}-\frac{8 \left(268 n^4 + 536 n^3 + 487 n^2 + 183 n+126\right)}{9 n^2 ( n+1)^2 } \nonumber\end{aligned}\ ] ] \nonumber\\ & + & c_a c_f^2 \zeta_3 \biggl[\frac{36 \left(5 n^4 + 10 n^3+n^2 - 4 n-4\right)}{n^2 ( n+1)^2}+288 s_{-2 } \biggr ] \nonumber\\ & + & c_a^2 c_f \left(\frac{24 \left(n^2+n+2\right)}{n ( n+1)}-96s_1\right)s_{-2}^2+\biggl(\frac{8 \left(27 n^6 + 81 n^5 - 209 n^4 - 595 n^3 - 272 n^2 - 48 n-9\right)}{9 n^3 ( n+1)^3}\nonumber\\ & + & s_1 \left(\frac{16 \left(134 n^4 + 268 n^3 + 116 n^2 - 18 n-27\right)}{9 n^2 ( n+1)^2}-512 s_2\right ) -\frac{32 \left(11 n^2 + 11 n-24\right ) s_2}{3 n ( n+1)}+512 s_3 \nonumber\\ & + & 64 s_{-2,1}-768 s_{2,1}\biggr ) s_{-2 } + \frac{p_6(n ) } { 108 n^3 ( n+1)^5}-192 s_{-5}+s_{-4 } \left(-\frac{8 \left(35 n^2 + 35 n-66\right)}{3 n ( n+1)}-352 s_1\right ) \nonumber\\ & + & (-1)^n \left(-\frac{16 \left(91 n^2 + 80 n-29\right)}{9 ( n+1)^5}+\frac{96 s_{-2}}{(n+1)^3}+\frac{16 ( 29 n+23 ) s_1}{3 ( n+1)^4}\right ) \nonumber\\ & + & s_{-3 } \biggl(128 s_1 ^ 2-\frac{16 \left(11 n^2 + 11 n+24\right ) s_1}{3 n ( n+1)}+\frac{8 \left(134 n^4 + 268 n^3 + 203 n^2 + 69 n+27\right)}{9 n^2 ( n+1)^2 } \nonumber\\ & - & 160 s_{-2}-768 s_2\biggr)+\frac{4 \left(389 n^4 + 778 n^3 + 398 n^2 + 9 n-81\right ) s_3}{9 n^2 ( n+1)^2}-\frac{8 \left(55 n^2 + 55 n-24\right ) s_4}{3 n ( n+1 ) } \nonumber\\ & - & 160 s_5 - 224 s_{-4,1}+\frac{256 s_{-3,1}}{n ( n+1)}+384 s_{-3,2}+s_1 ^ 2 ( -64 s_3 - 256 s_{-2,1 } ) \nonumber\\ & - & \frac{16 \left(134 n^4 + 268 n^3 + 137 n^2 + 3 n+27\right ) s_{-2,1}}{9 n^2 ( n+1)^2}+s_2 \left(768s_{-2,1}-\frac{4172}{27}\right ) \nonumber\\ & + & \frac{16 \left(11 n^2 + 11 n-48\right ) s_{2,-2}}{3 n ( n+1)}-544 s_{3,-2}+\frac{32 \left(11 n^2 + 11 n-12\right ) s_{3,1}}{3 n ( n+1)}+192 s_{3,2 } \nonumber\end{aligned}\ ] ] \nonumber\\ & + & c_f^2 n_f \biggl\ { -\frac{32}{3 } s_2^ 2-\frac{4 \left(15 n^4 + 30 n^3 + 79 n^2 + 16 n-24\right ) s_2}{9 n^2 ( n+1)^2}+\frac { p_7(n)}{9 n^4 ( n+1)^4}-\frac{128}{3 } s_{-4 } \nonumber\\ & + & s_{-3 } \left(\frac{32 \left(10 n^2 + 10 n+3\right)}{9 n ( n+1)}-\frac{64}{3 } s_1\right)+(-1)^n \left(\frac{64 s_1}{3 ( n+1)^3}-\frac{128 ( 4 n+1)}{9 ( n+1)^4}\right ) \nonumber\\ & + & s_{-2 } \left(-\frac{32 \left(16 n^2 + 10 n-3\right)}{9 n^2 ( n+1)^2}+\frac{640}{9 } s_1-\frac{128}{3 }s_2\right)+\frac{16 \left(29 n^2 + 29 n+12\right ) s_3}{9 n ( n+1)}-\frac{128}{3 } s_4 \nonumber\\ & + & s_1 \left(-\frac{2 \left(165 n^5 + 495 n^4 + 495 n^3 + 517 n^2 + 336 n+80\right)}{9 n^2 ( n+1)^3}+\frac{320}{9 } s_2-\frac{128}{3 } s_3-\frac{128}{3 }s_{-2,1}\right ) \nonumber\\ & - & \frac{64 \left(10 n^2 + 10 n-3\right ) s_{-2,1}}{9 n ( n+1)}+\frac{64}{3 } s_{2,-2}+\frac{64}{3 } s_{3,1}+\frac{256}{3 } s_{-2,1,1 } \biggr\ } \nonumber\\ & + & c_f^2 n_f \zeta_3 \biggl[32 s_1-\frac{8 \left(3 n^2 + 3 n+2\right)}{n ( n+1)}\biggr ] \nonumber\\ & + & c_f n_f^2 \biggl [ \frac{51 n^6 +153 n^5 + 57 n^4 + 35 n^3 + 96 n^2 + 16 n-24}{27 n^3 ( n+1)^3}-\frac{16}{27 } s_1-\frac{80}{27 } s_2+\frac{16}{9 } s_3 \biggr ] \nonumber\\ & + & c_a c_f n_f \biggl [ -\frac{2 \left(270 n^7 + 1080 n^6 + 383 n^5 - 979 n^4 - 571 n^3 + 507 n^2 + 106 n-132\right)}{27 n^3 ( n+1)^4 } 
\nonumber\\ & + & \frac{64}{3 } s_{-4}+s_{-3 } \left(\frac{32}{3 } s_1-\frac{16 \left(10 n^2 + 10 n+3\right)}{9 n ( n+1)}\right)+(-1)^n \left(\frac{64 ( 4 n+1)}{9 ( n+1)^4}-\frac{32 s_1}{3 ( n+1)^3}\right ) \nonumber\\ & + & \frac{1336}{27 } s_2+s_{-2 } \left(\frac{16 \left(16 n^2 + 10 n-3\right)}{9 n^2 ( n+1)^2}-\frac{320}{9 } s_1+\frac{64}{3 } s_2\right)-\frac{8 \left(14 n^2 + 14 n+3\right ) s_3}{3 n ( n+1)}+\frac{80}{3 } s_4 \nonumber\\ & + & \frac{32 \left(10 n^2 + 10 n-3\right ) s_{-2,1}}{9 n ( n+1)}+s_1 \biggl(-\frac{4 \left(209 n^6 + 627 n^5 + 627 n^4 + 137 n^3 - 108 n^2 - 108 n-54\right)}{27 n^3 ( n+1)^3 } \nonumber\\ & + & 16 s_3+\frac{64}{3 } s_{-2,1}\biggr)-\frac{32}{3 } s_{2,-2}-\frac{64}{3 } s{_3,1}-\frac{128}{3 } s_{-2,1,1}\biggr ] \nonumber\\ & + & c_a c_f n_f \zeta_3 \biggl[\frac{8 \left(3 n^2 + 3 n+2\right)}{n ( n+1)}-32 s_1\biggr ] \nonumber\end{aligned}\ ] ] for brevity we abbreviated . here , are color factors , denotes the number of quark flavors and is the number of colors , with for quantum chromodynamics .we have accounted for the color factor explicitly , which is the same for all groups . denotes a structure constant and the einstein convention is applied calculating .99 see e.g. : t. van ritbergen , j. a. m. vermaseren and s. a. larin , _ the four - loop -function in quantum chromodynamics _ , phys .b * 400 * ( 1997 ) 379 - 384 , [ arxiv : hep - ph/9701390 ] ; + m. czakon , _ the four - loop qcd -function and anomalous dimensions _ nucl . phys .b * 710 * ( 2005 ) 485498 , [ arxiv : hep - ph/0411261 ] ; + k. g. chetyrkin , _ quark mass anomalous dimension to _ , phys .b * 404 * ( 1997 ) 161165 , [ arxiv : hep - ph/9703278 ] ; + j. a. m. vermaseren , s. a. larin and t. van ritbergen , _ the 4-loop quark mass anomalous dimension and the invariant quark mass _ , phys .b * 405 * ( 1997 ) 327333 , [ arxiv : hep - ph/9703284 ] .e. g. floratos , d. a. ross and c. t. sachrajda , _ higher order effects in asymptotically free gauge theories : the anomalous dimensions of wilson operators _ , nucl .b * 129 * ( 1977 ) 6699 [ erratum - ibid .b * 139 * ( 1978 ) 545546 ] ; + e. g. floratos , d. a. ross and c. t. sachrajda , _ higher order effects in asymptotically free gauge theories : 2 .flavor singlet wilson operators and coefficient functions _ , nucl .b * 152 * ( 1979 ) 493520 ; + a. gonzalez - arroyo , c. lopez and f. j. yndurain , _ second order contributions to the structure functions in deep inelastic scattering .1 . theoretical calculations _ , nucl . phys .b * 153 * ( 1979 ) 161186 ; + a. gonzalez - arroyo and c. lopez , _ second order contributions to the structure functions in deep inelastic scattering .3 . the singlet case _ , nucl . phys .b * 166 * ( 1980 ) 429459 ; + g. curci , w. furmanski and r. petronzio , _ evolution of parton densities beyond leading order : the non - singlet case _ , nucl .b * 175 * ( 1980 ) 2792 ; + w. furmanski and r. petronzio , _ singlet parton densities beyond leading order _ , phys .b * 97 * ( 1980 ) 437442 ; + e. g. floratos , c. kounnas and r. lacaze , _ higher order qcd effects in inclusive annihilation and deep inelastic scattering _ , nucl .b * 192 * ( 1981 ) 417462 ; + r. hamberg and w. l. van neerven , _ the correct renormalization of the gluon operator in a covariant gauge _ , nucl .b * 379 * ( 1992 ) 143171 ; + e. b. zijlstra and w. l. van neerven , _ qcd corrections to the deep inelastic proton structure functions and _ , nucl .b * 383 * ( 1992 ) 525574 .s. moch , j. a. m. vermaseren and a. 
vogt , _ the three - loop splitting functions in qcd : the non - singlet case _ , nucl .b * 688 * ( 2004 ) 101134 , [ arxiv : hep - ph/0403192 ] .a. vogt , s. moch and j. a. m. vermaseren , _ the three - loop splitting functions in qcd : the singlet case _ ,b * 691 * ( 2004 ) 129181 , [ arxiv : hep - ph/0404111 ] . j. a. m. vermaseren , a. vogt and s. moch , _ the third - order qcd corrections to deep - inelastic scattering by photon exchange _ , nucl .b * 724 * ( 2005 ) 3182 , [ arxiv : hep - ph/0504242 ] .m. buza , y. matiounine , j. smith , r. migneron and w. l. van neerven , _heavy quark coefficient functions at asymptotic values _ , nucl .b * 472 * ( 1996 ) 611658 , [ arxiv : hep - ph/9601302 ] ; + m. buza , y. matiounine , j. smith and w. l. van neerven , _ charm electroproduction viewed in the variable - flavour number scheme versus fixed - order perturbation theory _ ,j. c * 1 * ( 1998 ) 301320 , [ arxiv : hep - ph/9612398 ] ; + j. blmlein , a. de freitas , w. l. van neerven and s. klein , _ the longitudinal heavy quark structure function in the region at _ , nucl .b * 755 * ( 2006 ) 272285 , [ arxiv : hep - ph/0608024 ] ; + i. bierenbaum , j. blmlein and s. klein , _ the gluonic operator matrix elements at for dis heavy flavor production _ ,* b * ( 2009 ) , in print , arxiv:0901.0669 [ hep - ph ] ; + i. bierenbaum , j. blmlein , s. klein and c. schneider , two loop massive operator matrix elements for unpolarized heavy flavor production to , nucl . phys .b * 803 * ( 2008 ) 141 , [ arxiv:0803.0273 [ hep - ph ] ] ; + i. bierenbaum , j. blmlein and s. klein , _ two - loop massive operator matrix elements and unpolarized heavy flavor production at asymptotic values _ , nucl . phys .b * 780 * ( 2007 ) 4075 , [ arxiv : hep - ph/0703285 ] .s. a. larin , p. nogueira , t. van ritbergen and j. a. m. vermaseren , _ the 3-loop qcd calculation of the moments of deep inelastic structure functions _ ,b * 492 * ( 1997 ) 338378 , [ arxiv : hep - ph/9605317 ] .s. a. larin , t. van ritbergen and j. a. m. vermaseren , _ the next next - to - leading qcd approximation for non - singlet moments of deep - inelastic structure functions _ ,b * 427 * ( 1994 ) 4152 ; + a. retey and j. a. m. vermaseren , _ some higher moments of deep inelastic structure functions at next - to - next - to leading order of perturbative qcd _ , nucl .b * 604 * ( 2001 ) 281311 , [ arxiv : hep - ph/0007294 ] . j. blmlein and j. a. m. vermaseren , _ the 16th moment of the non - singlet structure functions and to _ , phys .b * 606 * ( 2005 ) 130138 , [ arxiv : hep - ph/0411111 ] .i. bierenbaum , j. blmlein and s. klein , _ heavy flavor operator matrix elements at _, arxiv:0812.2427 [ hep - ph ] ; _ first heavy flavor contributions to deeply inelastic scattering _ , nucl . phys .* 183 * ( 2008 ) 162167 , [ arxiv:0806.4613 [ hep - ph ] ] .l. euler , _ meditationes circa singulare serium genus _ , novi comm .petropol . * 20 * ( 1775 ) 140186 , reprinted in opera omnia ser i vol .15 , ( b.g .teubner , berlin , 1927 ) , 217267 ; + p.h. fuss ( ed . ) , _ correspondance mathmatique et physique de quelques clbres gomtres _ ( tome 1 ) , st .petersburg , 1843 ; + n. nielsen , _ die gammafunktion _( chelsea , new york , 1965 ) , reprint of _ handbuch der theorie der gammafunktion _ ( teubner , leipzig , 1906 ) ; + d. zagier , _ values of zeta functions and their applications _ , in : first european congress of mathematics , vol .ii , ( paris , 1992 ) , progr ., * 120 * , ( birkhuser , basel boston , pp .497512 ; + j. m. borwein , d. m. 
bradley , d. j. broadhurst and p. lisonek , _ special values of multiple polylogarithms _ , trans . am . math .* 353 * ( 2001 ) 907941 , ; + j. blmlein , d. broadhurst , j. vermaserern , _ the multiple zeta value data mine _ ,desy 09 - 003 .y. andre , _ ambiguity theory , old and new _ , to appear in : bollettino u.m.i .( 8) * i * ( 2008 ) and talk at the motives , quantum field theory , and pseudodifferential operators boston university , june 2 - 13 , 2008 ; + d. j. broadhurst , _ massive 3-loop feynman diagrams reducible to sc * primitives of algebras of the sixth root of unity _ , eur .j. c * 8 * ( 1999 ) 311333 , [ arxiv : hep - th/9803091 ] . j. blmlein and s. kurth , _ harmonic sums and mellin transforms up to two - loop order _ , phys .d * 60 * ( 1999 ) 014018 [ arxiv : hep - ph/9810241 ] .j. a. m. vermaseren , _ harmonic sums , mellin transforms and integrals _ , int .j. mod .a * 14 * ( 1999 ) 20372076 , [ arxiv : hep - ph/9806280 ] . j. blmlein and v. ravindran , _ mellin moments of the next - to - next - to leading order coefficient functions for the drell - yan process and hadronic higgs - boson production _ , nucl .b * 716 * ( 2005 ) 128172 , [ arxiv : hep - ph/0501178 ] ; + _ timelike wilson coefficients for parton - fragmentation functions in mellin space _ , nucl . phys .b * 749 * ( 2006 ) 124 , [ arxiv : hep - ph/0604019 ] ; + j. blmlein and s. moch , in preparation ; + j. blmlein and s. klein , _ structural relations between harmonic sums up to w=6 _ , arxiv:0706.2426 [ hep - ph ] . b. salvy and p. zimmermann , _ gfun : a package for the manipulation of generating and holonomic functions in one variable _ , acm trans .software , * 20 * ( 1994 ) 163177 . c. schneider ._ symbolic summation assists combinatorics _ , sm .* 56 * ( 2007 ) 136 , article b56b .m. karr , _ summation in finite terms _ , j. acm , * 28 * ( 1981 ) 305350 .c. schneider , _ parameterized telescoping proves algebraic independence of sums _ , ann . comb . , ( 2008 ) , in print . c. schneider ,_ a refined difference field theory for symbolic summation _ , j. symbolic comput . , *43*(9 ) ( 2008 ) 611644 .j. blmlein , _ algebraic relations between harmonic sums and associated quantities _ , comput .commun .* 159 * ( 2004 ) 1954 , [ arxiv : hep - ph/0311046 ] .nrlund , _ differenzenrechnung _ ,( springer , berlin , 1924 ) .s.a . abramov and m. petkovek , _dalembertian solutions of linear differential and difference equations _ , in : j. von zur gathen , editor ,issac94 , 169174 , acm press , 1994 . c. schneider , _ symbolic summation in difference fields _ , phd thesis , risc - linz , j. kepler university , linz , may 2001 .w. l. van neerven and a. vogt , _ nnlo evolution of deep - inelastic structure functions : the non - singlet case _ , nucl .b * 568 * ( 2000 ) 263286 , [ arxiv : hep - ph/9907472 ] .r. v. harlander and w. b. kilgore , _ next - to - next - to - leading order higgs production at hadron colliders _ , phys .* 88 * ( 2002 ) 201801 [ arxiv : hep - ph/0201206 ] . f. j. yndurain , _ reconstruction of the deep - inelastic structure functions from their moments _ , phys .b * 74 * ( 1978 ) 6872 ; + g. parisi and n. sourlas , _ a simple parametrization of the dependence of the quark distributions in qcd _ , nucl .b * 151 * ( 1979 ) 421428 ; + w. furmanski and r. petronzio , _ a method of analyzing the scaling violation of inclusive spectra in hard processes _ , nucl .b * 195 * ( 1982 ) 237261 ; + j. chyla and j. rames , _ on methods of analyzing scaling violation in deep - inelastic scattering _ , z. 
phys .c * 31 * ( 1986 ) 151161 ; + j. blmlein , m. klein , g. ingelman and r. rckl , _ testing qcd scaling violations in the hera energy range _ , z. phys .c * 45 * ( 1990 ) 501513 .d. j. gross , _ how to test scaling in asymptotically free theories _ , phys .* 32 * ( 1974 ) 10711073 .j. blmlein , _ structural relations of harmonic sums and mellin transforms up to weight w=5 _ , arxiv:0901.3106 [ hep - ph ] .j. blmlein , _ structural relations of harmonic sums and mellin transforms at weight w=6 _ , arxiv:0901.0837 [ math - ph ] . c. mallinger ,_ algorithmic manipulations and transformations of univariate holonomic functions and sequences _ , master thesis , j. kepler university , linz , ( 1996 ) .geddes , s. r. czapor and g. labahn , _ algorithms for computer algebra _ , ( kluwer , dordrecht , 1992 ) .j. von zur gathen and j. gerhard , _ modern computer algebra _ , ( cambridge university press , cambridge , 1999 ) .m. kauers , _ fast solvers for dense linear systems _ , nucl .* b * ( proc . suppl . ) * 183 * ( 2008 ) 245250 .a. bostan and m. kauers , _ the full counting function for gessel walks is algebraic _ , inria - rocquencourt report , 2008 , in preparation .m. bronstein and m. petkovek , _ an introduction to pseudo - linear algebra _ , theoretical computer science , * 157 * ( 1 ) ( 1996 ) 333 .b. beckermann and g. labahn , _ a uniform approach for hermite pad and simultaneous pad approximants and their matrix generalizations _, numerical algorithms , * 3 * ( 1992 ) 4554 .b. beckermann and g. labahn , _ fraction - free computation of matrix rational interpolants and matrix gcds _ , siam journal of matrix analysis and applications , * 22 * ( 1 ) ( 2000 ) 114144 .hendriks and m.f ._ solving difference equations in finite terms _ , j. symbolic comput . ,* 27*(3 ) ( 1999 ) 239259 . s. moch , p. uwer and s. weinzierl , _ nested sums , expansion of transcendental functions and multi - scale multi - loop integrals _ , j. math .* 43 * ( 2002 ) 33633386 ,[ arxiv : hep - ph/0110083 ] . m. karr , _ theory of summation in finite terms _j. symbolic comput . , * 1 * ( 1985 ) 303315 .m. petkovek , _ hypergeometric solutions of linear recurrences with polynomial coefficients _ , j. symbolic comput . , *14*(2 - 3 ) ( 1992 ) 243264 .hoeij , _ finite singularities and hypergeometric solutions of linear recurrence equations _ , j. pure appl. algebra , * 139*(1 - 3 ) ( 1999 ) 109131 . c. schneider , _ solving parameterized linear difference equations in terms of indefinite nested sums and products _ , j. differ .equations appl ., * 11*(9 ) ( 2005 ) 799821 .abramov , _ on the summation of rational functions _ , zh . vychisl . mat .* 11 * ( 1971 ) 10711074 . c. schneider , _ simplifying sums in _ , j. algebra appl ., * 6*(3 ) ( 2007 ) 415441 . c. schneider ,_ symbolic summation approach to find optimal nested sum representations _, sfb - report 2007 - 26 , sfb f013 , j. kepler university linz , 2007 , to appear in clay mathematics proceedings ( 2009 ) . j. ablinger , _ a computer algebra toolbox for harmonic sums related to particle physics _ , diploma thesis , j. kepler university linz , 2009 .e. remiddi and j. a. m. vermaseren , int .j. mod .phys . a * 15 * ( 2000 ) 725[ arxiv : hep - ph/9905237 ] .hoffman , _ multiple harmonic series _ , pacific j. math .* 152 * ( 1992 ) 275290 ; _ the algebra of multiple harmonic series _ , j. 
algebra , * 194 * ( 1997 ) 477495 ; _ quasi - shuffle products _ , * 11 * ( 2000 ) 4968 .gorishnii , s.a .larin , l.r .surguladzen and f.v .tkachov , _ mincer : program for multiloop calculations in quantum field theory for the schoonschip system _ , comput .commun .* 55 * ( 1989 ) 381408 ; + s. a. larin , f. v. tkachov and j. a. m. vermaseren , _ the form version of mincer _ , nikhef - h-91 - 18 ; + m. steinhauser , _ matad : a program package for the computation of massive tadpoles _ , comput .commun .* 134 * ( 2001 ) 335364 , [ arxiv : hep - ph/0009029 ] . to calculate higher than 1st moments , one has to use a further -code establishing the link to operator matrix elements , cf . and .i. bierenbaum , j. blmlein , and s. klein , in preparation .j. vermaseren , private communication .
single scale quantities , as anomalous dimensions and hard scattering cross sections , in renormalizable quantum field theories are found to obey difference equations of finite order in mellin space . it is often easier to calculate fixed moments for these quantities compared to a direct attempt to derive them in terms of harmonic sums and their generalizations involving the mellin parameter . starting from a sufficiently large number of given moments , we establish linear recurrence relations of lowest possible order with polynomial coefficients of usually high degree . then these recurrence equations are solved in terms of dalembertian solutions where the involved nested sums are represented in optimal nested depth . given this representation , it is then an easy task to express the result in terms of harmonic sums . in this process we compactify the result such that no algebraic relations occur among the sums involved . we demonstrate the method for the qcd unpolarized anomalous dimensions and massless wilson coefficients to 3loop order treating the contributions for individual color coefficients . for the most complicated subproblem 5114 moments were needed in order to produce a recurrence of order 35 whose coefficients have degrees up to 938 . about four months of cpu time were needed to establish and solve the recurrences for the anomalous dimensions and wilson coefficients on a 2 ghz machine requiring less than 10 gb of memory . no algorithm is known yet to provide such a high number of moments for 3loop quantities . yet the method presented shows that it is possible to establish and solve recurrences of rather large order and degree , occurring in physics problems , uniquely , fast and reliably with computer algebra . desy 09 - 002 + sfb / cpp-09 - 22 + february 2009 johannes blmlein , manuel kauers , sebastian klein , and carsten schneider + + +
the dynamics of the genetic composition of populations evolving under recombination has been a long - standing subject of research .the traditional models assume random mating , non - overlapping generations ( meaning discrete time ) , and populations so large that stochastic fluctuations may be neglected and a law of large numbers ( or infinite - population limit ) applies .even this highly idealised setting leads to models that are notoriously difficult to treat and solve , namely , to large systems of coupled , nonlinear difference equations . here , the nonlinearity is due to the random mating of the partner individuals involved in sexual reproduction . elucidating the underlying structure and finding solutions to these equationshas been a challenge to theoretical population geneticists for nearly a century now .the first studies go back to jennings in 1917 and robbins in 1918 . building on , robbins solved the dynamics for two diallelic loci ( to be called sites from now on ) and gave an explicit formula for the ( haplo)type frequencies as functions of time .geiringer investigated the general recombination model for an arbitrary number of loci and for arbitrary ` recombination distributions ' ( meaning collections of probabilities for the various partitionings of the sites that may occur during recombination ) in 1944 .she was the first to state the general form of the solution of the recombination equation ( as a convex combination of all possible products of certain marginal frequencies derived from the initial population ) and developed a method for the recursive evaluation of the corresponding coefficients .this simplifies the calculation of the type frequencies at any time compared to the direct evaluation through successive iteration of the dynamical system .even though she worked out the method for the general case in principle , its evaluation becomes quite involved for more than three sites .her work was followed by bennett in 1954 .he introduced a multilinear transformation of the type frequencies to certain functions that he named _principal components_. 
they correspond to linear combinations of certain correlation functions that transform the dynamical system ( exactly ) into a linear one .the new variables decay independently and geometrically for all times , whence they decouple and diagonalise the dynamics .they therefore provide an elegant solution in principle , but the price to be paid is that the coefficients of the transformation must be constructed via recursions that involve the parameters of the recombination model .bennett worked this method out for up to six sites , but did not give an explicit method for an arbitrary number of sites .the approach was later completed within the systematic framework of genetic algebras , where it became known as _haldane linearisation _, compare .but , in fact , bennett s program may be completed outside this abstract framework , as was shown by dawson , who derived a general and explicit recursion for the coefficients of the principal components .however , the proofs are somewhat technical and do not reveal the underlying mathematical structure .it is the aim of this paper to provide a more systematic , but still elementary , approach that exploits the inherent ( multi)linear and combinatorial structure of the problem at least for one particular , but biologically relevant , special case , which will now be described .our special case is obtained by the restriction to single crossovers , which leads to what we call _ single - crossover recombination _ ( scr ) .this is the extreme case of the biological phenomenon of _ interference _ , and describes the situation where a crossover event completely inhibits any other crossover event in the same generation , at least within the genomic region considered .surprisingly , the corresponding dynamics in _ continuous _ time can be solved in closed form .again , a crucial ingredient is a transformation to certain correlation functions ( or linkage disequilibria ) that linearise and diagonalise the system .luckily , in this case , the corresponding coefficients are independent of the recombination parameters , and the transformation is available explicitly . motivated by this result , we now investigate the analogous single - crossover dynamics in discrete time .the paper is organised as follows .we first describe the discrete - time model and the general framework ( section [ sec : prelim ] ) and then recapitulate the essentials of the _ continuous - time _ model and its solution ( section [ scrconti ] ) .section [ scrdiscrete ] returns to discrete time .we first analyse explicitly the cases of two , three , and four sites . 
for two and three sites ,the dynamics is analogous to that in continuous time ( and , in particular , available in closed form ) , but differs from then on .this is because a certain linearity present in continuous time is now lost .the transformations used in continuous time are therefore not sufficient to both linearise _ and _ diagonalise the discrete - time dynamics .they do , however , lead to a _linearisation _ ; this is worked out in sections [ sec : subsys ] and [ sec : comm ] .the resulting linear system has a triangular structure that can be diagonalised in a second step in a recursive way ( section [ sec : diagonal ] ) .we summarise and discuss our results in section [ sec : discuss ] .an explicit example is worked out in the appendix .let us briefly recall the recombination model described in and the special notation introduced there , as the remainder of this paper critically depends on it .a chromosome ( of length , say ) is represented as a linear arrangement of the _ sites _ of the set .sites are discrete positions on a chromosome that may be interpreted as gene or nucleotide positions . a set collects the possible elements ( such as alleles or nucleotides ) at site . for convenience ,we restrict ourselves to _ finite _ sets in this paper , though much of the theory can be extended to the case that each is a locally compact space , which can be of importance for applications in quantitative genetics .type _ is now defined as a sequence , where denotes the ( finite ) _ type space_. recombination events take place at the so - called _ links _ between neighbouring sites , collected into the set , where link is the link between sites and .since we only consider single crossovers here , each individual event yields an exchange of the sites either before or after the respective link between the two types involved .a recombination event at link that involves and thus results in the types and , with both pairs considered as _unordered_. although one is ultimately interested in the stochastic process defined by recombination acting on populations of finite size , compare and references therein , we restrict ourselves to the deterministic limit of infinite population size here , also known as_ infinite population limit _ ( ipl ) .consequently , we are not looking at the individual dynamics , but at the induced dynamics on the probability distribution on the type space .let denote the convex space of all possible probability distributions on .as is finite , a probability distribution can be written as a vector , where denotes the relative frequency of type in the population .let us look at the time evolution of the relative frequencies of types when starting from a known initial distribution of the population at time .in discrete time , it is given by the following collection of _ recombination equations _ for all : where the coefficients , , are the probabilities for a crossover at link .consequently , we must have and , where is assumed from now on without loss of generality ( when , the set can be considered as a space for an effective site that comprises and ) . 
when the do not sum to ,the remainder is the probability that no crossover occurs , which is taken care of by the last term in the equation .moreover , ( ) denotes the largest integer below ( the smallest above ) and the star at site stands for , and thus indicates marginalisation over site .an important step to solve the large nonlinear coupled system of equations lies in its reformulation in a more compact way with the help of certain recombination operators . to construct them , we need the canonical projection operator , defined by as usual .likewise , for any index set , the projector is defined as .we will frequently use these can be understood as _ cut - and - forget _ operators since they ` cut out ' the leading and the trailing segment of a type , respectively , and ` forget ' about the rest .the projectors induce linear mappings from to by , where denotes the preimage under and indicates composition of mappings .the operation ( not to be confused with a multiplication sign ) is known as the pullback of with respect to .consequently , is simply the marginal distribution of with respect to the sites of .now consider recombination at link , performed on the entire population .since the resulting population consists of randomly chosen leading segments relinked with randomly chosen trailing segments , it may be described through the ( elementary ) recombination operator ( or _ recombinator _ for short ) , defined by with where denotes the product measure and reflects the independent combination of both marginals and .note that the recombinators are structural operators that do not depend on the recombination probabilities .before we rewrite the recombination equations in terms of these recombinators , let us recall some of their elementary properties , see for proofs .first of all , the elementary recombinators are idempotents and commute with one another on .this permits the consistent definition of _ composite recombinators _ for arbitrary subsets .in particular , one has and .[ recoprop ] on , the elementary recombinators are commuting idempotents . for ,they satisfy likewise , for , furthermore , the composite recombinators satisfy for arbitrary . these properties can be understood intuitively as well : says that recombination at or after link does not affect the marginal frequencies at sites before , whereas the marginal frequencies at the sites after change into the product measure ( and vice versa in ) .furthermore , repeated recombination at link does not change the situation any further ( recombinators are idempotents ) and the formation of the product measure with respect to links does not depend on the order in which the links are affected . as we shall see below , these properties of the recombinators are crucial for finding a solution of the scr dynamics , both in continuous and in discrete time .let us briefly review the scr dynamics in _ continuous _ time , as its structure will be needed below .making use of the recombinators introduced above , the dynamics ( in the ipl ) is described by a system of differential equations for the time evolution of the probability distribution ( or measure ) , starting from an initial condition at .it reads where is now the _ rate _ for a crossover at link . 
though describes a coupled system of nonlinear differential equations , the closed solution for its cauchy ( or initial value ) problem is available : [ thm : reco - cont ] the solution of the recombination equation with initial value can be given in closed form as with the coefficient functions these are non - negative functions , which satisfy for all . the coefficient functions can be interpreted probabilistically .given an individual sequence in the population , is the probability that the set of links that have seen at least one crossover event until time is precisely the set .note that the product structure of the implies independence of links , a decisive feature of the single - crossover dynamics in continuous time , as we shall see later on . by, is always a convex combination of the probability measures with .consequently , given an initial condition , the entire dynamics takes place on the closed simplex ( within ) that is given by , where denotes the convex hull of .it is surprising that a closed solution for the dynamics can be given explicitly , and this suggests the existence of an underlying linear structure , which is indeed the case and well known from similar equations , compare . in the context of the formulation with recombinators, it can be stated as follows , compare for details .[ thm : linear ] let be a family of non - negative functions with , valid for any partition of the set and all , where .assume further that these functions satisfy for any and . if and , one has the identity which is then satisfied for all . here , the upper index specifies the respective set of links .so far , theorem [ thm : linear ] depends crucially on the product structure of the functions , but we will show later how this assumption can be relaxed . in any case , the coefficient functions of satisfy the conditions of theorem [ thm : linear ] .the result then means that the recombinators act linearly along solutions of the recombination equation . denoting as the flow of eq ., theorem [ thm : linear ] thus has the following consequence .[ coro1 ] on , the forward flow of commutes with all recombinators , which means that holds for all and all . the conventional approach to solve the recombination dynamics consists in transforming the type frequencies to certain functions which diagonalise the dynamics , see and references therein for more .from now on , we will call these functions _ principal components _ after bennett . for the single - crossover dynamics in continuous time, they have a particularly simple structure : they are given by certain correlation functions , or _ linkage disequilibria _ ( lde ) , which play an important role in biological applications .they have a counterpart at the level of operators on .namely , let us define _ lde operators _ on as linear combinations of recombinators via so that the inverse relation is given by due to the combinatorial mbius inversion formula , compare .let us note for further use that , by eq . in proposition [ recoprop] , .note also that , for a probability measure on , is a signed measure on ; in particular , it need not be positive .the ldes are given by certain _ components _ of the see for more . 
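as a side remark, the transformation between the recombinators and the lde operators just introduced is ordinary inclusion-exclusion over the boolean lattice of link subsets. the following sketch (python; a three-element link set and random numbers standing in for the operators, both purely illustrative) checks the transformation and its inversion numerically.

```python
from itertools import combinations
import random

# a minimal sketch of the möbius (inclusion-exclusion) pair used above: for any
# family G -> R(G) indexed by subsets of the link set, define
#   T(G) = sum over H ⊇ G of (-1)^{|H \ G|} R(H),
# and recover R(G) = sum over H ⊇ G of T(H).  the three "links" and the random
# values standing in for the recombinators are purely illustrative.

links = (0, 1, 2)

def supersets(G):
    rest = [a for a in links if a not in G]
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            yield tuple(sorted(set(G) | set(extra)))

all_subsets = list(supersets(()))
R = {G: random.random() for G in all_subsets}
T = {G: sum((-1) ** (len(H) - len(G)) * R[H] for H in supersets(G))
     for G in all_subsets}

for G in all_subsets:
    assert abs(sum(T[H] for H in supersets(G)) - R[G]) < 1e-12
print("möbius inversion verified on all", len(all_subsets), "link subsets")
```

in the operator setting, the same index calculus applies with the random numbers replaced by the composite recombinators and the lde operators, respectively.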
in the continuous - time single - crossover setting , it was shown in that , if is the solution , the satisfy which is a _ decoupled _ system of homogeneous linear differential equations , with the standard exponential solution .that is , the lde operators both linearise and diagonalise the system , and the ldes are thus , at the same time , principal components .a straightforward calculation now reveals that the solution can be rewritten as where the new coefficient functions are given by at this point , it is important to notice the rather simple structure of the lde operators , which do not depend on the crossover rates .moreover , the transformation between recombinators and lde operators is directly given by the mbius formula , see eqs . and .this is a significant simplification in comparison with previous results , compare , where the coefficients of the transformation generally depend on the crossover rates and must be determined recursively .below , we shall see that the scr dynamics in _ continuous _ time is indeed a special case , and that the above results can not be transferred directly to the corresponding dynamics in _ discrete _ time . nevertheless , part of the continuous - time structure prevails and offers a useful entry point for the solution of the discrete - time counterpart .employing recombinators , the scr equations in discrete time with a given initial distribution can be compactly rewritten as as indicated , the nonlinear operator of the right - hand side of is denoted by from now on .we aim at a closed solution of , namely for with . based on the result for the continuous - time model , the solution is expected to be of the form with non - negative , , , describing the ( unknown ) coefficient functions arising from the dynamics .this representation of the solution was first stated by geiringer .in particular , also the discrete - time dynamics takes place on the simplex .we are particularly interested in whether a discrete - time equivalent to corollary [ coro1 ] exists , that is , whether all recombinators commute with .this is of importance since it would allow for a diagonalisation of the dynamics via the lde operators . to see this ,assume for a moment that for all , and thus for all .noting that , when , eq . from proposition [ recoprop ] implies that , we see that the assumption above would lead to & \ , = \ , & \sum_{h\supseteq g}(-1)^{\vert h - g\vert } r^{}_h + \sum_{h\supseteq g } ( -1)^{\vert h - g\vert } \sum_{\alpha \in l}\rho^{}_{\alpha } ( r^{}_{\alpha } - { \mathbbm{1 } } ) r^{}_h \\[1 mm ] & \ , = \ , & t^{}_g + \sum_{h\supseteq g } ( -1)^{\vert h - g \vert } \sum_{\alpha \in l\setminus g}\rho^{}_{\alpha } ( r^{}_{\alpha } - { \mathbbm{1 } } ) r^{}_h \\ & \ , = \ , & \bigl(1- \sum_{\alpha \in l\setminus g}\rho^{}_{\alpha}\bigr ) t^{}_g + \sum_{\alpha \in l\setminus g}\rho^{}_{\alpha } \sum_{\substack{h \supseteq g\\\alpha\notin h } } \bigl ( ( -1)^{\vert h - g\vert } r^{}_{h \cup \{\alpha\ } } + ( -1)^{\vert h \cup \{\alpha\ } - g \vert } r^{}_{h\cup \{\alpha\ } } \bigr ) \\ & \ , = \ , & \bigl(1- \sum_{\alpha \in l\setminus g } \rho^{}_{\alpha}\bigr ) t^{}_g \ , , \end{aligned}\ ] ] so that , indeed , all would decay geometrically .this wishful calculation is badly smashed by the nonlinear nature of the recombinators , and the remainder of this paper is concerned with true identities that repair the damage . 
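to make the obstruction concrete, here is a small numerical sketch (python; binary site spaces and hypothetical crossover probabilities, both purely illustrative). it implements the elementary recombinators and one generation of the discrete-time dynamics, and measures how far the recombinators are from commuting with it.

```python
import numpy as np

# a population is an n-dimensional array p with p[x_0,...,x_{n-1}] the relative
# frequency of type (x_0,...,x_{n-1}); cut positions 1,...,n-1 play the role of
# the links.  binary site spaces and the crossover probabilities are illustrative.

def recombinator(p, cut):
    """R_cut: product of the marginals on the leading sites {0,..,cut-1}
    and the trailing sites {cut,..,n-1}."""
    n = p.ndim
    lead = p.sum(axis=tuple(range(cut, n)))
    trail = p.sum(axis=tuple(range(0, cut)))
    return np.multiply.outer(lead, trail)

def Phi(p, rho):
    """one discrete-time generation of single-crossover recombination."""
    new = (1.0 - sum(rho.values())) * p
    for cut, r in rho.items():
        new = new + r * recombinator(p, cut)
    return new

rng = np.random.default_rng(0)
for n in (3, 4):
    p = rng.random((2,) * n); p /= p.sum()
    rho = {cut: 0.1 for cut in range(1, n)}
    dev = max(np.abs(recombinator(Phi(p, rho), c)
                     - Phi(recombinator(p, c), rho)).max() for c in rho)
    print(f"{n} sites: max deviation of R∘Phi from Phi∘R = {dev:.2e}")
```

as discussed in the next section, the deviation vanishes for two and three sites but not for four, where an interior link is the culprit.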
to get an intuition for the dynamics in discrete time ,let us first take a closer look at the discrete - time model with two , three , and four sites . for two sites ,one simply has and , so that only one non - trivial recombinator exists , , with corresponding recombination probability .consequently , the scr equation simplifies to where is a -dimensional probability vector .the solution is given by with .this formula is easily verified by induction .thus , in analogy with the scr dynamics in continuous time , the solution is available in closed form , and the coefficient functions allow an analogous probabilistic interpretation .furthermore , it is easily seen that the recombinators and commute with and therefore with for all . for two sites, the analogue of corollary [ coro1 ] thus holds in discrete time . as a consequence ,the lde operators from decouple and linearise the system . at the level of the component ldes ,this is common knowledge in theoretical population genetics ; compare ( * ? ? ?* chap.3 ) .similarly , the recombination equation for three sites can be solved explicitly as well .an elementary calculation ( applying the iteration and comparing coefficients ) shows that the corresponding coefficient functions follow the linear recursion with solution these coefficient functions have the same probabilistic interpretation as the corresponding , , in the continuous - time model , so that is the probability that the links that have been involved in recombination until time are exactly those of the set .but there is a crucial difference .recall that , in continuous time , single crossovers imply _ independence _ of links , which is expressed in the product structure of the coefficient functions ( see ) .this independence is lost in discrete time , where a crossover event at one link forbids any other cut at other links in the same time step .consequently , already for three sites , the coefficients of the discrete - time dynamics fail to show the product structure used in theorem [ thm : linear ] . buteven though corollary [ coro1 ] , concerning the forward flow of , is a consequence of theorem [ thm : linear ] , which , in turn , is based upon the product structure of the coefficients , a short calculation reveals that still holds for the discrete - time model with three sites for all . as a consequence , just as in the case of two sites , the linearise _ and _ decouple the dynamics , which is well - known to the experts , see for more . to summarise : despite the loss of independence of links , an explicit solution of the discrete - time recombination dynamics is still available , and a linearisation and diagonalisation of the dynamics can be achieved with the methods developed for the continuous - time model , that is , a transformation to a solvable system via the .however , things will become more complex if we go to four sites and beyond .in particular , there is no equivalent to corollary [ coro1 ] , i.e. 
, in general , the recombinators _ do not _ commute with , and we have to search for a new transformation that replaces , as will be explained next .the complication with four sites originates from the fact that , so that the property described by corollary [ coro1 ] for continuous time is lost here .consequently , the fail the desired properties .in particular , one finds so that an explicit solution of the model can not be obtained as before .this raises the question why four sites are more difficult than three sites , even though independence of links has already been lost with three sites . to answer this , we look at the time evolution of the coefficient functions , . for this purpose ,let us return to the general model with an arbitrary number of sites .we now consider an arbitrary ( but finite ) set with the corresponding link set .for each , we use the following abbreviations : furthermore , we set .we then obtain [ thm : adevelop ] for all and , the coefficient functions evolve according to with initial condition .geiringer already explained in words how to derive this general recursion , and illustrated it with the four - site example ; we give a proof via our operator formalism .using , the recombination equation for reads where each product term in the first sum can be calculated as where we use the linearity of the projectors in the first step , and eqs . and from proposition [ recoprop ] in the second (more precisely , we use the left parts of eqs . and , reading them both forward and backward ) .insert this into the expression for and rearrange the sums for a comparison of coefficients of with .comparison of coefficients is justified by the observation that , for generic and generic site spaces , the vectors with are the extremal vectors of the closed simplex .they are the vectors that ( generically ) can not be expressed as non - trivial convex combination within the simplex , and hence the vertices of the latter ( in cases with degeneracies , one reduces the simplex in the obvious way ) . if , we only have as coefficient for , we get additional contributions for each , namely , from those for which and , while and can be any subset of and , respectively . hence , the term belonging to reads and the assertion follows . the have the same probabilistic interpretation as the from the continuous - time model , and the above iteration can be understood intuitively as well : a type resulting from recombination at link is composed of two segments and .these segments themselves may have been pieced together in previous recombination events already , and the iteration explains the possible cuts these segments may carry along .the first term in the product stands for the type delivering the leading segment ( which may bring along arbitrary cuts in the trailing segment ) , the second for the type delivering the trailing one ( here any leading segment is allowed ) .the term covers the case of no recombination .note that the above iteration is generally _ nonlinear _ , where the products stem from the fact that types recombine independently .this nonlinearity is the reason that an explicit solution can not be given as before .a notable exception is provided by recombination events that occur at links where one of the involved segments can not have been affected by previous crossovers , namely the links and . in this case ,at least one of the factors in eq . 
becomes ( since , obviously , for and for ) and the resulting linear and triangular recursion can be solved .the coefficients for the corresponding link sets can be inferred directly ( proof via simple induction ) as this explains the availability of an explicit solution for the model with up to three sites , where we do not have links other than and/or , so that all corresponding coefficients can be determined explicitly . indeed ,one recovers with and .so far , we have observed that the product structure of the coefficient functions , known from continuous time , is lost in discrete time from three sites onwards ; this reflects the dependence of links .in contrast , the linearity of the iteration is only lost from four sites onwards .the latter can be understood further by comparison of with the differential equations for the coefficients of the continuous - time model .these read : that is , they are linear , with solution .note that this linear dynamics emerges from a seemingly nonlinear one , namely the analogue of , however , due to the product structure of the solution , the product term in the second sum , when inserting , reduces to a single term , which turns into .what happens here is the following . from four sites onwards (namely , beginning with and a crossover at , and both in discrete and continuous time ) , it happens that leading and trailing segments meet that both possess at least one link that may possibly have seen a previous cut .when a crossover at takes place , the new joint distribution of cuts before and after is formed as the product measure of the marginal distributions of cuts in the leading and trailing segments ( cf . and ) akin to the formation of product measures of marginal types by . in continuous time , the links are all independent , hence the new combination leaves the joint distribution of cuts unchanged .therefore , a set of affected links ( before and after ) is simply augmented by if is a ` fresh ' cut ; this results in the linearity of . in discrete time, however , the dependence between the links , in particular between those in the leading and trailing segment , means that the formation of the product measure changes the joint distribution of affected links , in addition to the new cut at ; thus remains nonlinear .since we aim at an explicit solution of the discrete - time recombination model , we need to find a way to overcome the obstacles of nonlinearity .inspired by the results of the continuous - time model , we now search for a transformation that decouples and linearises the dynamics . to this end , we first investigate the behaviour of the and in the discrete - time model , since a deeper understanding of their actions will help us find a new transformation .we are still concerned with the lde operators from the continuous - time model , because of their favourable structure and the existence of the inverse transformation ( mbius inversion ) . moreover , as will become clear later , some of them still have the desired features and can be adopted directly for the discrete - time model .first , we need further notation .two links are called _ adjacent _ if .we say that a subset is _ contiguous _ if for any two links with , also all links between and belong to ( this includes the case ) .a non - empty contiguous set of links is written as . 
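for small systems, the coefficient functions can also be read off numerically: for generic initial data the vectors obtained by applying the composite recombinators to the initial distribution are linearly independent, so the iterate can be expanded in this basis by least squares. the following sketch (python; four binary sites and hypothetical crossover probabilities) does exactly this; the generic-independence assumption is used but not proved here.

```python
import numpy as np
from itertools import chain, combinations

def recombinator(p, cut):
    n = p.ndim
    return np.multiply.outer(p.sum(axis=tuple(range(cut, n))),
                             p.sum(axis=tuple(range(0, cut))))

def composite(p, cuts):
    for c in cuts:
        p = recombinator(p, c)
    return p

def Phi(p, rho):
    new = (1.0 - sum(rho.values())) * p
    for cut, r in rho.items():
        new = new + r * recombinator(p, cut)
    return new

n, t_max = 4, 5
rng = np.random.default_rng(2)
p0 = rng.random((2,) * n); p0 /= p0.sum()
rho = {c: 0.08 for c in range(1, n)}

links = list(rho)
subsets = list(chain.from_iterable(combinations(links, k)
                                   for k in range(len(links) + 1)))
basis = np.stack([composite(p0, G).ravel() for G in subsets], axis=1)

p = p0
for t in range(1, t_max + 1):
    p = Phi(p, rho)
    coeff, *_ = np.linalg.lstsq(basis, p.ravel(), rcond=None)
    print(f"t={t}: coefficients sum to {coeff.sum():.6f}, min = {coeff.min():.6f}")
```

the recovered coefficients are non-negative and sum to one, in line with geiringer's representation, but — unlike in continuous time — they no longer factorise over the links.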
whereas , according to theorem [ thm : linear ] , all recombinators act linearly on the solution of the continuous - time recombination equation , this does not hold for the solution of the discrete - time model in general , though the following property still holds .[ lemma : linear ] let be a family of non - negative numbers with .for an arbitrary and for all with contiguous , one has when , the claim is clear , because and itself is contiguous .otherwise , we have with and for some ( this includes the case via ) .since we work on , we have from proposition [ recoprop ] . with the projection onto a single site , we obtain since is a linear operator and by proposition [ recoprop ] . for the contiguous set and , we obtain , with the help of and a repeated application of proposition [ recoprop ] , an analogous calculation reveals for contiguous .this proves the assertion . intuitive content of lemma [ lemma : linear ] falls into place with the explanation of theorem [ thm : adevelop ] .the linearity of the particular recombinators of lemma [ lemma : linear ] is due to the fact that produces only one segment , namely , that might be affected by previous recombination events while all other segments consist of only one site and thus can not bring along cuts from ` the past ' .in this section , we show a certain product structure of the recombinators and the lde operators .this will turn out as the key for constructing an appropriate transformation . recall that a crossover at link partitions into and . in general ,recombination at the links belonging to , , induces the following _ ordered _partition of ( see fig . [fig : subsystems ] ) : note that the partition is ordered due to the restriction to single crossovers . in connection with this , we have the sets of links that correspond to the respective parts of the partition ( fig . [fig : subsystems ] ) .namely , for , with specifies the links belonging to the respective parts of : the links associated with , , are exactly those of ( and vice versa ). sites ( i.e. , , ) cut at the links ( broken lines ) .the resulting subsystems are and with , , and as well as , , and ( the upper index is suppressed here for clarity ) . ] with this definition , is possible for each and will be included ( possibly multiply ) in .furthermore , , so that . the upper index will be suppressed in cases where the corresponding set of links is obvious .clearly , is not a partition of , whereas is a partition of . this way , recombination at the links in produces several ` subsystems ' ( characterised through the sites and the corresponding links , ) with respect to the ` full system ' described through the sites and the links .we demonstrate below that it is sufficient to consider these subsystems separately , a property that reduces the problem of dealing with the recombination dynamics .note first that repeated application of and leads to where is our usual recombinator acting on , and denotes the respective recombinator on , which acts on the subsystem specified through the sites and cuts the links ( and analogously for ) .likewise , recombinators , , acting on , may be defined for all subsystems , , in the obvious way . for consistency , we define . from now on ,the upper index specifies the corresponding system the ( and , likewise , the ) are acting on .it will be suppressed in cases where the system is obvious .we now explain the inherent product structure of the recombinators : [ thm : product ] let . 
for each and ,one has the identity for , proposition [ recoprop ] implies : where the last step follows from . this proposition carries over to the lde operators : on , the lde operators satisfy where and now describe the operators acting on the simplices and , respectively .let . using the product structure from proposition [ thm : product ] and splitting the sum into two disjoint parts , one obtains which establishes the claim . using this argument iteratively on the respective segments , one easily obtains where the upper index specifies the corresponding subsystems associated with , compare .hence , the effect of on the full system is given by that of on the respective subsystems corresponding to .our goal is now to study the effect of the and on , the right - hand side of the recombination equation .this will show us in more detail when and why the lde operators from the continuous - time model are not sufficient for solving the discrete - time model and , at the same time , will direct us to the new transformation .if denotes the right - hand side of the recombination equation on the full simplex , then , for any contiguous , the right - hand side of the recombination equation on the subsimplex will be denoted with .again , we suppress the upper index when the simplex is obvious .for the right - hand side of the recombination equation , one finds for every and . since , we obtain with the help of : analogously , one obtains , and the assertion follows . more generally , this theorem implies inductively that finally , for the interaction between the and , we have the following result .[ thm : prodldeops ] for the lde operators and all , one has with according to . using and ,one calculates which establishes the formula . this result is of particular significance since it shows that , to determine the effect of the on , it is sufficient to know the action of the on the subsystems that correspond to . hence , we now need to determine . it will turn out that this relies crucially on the commutators of with , which will be the subject of the next section .the more algebraic approach of , which was later generalised by popa , suggests to further analyse the problem in terms of commuting versus non - commuting quantities . for ,the commutator is defined as : = r^{}_g \circ \varphi - \varphi \circ r^{}_g ] is not true in general .we are interested in the commutators because as we will see in a moment they lead us to the evaluation of , and this in turn gives ( see proposition [ thm : prodldeops ] ) .[ prop : tempty ] let as before . on ,one has .\ ] ] expressing the left - hand side as \ , , \ ] ] and using , one calculates & \,=\ , & \eta \ , t^ { } _ { \varnothing } + \sum _ { \alpha \in l } \sum _ { \substack { g \subseteq l\\\alpha\notin g } } \left ( ( -1)^ { \left| g \right| } \rho^ { } _ { \alpha } r^ { } _ { \alpha } r^{}_g + ( -1)^{\left|g \cup \left\ { \alpha \right\ } \right| } \rho^ { } _ { \alpha } r^ { } _ { g \cup \left\ { \alpha \right\ } } \right ) \\ & \,=\ , & \eta \ , t^ { } _ { \varnothing } + \sum_{\alpha\in l } \bigl ( \sum _ { \substack{g \subseteq l\\\alpha\notin g } } ( -1)^ { \left| g \right| } \rho^ { } _ { \alpha } ( r^ { } _ { \alpha } r^{}_g - r^ { } _ { g \cup \left\{\alpha\right\ } } ) \bigr ) \,=\ , \eta t^ { } _ { \varnothing } \ , , \end{aligned}\ ] ] which shows the claim . proposition [ prop : tempty ] shows that only yields a diagonal component if _ all _ recombinators commute with . 
we now need to determine the commutator ] whenever .[ thm : commute ] = 0 ] . let us note in passing that the converse direction of corollary [ thm : commute ] may fail if the site spaces are sufficiently trivial .nevertheless , in the generic case , = 0 ] in terms of the ( which are related to the via ) .[ thm : tvarnotcomplete ] on , the operators and satisfy for all .the coefficients , , are given by and , for , by for , the coefficients are recursively determined by let us first prove the case . according to proposition [ prop : tempty ] , we have $ ] , where by definition .let us thus evaluate the last term . in the first step ,we insert the commutator from theorem [ commutator ] ; we then use definition [ defin : newop ] and change the order of summation to arrive at \,&=&\ , \sum _ { g^ { ' } \subseteq l } ( -1)^ { \vert g^ { ' } \vert } \sum_{\substack { h \subseteq l \setminus g^ { ' } \\ \vert h \vert \geq 2 } } ( -1)^{\vert h \vert } \operatorname{sep}(g^{'},h ) { \hspace{0.5pt}}\rho^{}_h { \hspace{0.5pt}}\widetilde{t}^ { } _ { g^ { ' } , g^ { ' } \dot{\cup } h } \nonumber \\ & = & \sum _ { g^ { ' } \subseteq l } ( -1)^ { \vert g^ { ' } \vert } \sum_{\substack { h \subseteq l \setminus g^ { ' } \\ \vert h \vert \geq 2 } } ( -1)^{\vert h \vert } \operatorname{sep}(g^{'},h ) { \hspace{0.5pt}}\rho^{}_h \sum_{g^ { ' } \subseteq k \subseteq l\setminus h } t^{}_k \nonumber \\ & = & \sum_{k\subseteq l } t_k^{}\sum_{\substack { h \subseteq l\setminus k \\ \vert h \vert \geq 2 } } ( -1)^{\vert h \vert } \rho^{}_h \sum _ { g^{'}\subseteq k } ( -1)^{\vert g^{'}\vert}\operatorname{sep}(g^{'},h ) \ , , \end{aligned}\ ] ] which does not contain any term with .we can now compare coefficients for .note first that , by , we only need to consider sets , that is , . in this case , and .this is true since implies that the smallest element in is smaller ( larger ) than the smallest element in , thus ( and vice versa for ) .taking this together with lemma [ lemmamoresepfunc ] , the coefficient of in turns into note that , in the last step , the restriction on may be dropped since it is already implied by the factors involving the -functions .this proves the claim for . for the case , we follow proposition [ thm : prodldeops ] and write , for , applying the above result for to each factor , and using the product structure of proposition [ thm : prodldeops ] backwards , establishes the claim . coefficients with can be expressed explicitly as let us consider those whose contribution to the sum in is not annihilated by the separation function or the -functions . for to hold , each must belong to a different .furthermore , must contain one element each from and ( and , respectively ) to keep the factors involving the -functions from vanishing .thus , the sum in may be factorised as claimed. in particular , if .taking this together with , one obtains for whenever , and hence , in these cases , is already a diagonal component in line with the observation in section 4 .furthermore , theorem [ thm : tvarnotcomplete ] and entail that whenever theorem [ thm : tvarnotcomplete ] reveals the linear structure inherent in the action of on .in fact , the structure is even triangular ( with respect to the partial ordering ) since is a linear combination of the , .thus , diagonalisation will boil down to recursive elimination . 
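the recursive elimination announced here is, in essence, the familiar construction of eigenvectors of a triangular matrix with distinct diagonal entries. as a generic illustration (python; a random upper-triangular matrix with a total order standing in for the subset ordering of the link sets, so this is not the actual coefficient array of the recombination model), the eigenvectors can be filled in entry by entry:

```python
import numpy as np

# recursive elimination for a triangular system: if C is upper triangular with
# distinct diagonal entries, the right eigenvector for the k-th eigenvalue is
# supported on the first k+1 coordinates and is obtained by back-substitution.

rng = np.random.default_rng(3)
m = 6
C = np.triu(rng.random((m, m)))
C[np.diag_indices(m)] = np.linspace(0.3, 0.9, m)   # distinct diagonal entries

V = np.zeros((m, m))
for k in range(m):
    V[k, k] = 1.0
    for i in range(k - 1, -1, -1):
        V[i, k] = C[i, i + 1:k + 1] @ V[i + 1:k + 1, k] / (C[k, k] - C[i, i])

print("max |C V - V diag(lambda)| =",
      np.abs(C @ V - V @ np.diag(np.diag(C))).max())
```

the recursion for the coefficients derived in the next section has the same flavour, with the division by differences of diagonal entries corresponding to the division by differences of eigenvalues.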
as a preparation , we make the following observation .[ corohelp ] if , one has the relation for all .when , the intersection in , with , can never be empty , so that follows .motivated by the triangular structure of , we make the ansatz to define new operators , , as the following linear combination of the well - known : where the coefficients are to be determined in such a way that they transform the recombination equation into a decoupled diagonal system , more precisely so that with eigenvalues that are still unknown as well .an example for this transformation can be found in appendix a. note first that , with the help of , eqs . and may be rewritten as obviously , there is some freedom in the choice of the ; we set for all ( and we will see shortly that this is consistent ) .has the structure of an eigenvalue problem of a triangular matrix with coefficients , where the role of the unit vectors is taken by the , and the , , take the roles of the components of the eigenvector corresponding to ( note that , by considering for only , we have already exploited the triangular structure ) .recall next that the eigenvalues of a triangular matrix are given by its diagonal entries , which are by theorem [ thm : tvarnotcomplete ] .in particular , .the describe the probability that there is no further recombination between the respective sites of the subsystems corresponding to ; they have already been identified by bennett and dawson .[ lemma : zulambdas ] for all with , one has .let .then , for , with for an arbitrary , we see from that and hence obtain because all are positive , as are all three terms in parentheses of the fraction , and by assumption .finally , the argument also works for , provided . since for all , the claim trivially also holds for .the assertion then follows inductively for any . the coefficients can now be calculated recursively as follows .[ sumtheorem ] the coefficients of are determined by and for .the coefficients of the inverse transformation of , are determined by for together with . considering with , comparing coefficients for , , and observing , one obtains and the recursion for follows .it is always well - defined for all , since by lemma [ lemma : zulambdas ] .the recursion for the coefficients of the inverse transformation follows directly from which enforces , as the are distinct . we now identify those that already give diagonal components of the discrete - time system : [ thm : direct ] for all that satisfy for all , one has for . in this case , we have for all , hence , from which the assertion follows via theorem [ thm : tvarnotcomplete ] . note that for all simply implies that each subsystem consists of at most three sites , hence all subsystems can be reduced to the simple cases considered in section [ scrdiscrete ] .then , for such , for all . with the help of this transformation, we can finally specify the solution of the recombination equation in terms of the initial condition . to this end, we first use the transformation from the recombinators to the operators , and then relation to arrive at the operators , which finally diagonalise the system according to .finally , we use the appropriate inversions to return to the recombinators : the coefficient functions can now be extracted as follows .[ thm : aexplicit ] the coefficient functions of the solution of the recombination equation in discrete time may be expressed as for all . here , and are the coefficients of theorem [ sumtheorem ] . 
as a preparation , we make the following observation .[ corohelp ] if , one has the relation for all .when , the intersection in , with , can never be empty , so that follows .motivated by the triangular structure of , we make the ansatz to define new operators , , as the following linear combination of the well - known : where the coefficients are to be determined in such a way that they transform the recombination equation into a decoupled diagonal system , more precisely so that with eigenvalues that are still unknown as well .an example of this transformation can be found in appendix a. note first that , with the help of , eqs . and may be rewritten as obviously , there is some freedom in the choice of the ; we set for all ( and we will see shortly that this is consistent ) .has the structure of an eigenvalue problem of a triangular matrix with coefficients , where the role of the unit vectors is taken by the , and the , , take the roles of the components of the eigenvector corresponding to ( note that , by considering for only , we have already exploited the triangular structure ) .recall next that the eigenvalues of a triangular matrix are given by its diagonal entries , which are by theorem [ thm : tvarnotcomplete ] .in particular , .the describe the probability that there is no further recombination between the respective sites of the subsystems corresponding to ; they have already been identified by bennett and dawson .[ lemma : zulambdas ] for all with , one has .let .then , for , with for an arbitrary , we see from that and hence obtain because all are positive , as are all three terms in parentheses of the fraction , and by assumption .finally , the argument also works for , provided . since for all , the claim trivially also holds for .the assertion then follows inductively for any . the coefficients can now be calculated recursively as follows .[ sumtheorem ] the coefficients of are determined by and for .the coefficients of the inverse transformation of , are determined by for together with . considering with , comparing coefficients for , , and observing , one obtains and the recursion for follows .it is always well - defined for all , since by lemma [ lemma : zulambdas ] .the recursion for the coefficients of the inverse transformation follows directly from which enforces , as the are distinct . we now identify those that already give diagonal components of the discrete - time system : [ thm : direct ] for all that satisfy for all , one has for . in this case , we have for all , hence , from which the assertion follows via theorem [ thm : tvarnotcomplete ] . note that for all simply implies that each subsystem consists of at most three sites , hence all subsystems can be reduced to the simple cases considered in section [ scrdiscrete ] .then , for such , for all . with the help of this transformation , we can finally specify the solution of the recombination equation in terms of the initial condition . to this end , we first use the transformation from the recombinators to the operators , and then relation to arrive at the operators , which finally diagonalise the system according to .finally , we use the appropriate inversions to return to the recombinators : the coefficient functions can now be extracted as follows .[ thm : aexplicit ] the coefficient functions of the solution of the recombination equation in discrete time may be expressed as for all . here , and are the coefficients of theorem [ sumtheorem ] .
to derive the asymptotic behaviour for large iteration numbers , we need the following property of the coefficients . [ cforl ] the coefficients and satisfy for arbitrary .we have for all by theorem [ sumtheorem ] .the claim for now follows from the recursion together with corollary [ corohelp ] .inserting this into recursion establishes the relation for the . as an example , the path to a solution via the above chain of transformations for the model with five sites will be presented in appendix a. finally , let us consider what happens in the limit as .the solution of the recombination equation _ _ with initial condition satisfies with exponentially fast convergence in the norm topology .when expressing in terms of according to , we first observe , because and by lemma [ cforl ] . since , we obtain the following estimate in the variation norm which establishes the claim since for . as was to be expected , the solution of the recombination equation converges towards the independent combination of the alleles , that is towards _in this paper , we have investigated the dynamics of an ` infinite ' population that evolves due to recombination alone . to this end, we assumed discrete ( non - overlapping ) generations , and restricted ourselves to the case of single crossovers .previous results had shown that the corresponding single - crossover dynamics in continuous time admits a closed solution .this astonishing result is concordant with a ` hidden ' linearity in the system that is due to independence of links .the fact that crossovers at different links occur independently manifests itself in the product structure of the coefficient functions of the solution ensuing from the linear action of the nonlinear recombination operators along the solution of the recombination equation .additionally , in , a certain set of linkage disequilibria was found that linearise and diagonalise the dynamics .since the overwhelming part of the literature deals with discrete - time models , our aim was to find out whether , and to what extent , these continuous - time results carry over to single - crossover dynamics in discrete time .we could show that the discrete - time dynamics is far more complex than the continuous - time one , and , as a consequence , a closed solution can not be given .the main reason for these difficulties lies in the fact that the key feature of the continuous - time model , the independence of links , does not carry over to discrete time .this is due to interference : the occurrence of a recombination event in the discrete - time model forbids any further crossovers in the same generation . 
in connection with this ,the recombinators do not , in general , act linearly on the right - hand side of the recombination equation .likewise , the coefficient functions of the solution follow a nonlinear iteration that can not be solved explicitly .while geiringer developed a skilful procedure for the generation - wise evaluation of these coefficients , we constructed a method that allows for an _ explicit _ formula valid for all times , once the coefficients of the transformation have been determined recursively for a given system .as in previous approaches , this is achieved by a transformation of the nonlinear , coupled system of equations to a linear diagonal one .this was done before by bennett and dawson for the more general recombination equation ( without restriction to single crossovers ) , and they presented an appropriate transformation that includes parameters that must be determined recursively .unfortunately , the corresponding derivations are rather technical and fail to reveal the underlying mathematical structure .it was our aim to improve on this and add some structural insight . unlike the previous approaches , we proceeded in two steps : first linearisation followed by diagonalisation .more precisely , it turns out that the lde operators , which both linearise and diagonalise the continuous - time system , still _ linearise _ the discrete - time dynamics , but fail to _ diagonalise _ it for four or more loci .however , the resulting linear system may be diagonalised in a second step .this relies on linear combinations of the , with coefficients derived in a recursive manner .as it must be , the transformation agrees with the one of dawson when translated into his framework .( note that our are coefficients of , whereas his coefficients belong to components of .note also that scr does _ not _ belong to the singular cases he excludes ) .it remains an interesting open problem how much of the above findings can be transferred to the general recombination model ( i.e. without the restriction to single crossovers ) , where one loses the simplifying structure of ordered partitions .it is our pleasure to thank th .hustedt for critically reading the manuscript and k. schneider for valuable suggestions to further improve it .this work was supported by dfg ( research training group bioinformatics , and dutch - german bilateral research group on mathematics of random spatial models in physics and biology ) .to illustrate the construction , let us spell out the example of five sites .we have and , the corresponding recombination probabilities , , , and a given initial population .aiming at determining the coefficient functions for all , we can immediately write down , , and , see . if we wanted to determine the remaining coefficient functions for a given time , they could be calculated using the method of geiringer ( i.e. theorem [ thm : adevelop ] ) .but since we aim at a closed solution for _ all _ , we use the procedure developed above . to determine the coefficients of theorem [ thm : aexplicit ] , we have to calculate the corresponding and .theorem [ sumtheorem ] and [ thm : direct ] imply , for all , for all , as well as and .hence , in these cases , the only non - vanishing coefficients are for all .it remains to determine , and . 
1 ._ constructing _ :+ the recursion starts with .following , for all except for , and thus the only non - zero , , is where we have used the recursion and , .so , for the transformation we obtain so that .analogously , 2 ._ constructing _ : + by , the only non - vanishing coefficients are , , , and .they are determined by the recursion and lead to the following transformation : now that we know the , the coefficients are calculated via . finally , the remaining coefficient functions follow from theorem [ thm : aexplicit ] : and where the are given by .jennings , h.s .: the numerical results of diverse systems of breeding , with respect to two pairs of characters , linked or independent , with special relation to the effects of linkage . _ genetics _ * 2 * , 97154 ( 1917 ) .popa , e. : some remarks on a nonlinear semigroup acting on positive measures . in : carja , o. , vrabie ,( eds . ) , applied analysis and differential equations , world scientific , singapore , 308319 ( 2007 ) .
|
modelling the process of recombination leads to a large coupled nonlinear dynamical system. here, we consider a particular case of recombination in _discrete_ time, allowing only for _single crossovers_. while the analogous dynamics in _continuous_ time admits a closed solution, this no longer works for discrete time. a more general model (i.e. without the restriction to single crossovers) has been studied before and was solved algorithmically by means of haldane linearisation. using the special formalism introduced in , we obtain further insight into the single-crossover dynamics and the particular difficulties that arise in discrete time. we then transform the equations to a solvable system in a two-step procedure: linearisation followed by diagonalisation. still, the coefficients of the second step must be determined in a recursive manner, but once this is done for a given system, they allow for an explicit solution valid for all times.
|
recently , there has been much interest in understanding the behavior of multiple access channels under constraints . considerable amount of work has been carried out on the problem of how users can obtain an acceptable throughput by choosing rates independently . motivated by the interest in studying a large population of users playing the game over time , evolutionary game theory was found to be an appropriate framework for communication networks .it has been applied to problems such as power control in wireless networks and mobile interference control . in , an additive white gaussian noise ( awgn )multiple access channel problem was modeled as a noncooperative game with pairwise interactions , in which users were modeled as rational entities whose only interest was to maximize their own communication rates .the authors obtained the nash equilibria of the two - user game and introduced a two - player evolutionary game model with _ pairwise interactions _ based on replicator dynamics .however , the case when interactions are not pairwise arises frequently in communication networks , such the code division multiple access ( cdma ) or the orthogonal frequency - division multiple access ( ofdma ) in worldwide interoperability for microwave access ( wimax ) environment . in this work ,we extend the study of to wireless communication systems with an arbitrary number of users corresponding to each receiver .we formulate a static non - cooperative game with users subject to rate capacity constraints and extend the constrained game to a dynamic evolutionary game with a large number of users whose strategies evolve over time .different from evolutionary games with discrete and finite number of actions , our model is based on a class of continuous games , known as _ continuous - trait games_. evolutionary games with continuum action spaces can be seen in a wide variety of applications in evolutionary ecology , such as evolution of phenology , germination , nutrient foraging in plants , and predator - prey foraging . the main contributions of this work can be summarized as follows .we show that the static continuous kernel rate allocation game with coupled rate constraints has a convex set of pure nash equilibria , coinciding with the maximal face of the polyhedral capacity region .all the pure equilibria are pareto optimal and are also strong equilibria , resilient to simultaneous deviation by coalition of any size .we show that the pure nash equilibria in the rate allocation problem are 100% efficient in terms of price of anarchy ( poa ) and constrained strong price of anarchy ( cspoa ) .we study the stability of strong equilibria , normalized equilibria , and evolutionary stable strategies ( ess ) using evolutionary game dynamics such as brown - von neumann - nash dynamics , generalized smith dynamics , and replicator dynamics .the rest of the paper is structured as follows .we present in the next section the evolutionary game model of rate allocation in additive white gaussian multiple access wireless networks , and analyze its equilibria and pareto optimality . in section [ secrobust ] ,we present strong equilibria and price of anarchy of the game . 
in section [ secselection ] ,we discuss how to select one specific equilibrium such as normalized equilibrium and evolutionary stable strategies .section [ secdynamics ] studies the stability of equilibria and evolution of strategies using game dynamics .section [ secconclud ] concludes the paper .we consider a communication system consisting of several receivers and several senders ( see figure [ figfuncttt3 ] ) . at each time , there are many local interactions ( typically , at each receiver there is a local interaction ) at the same time. each local interaction will correspond to a non - cooperative one - shot game with common constraints .the opponents do not necessarily stay the same from a given time slot to another time slot .users revise their rates in view of their payoffs and the coupled constraints ( for example by using an evolutionary process , a learning process or a trial - and - error updating process ) .the game evolves in time .users are interested in maximizing a fitness function based on their own communication rates at each time , and they are aware of the fact that the other users have the same goal .the coupled power and rate constraints are also common knowledge .users have to choose independently their own coding rates at the beginning of the communication , where the rates selected by a user may be either deterministic , or chosen from some distribution .if the rate profile arrived at as a result of these independent decisions lies in the capacity region , users will communicate at that operating point .otherwise , either the receiver is unable to decode any signal and the observed rates are zero , or only one of the signals can be decoded .the latter case occurs when all the other users are transmitting at or below a safe rate . with these assumptions, we can define a constrained non - cooperative game .the set of allowed strategies for user is the set of all probability distributions over and the payoff is a function of the rates .in addition , the rational action ( rates ) sets are restricted to lie in the capacity regions ( the payoff is zero if the constraint is violated ) . in order to study the interactions between the selfish or partially cooperative users and their stationary rates in the long run , we propose to modelthe rate allocation in gaussian multiple access channels as an evolutionary game with a continuous action space and coupled constraints .the development of evolutionary game theory is a major contribution of biology to competitive decision making and the evolution of cooperation .the key concepts of evolutionary game theory are ( i ) _ evolutionary stable strategies _ , which is a refinement of equilibria , and ( ii ) _ evolutionary game dynamics _ such as replicator dynamics , which describes the evolution of strategies or frequencies of use of strategies in time , .the single population evolutionary rate allocation game is described as follows : there is one population of senders ( users ) and several receivers .the number of senders is large . at each time , there are many one - shot games called _ local interactions_. each sender of the population chooses from the same set of strategies which is a non - empty , convex and compact subset of without loss of generality , we can suppose that user chooses its rate in the interval ] }\times g(a)\nu_{m-1}(\mathcal{d}_a)\ ] ] where if the rate does not satisfy the capacity constraints , then the payoff is hence the _ rational _ rate for user is lower than fix a rate . 
] with full support .suppose is continuous on . ] and is a positive and strictly increasing function for each fixed .since the game is subject to coupled constraints , the action set is coupled and dependent on other players actions . given the strategy profile of other players , the constrained action set is given by ,\ ( \alpha^i,\alpha^{-i})\in \mathcal{c } \}\ ] ] we then have an asymmetric game .the minimum rate that the user can guarantee in the feasible regions is which is different than each user maximizes over the coupled constraint set .owing to the monotonicity of the function and the inequalities that define the capacity region , we obtain the following lemma .let be the best reply to the strategy , defined by is a non - empty single - valued correspondence ( i.e a standard function ) , and is given by [ lembr1 ] where .the set of nash equilibria is all these equilibria are optimal in pareto sense .let be a feasible solution , i.e. , if then at least one of the users can improve its rate ( hence its payoff ) to reach one of the faces of the capacity region .we now check the strategy profile on the face if then from the lemma [ lembr1 ] , hence , is a strict equilibrium . moreover , this strategy is pareto optimal because the rate of each user is maximized under the capacity constraint .these strategies are social welfare optimal if the total utility is maximized subject to constraints .note that the set of pure nash equilibria is a convex subset of the capacity region .the pure equilibria are global optima if the function is the identity function .in this paper , we have studied an evolutionary multiple access channel game with a continuum action space and coupled rate constraints .we showed that the game has a continuum of strong equilibria which are 100% efficient in the rate optimization problem .we proposed the constrained brown - von neumann - nash dynamics , smith dynamics , and the replicator dynamics to study the stability of equilibria in the long run .an interesting question which we leave for future work is whether similar equilibria structure exist in the case of multiple access games with non - convex capacity regions .another extension would be to the hybrid model in which users can select among several receivers and control the total rate , which is currently under study .99 altman , e. , el - azouzi , r. , hayel , y. , and tembine , h. , evolutionary power control games in wireless networks , " _ networking 2008 ad hoc and sensor networks , wireless networks , next generation internet _ , springer berlin / heidelberg , pp .930 - 942 , 2008 .andelman , n. , feldman , m. , and mansour , y. , strong price of anarchy , " _ soda _ , 2007 .anshelevich , e. , dasgupta , a. , kleinberg , j. , tardos , e. , wexler , t. and roughgarden , t. , the price of stability for network design with fair cost allocation , " in _ proc .focs _ , pp .59 - 73 , 2004 .aumann , r. , acceptable points in general cooperative n - person games " , in _ contributions to the theory of games _ , volume 4 , 1959 .gajic , v. and rimoldi , b. , game theoretic considerations for the gaussian multiple access channel , " in _ proc .ieee isit _ , 2008 .goodman , j. c. , a note on existence and uniqueness of equilibrium points for concave n - person games , " _ econometrica _ , 48(1),1980 ,hofbauer , j. and sigmund , k .. , _ evolutionary games and population dynamics _ , cambridge university press , 1998 .hofbauer , j. , oechssler , j. , and riedel , f. 
, brown - von neumann - nash dynamics : the continuous strategy case , " _ games and econ ._ , 65(2):406 - 429 , 2008 .ponstein , j. , existence of equilibrium points in non - product spaces , " _ siam j. appl ._ , 14(1):181 - 190 , 1966 .shaiju , a. j. and bernhard , p. , evolutionarily robust strategies : two nontrivial examples and a theorem , " _ proc . of isdg _ , 2006 .smith , j.m . and price , g.m . , the logic of animal conflict , " _ nature _ , 246:15 - 18 , 1973 .rosen , j. b. , existence and uniqueness of equilibrium points for concave n - person games , " _ econometrica _ , 33:520 - 534 , 1965 .sandholm , w. h. , _ population games and evolutionary dynamics _ , mit press , 2009 ( to appear ) .takashi , u. , correlated equilibrium and concave games , " _ int .journal of game theory _ , 37(1):1 - 13 , 2008 .taylor , p.d . and jonker , l. , evolutionarily stable strategies and game dynamics , " _ math . bioscience _ , 40:145 - 156 , 1978 .tembine , h. , altman , e. , el - azouzi , r. and hayel , y. , evolutionary games with random number of interacting players applied to access control " , _ proc . of ieee/ acm wiopt _ , march 2008 .tembine h. , altman e. and el - azouzi r. , delayed evolutionary game dynamics applied to the medium access control " , _ in proc .ieee mass _ , 2007 .tembine h. , altman e. , el - azouzi r. and hayel y. multiple access game in ad - hoc networks " , _ in proc .gamecomm _ , 2007 .vincent , t.l . and brown , j.s . , _ evolutionary game theory , natural selection , and darwinian dynamics _ , cambridge univ .press , 2005 .wei y. and cioffi , j.m . competitive equilibrium in the gaussian interference channel , " _ ieee internat .information theory ( isit ) _ , 2000 .zhu , q. , a lagrangian approach to constrained potential games , part i : theory and example , " _ proc ., cancun , mexico , 2008 .
|
in this paper, we formulate an evolutionary multiple access channel game with continuous-variable actions and coupled rate constraints. we characterize nash equilibria of the game and show that the pure nash equilibria are pareto optimal and also resilient to deviations by coalitions of any size, i.e., they are strong equilibria. we use the concepts of price of anarchy and strong price of anarchy to study the performance of the system. the paper also addresses how to select one specific equilibrium solution using the concepts of normalized equilibrium and evolutionary stable strategies. we examine the long-run behavior of these strategies under several classes of evolutionary game dynamics such as brown-von neumann-nash dynamics, and replicator dynamics.
|
the matrix factorization is well known in the blind source separation ( bss ) context : the rows of and represent unknown source signals and their observed linear mixtures , respectively .the task of the bss problem is to estimate the source matrix and the mixing matrix from .if no prior information is available on the matrices or , then they can not be uniquely identified from . indeed ,for any nonsingular matrix , applications may involve particular constraints on and/or , so that in the resulting class of structured matrices the solution of becomes unique .commonly used constraints include sparsity , constant modulus and vandermonde structure .sufficient conditions for uniqueness can be deterministic or generic .deterministic conditions concern particular matrices and .generic conditions concern the situation that can be expected in general ; a generic property is a property that holds everywhere except for a set of measure .( a formal definition will be given in subsection [ subsectio_id ] below . ) to illustrate the meaning of deterministic and generic uniqueness let us consider decomposition in which , and the columns of are obtained by sampling the exponential signals at .then , i.e. is a vandermonde matrix .a deterministic condition under which decomposition is unique ( up to trivial indeterminacies ) is : ( i ) the vandermonde matrix has strictly more rows than columns and its generators are distinct and ( ii ) the matrix has full column rank .( in this paper we say that an matrix _ has full column rank _ if its column rank is , which implies . )this deterministic condition can easily be verified for any particular and .a generic variant is : ( i ) the vandermonde matrix has and ( ii ) the ( unstructured ) matrix has .indeed , under these dimensionality conditions the deterministic conditions are satisfied everywhere , except in a set of measure ( which contains the particular cases of coinciding generators and the cases in which the columns of are not linearly independent despite the fact that is square or even tall ) . note that generic properties do not allow one to make statements about specific matrices ; they only show the general picture .as mentioned before , bss has many variants , which differ in the types of constraints that are imposed .different constraints usually mean different deterministic uniqueness conditions , and the derivation of these is work that is difficult to automate .in this paper we focus on generic uniqueness conditions .we propose a framework with which generic uniqueness can be investigated in a broad range of cases . indeed, it will become clear that if we limit ourselves to generic uniqueness , the derivation of conditions can to some extent be automated .we discuss two concrete applications which may serve as examples .our approach builds on results in algebraic geometry .algebraic geometry has so far been used in system theory in and it also has direct applications in tensor - based bss via the generic uniqueness of tensor decompositions .our paper makes a contribution in further connecting algebraic geometry with applications in signal processing . throughout the paper denotes the field of real or complex numbers ; bold lowercase letters denote vectors , while bold uppercase letters represent matrices ; a column of a matrix and an entry of a vector are denoted by and , respectively ; the superscripts , and are used for the conjugate , transpose , and hermitian transpose , respectively ; `` '' denotes the kronecker product . 
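as a concrete illustration of the generic variant of the vandermonde example above, the following sketch draws random distinct generators on the unit circle and a random unstructured second factor, and checks the rank conditions numerically. the sizes, the use of numpy, and the choice of which factor carries the vandermonde structure are our own illustrative assumptions; a random draw is precisely the situation in which generic conditions are expected to hold.

```python
# minimal numerical illustration of the generic vandermonde example;
# sizes (I, J, R) and the A B^T convention are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
I, J, R = 6, 5, 4                                  # vandermonde factor is I x R with I > R

omega = rng.uniform(0, 2 * np.pi, R)               # random, hence generically distinct
z = np.exp(1j * omega)                             # unit-norm generators z_r
A = np.vander(z, N=I, increasing=True).T           # A[k, r] = z_r**k, k = 0..I-1
B = rng.standard_normal((J, R)) + 1j * rng.standard_normal((J, R))   # unstructured factor

X = A @ B.T

assert np.linalg.matrix_rank(A) == R               # distinct generators, more rows than columns
assert np.linalg.matrix_rank(B) == R               # generic B has full column rank
print("rank(X) =", np.linalg.matrix_rank(X))       # equals R for this generic draw
```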
_ a structured matrix factorization ._ in this paper we consider the following structured factorization of a matrix , where is a subset of and and are known matrix - valued functions defined on .we can assume that the parameter vector ^t ] with .note that the independence implies that and that ^t ] are the same up to index permutation .for the sake of exposition , let us first consider a class of matrices that is smaller than the class that we will be able to handle in our derivation of generic uniqueness conditions .namely , let us first consider matrices , of which the -th row is obtained by evaluating a known rational function at some points , : , \ ] ] where note that we model a column of through the values taken by functions at one particular point .on the other hand , a row of is modeled as values taken by one particular function at points .the structure that we consider in our study for the matrix is more general than the rational structure of in the sense that we additionally allow ( possibly nonlinear ) transformations of .formally , we assume that the columns of are sampled values of known vector functions of the form ^t,\quad { \bm{\zeta}}\in\mathbb f^l,\ ] ] at points , such that = \left[\begin{matrix } \frac{p_1({\mathbf f}({\bm{\zeta}}_1))}{q_1({\mathbf f}({\bm{\zeta}}_1))}&\dots & \frac{p_1({\mathbf f}({\bm{\zeta}}_r))}{q_1({\mathbf f}({\bm{\zeta}}_r))}\\ \vdots&\vdots&\vdots\\ \frac{p_n({\mathbf f}({\bm{\zeta}}_1))}{q_n({\mathbf f}({\bm{\zeta}}_1))}&\dots & \frac{p_n({\mathbf f}({\bm{\zeta}}_r))}{q_n({\mathbf f}({\bm{\zeta}}_r ) ) } \end{matrix}\right ] , \end{aligned}\ ] ] where the functions are subject to an analyticity assumption that will be specified in theorem [ th : main ] further .although our general result in theorem [ th : main ] will be formulated in terms of functions in variables , in the applications in sections [ section_ica][sampled sources ] we will only need entry - wise transformations : with analytic functions in one variable . as an example of how the model for can be used , consider vectors that are obtained by sampling the exponential signals ( with ) at . in this case is an vandermonde matrix with unit norm generators ; its column is ^t ] . since , can be parameterized by independent parameters . ) in the theorem and throughout the paper we use and to denote the jacobian matrices of and , respectively , further , denotes the set of all values of for .we say that the set is invariant under scaling if [ th : main ] let be a subset of and .assume that 1 .the matrix has full column rank for a generic choice of , that is , 2 .the coordinate functions of can be represented as where the functions are analytic on ; 3 .there exists such that ; 4 .the dimension of the subspace spanned by the vectors of form is at least , 5 . for a generic choice of ; 6 . or , depending on whether the set is invariant under scaling or not .then decomposition is generically unique .see appendix a. assumptions 16 can be used as a checklist for demonstrating the generic uniqueness of decompositions that can be put in the form .we will discuss two application examples in sections [ section_ica][sampled sources ] .we comment on the following aspects of assumptions 26 . in this paper we will use theorem [ th : main ] in the case where is of the form . for such matrix is diagonal , yielding that .moreover , in this paper are non - constant , so is not identically zero .thus , _ assumption _ 3 in theorem [ th : main ] will hold automatically . 
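since assumptions 1 and 5 of the theorem are statements about generic parameter values, in practice they can be probed numerically at a single random point: if the rank condition holds at a random draw, failure is confined to a measure-zero set. the sketch below assumes user-supplied callables for the structured factor and for the parameterization of a column of the second factor; the names, the finite-difference jacobian and the tolerance are our own illustrative choices, not the paper's notation.

```python
# hedged numeric "checklist" for assumptions 1 and 5 at a random (generic) point.
import numpy as np

def assumption1_holds(A_of_theta, s, R, rng=np.random.default_rng(1)):
    """assumption 1: A(theta) has full column rank R at a random theta in F^s."""
    theta = rng.standard_normal(s)
    return np.linalg.matrix_rank(A_of_theta(theta)) == R

def assumption5_rank(b_of_zeta, L, rng=np.random.default_rng(2), h=1e-6, tol=1e-4):
    """assumption 5: rank of the jacobian of b(.) at a random zeta in F^L,
    estimated with forward differences; compare the result with the claimed q."""
    zeta = rng.standard_normal(L)
    b0 = np.asarray(b_of_zeta(zeta), dtype=float)
    J = np.empty((b0.size, L))
    for l in range(L):
        e = np.zeros(L)
        e[l] = h
        J[:, l] = (np.asarray(b_of_zeta(zeta + e), dtype=float) - b0) / h
    return np.linalg.matrix_rank(J, tol=tol)

# toy usage: A(theta) simply reshapes theta into a 6 x 3 matrix, which is
# generically of full column rank
print(assumption1_holds(lambda th: th.reshape(6, 3), s=18, R=3))
```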
for the reader who wishes to apply theorem [ th : main ] in cases where is not of the form , we recall the definition of an analytic ( or holomorphic ) function of several variables used in _ assumption 2_. a function of complex variables is analytic if it is analytic in each variable separately , that is , if for each and accordingly fixed the function is analytic on in the classical one - variable sense . examples of analytic functions of several variables can be obtained by taking compositions of multivariate polynomials and analytic functions in one variables , e.g. . to check _ assumption 4 _ in theorem [ th : main ] it is sufficient to present ( or prove the existence of ) linearly independent vectors .it is clear that larger yield a better bound on in _ assumption _ 6 . in all cases considered in this paper situation may appear when the vector - function models a periodic , ( locally ) odd or even function , etc . the goal of _ assumption 5 _ is to check whether generic signals of the form can be re - parameterized with fewer ( i.e. ) parameters . in this case , the jacobian has indeed rank strictly less than .it is clear that _ assumption _ 5 in theorem [ th : main ] holds trivially for and that smaller yield a better bound on in _ assumption _ 6 . in this paperwe set either ( namely in the proof of theorem [ th : exp_poly ] ) or , in the case where it is clear that does not have full column rank ( namely in the proof of theorems [ th : ica ] and [ th : rat_funct ] ) , . although the theorem holds both for and , we formulated _ assumptions _ 3 , 4 and 5 in theorem [ th : main ] for and . in these assumptions also be replaced by .we presented the complex variants , even for the case , since they may be easier to verify than their real counterparts , as and are allowed to take values in a larger set . on the other hand ,the analyticity on in _ assumption _ 2 is a stronger assumption than analyticity on and is needed in the form it is given .we consider data described by the model , where is the -dimensional vector of observations , is the -dimensional unknown source vector and is the -by- unknown mixing matrix .we assume that the sources are mutually uncorrelated but individually correlated in time .it is known that the spatial covariance matrices of the observations satisfy rcl c_1&= ( x_tx^h_t+_1)=md_1m^h=_r=1^rd_1rm_rm_r^h , + & [ eq : sobi ] + c_p&= ( x_tx^h_t+_p)=md_pm^h=_r=1^rd_prm_rm_r^h , in which is the -by- diagonal matrix with the elements of the vector on the main diagonal .the estimation of from the set is known as second - order blind identification ( sobi ) or as second - order blind identification of underdetermined mixtures ( sobium) depending on whether the matrix has full column rank or not .variants of this problem are discussed in , e.g. , ,, , ( * ? ? ?* chapter 7 ) .it is clear that if the matrices and satisfy , then the matrices and also satisfy for any permutation matrix and diagonal unitary matrix .we say that has a unique solution when it is only subject to this trivial indeterminacy. generic uniqueness of solutions of has been studied 1 ) in and ( * ? ? ?* subsection 1.4.2 ) in the case where the superscript `` '' in is replaced by the superscript `` '' ( for quantities , are that can be either real valued or complex valued ) ; 2 ) in , ( where , are are complex valued ) . in matrix equations in were interpreted as a so - called canonical polyadic decomposition of a ( partially symmetric ) tensor . 
in the following theoremswe interpret the equations in as matrix factorization problem .the new interpretation only relies on elementary linear algebra ; it does not make use of advanced results on tensor decompositions while it does lead to more relaxed bounds on than in , for .we consider the variants , , and in theorems [ th : ica ] and [ th : icatau_1ne0 ] , respectively .[ th : ica ] assume that and then has a unique solution for generic matrices and , i.e. , where denotes the matrix with entries , , and is a measure that is a.c .with respect to the lebesgue measure on .first we rewrite the equations in as matrix decomposition . in stepwe will apply theorem [ th : main ] to .since , the equation in is equivalent to the following pair of equations since , we further obtain that [ { \mathbf m}_1^*\otimes { \mathbf m}_1 \dots { \mathbf m}_r^*\otimes { \mathbf m}_r]^t,\\ \operatorname{vec}(\operatorname{im}{\mathbf c}_p)^t & = \\ & [ \operatorname{im}d_{p1 } \dots \operatorname{im}d_{pr } ] [ { \mathbf m}_1^*\otimes { \mathbf m}_1 \dots { \mathbf m}_r^*\otimes { \mathbf m}_r]^t.\end{aligned}\ ] ] hence , the equations in can be rewritten as , where ^t,\\ & { \mathbf a}= \left[\begin{matrix } \frac{{\mathbf d}+{\mathbf d}^*}{2}\\ \frac{{\mathbf d}-{\mathbf d}^*}{2i } \end{matrix}\right]\in\mathbb r^{k\times r},\ k=2p,\ \text{and } \\ & { \mathbf b}=[{\mathbf m}_1^*\otimes { \mathbf m}_1\ \dots\ { \mathbf m}_r^*\otimes { \mathbf m}_r]\in\mathbb r^{n\times r},\ n = i^2 .\end{aligned}\ ] ] now we choose , , , , and such that the columns of are of the form .note that the trivial parameterization with is not of the form because of the conjugation .however , since for , the parameterization ^t - i[\zeta_{i+1}\ \dots\ \zeta_{2i}]^t\otimes\\ & ( [ \zeta_1\ \dots\ \zeta_i]^t+i[\zeta_{i+1}\ \dots\ \zeta_{2i}]^t),\quad { \bm{\zeta}}\in\mathbb r^l \end{split}\ ] ] with , is of the form . as a matter of fact, each component of is a polynomial in , , so we can set , and .it is clear that the matrix can be parameterized independently of by real parameters , namely , by the entries of the matrices and .thus , the equations in can be rewritten as decomposition with , where .moreover , one can easily verify that has a unique solution if and only if decomposition is unique . in turn , since , obviously , is equivalent to it follows that can be rewritten as . 
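the core of the rewriting just described is the identity vec(m m^H) = m^* ⊗ m, which turns the set of covariance equations into a factorization whose second factor has kronecker-structured columns. the sketch below verifies this numerically for random data; the sizes are arbitrary, and the subsequent splitting into real and imaginary parts (which yields the real factor built from the d-coefficients) is omitted for brevity.

```python
# hedged sketch of the SOBI/SOBIUM reformulation: C_p = M D_p M^H, and the
# row-wise stack of vec(C_p)^T equals D @ B.T with
# B = [m_1^* kron m_1, ..., m_R^* kron m_R].
import numpy as np

rng = np.random.default_rng(3)
I, R, P = 4, 3, 5                                   # sensors, sources, covariance lags

M = rng.standard_normal((I, R)) + 1j * rng.standard_normal((I, R))
D = rng.standard_normal((P, R)) + 1j * rng.standard_normal((P, R))   # row p = diag of D_p

C = [M @ np.diag(D[p]) @ M.conj().T for p in range(P)]               # C_p = M D_p M^H

X = np.vstack([Cp.flatten(order="F") for Cp in C])                   # row p = vec(C_p)^T
B = np.column_stack([np.kron(M[:, r].conj(), M[:, r]) for r in range(R)])

assert np.allclose(X, D @ B.T)      # vec(C_p) = sum_r d_pr (m_r^* kron m_r)
```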
to prove we check assumptions 16 in theorem [ th : main ] .assumption 1 : it is clear that if is generic , then , by the assumption , the matrix has full column rank .assumptions 23 are trivial since is the identity mapping .assumption 4 : since the rank- matrices of the form span the whole space of matrices and it follows that assumption 4 holds for .assumption 5 : an elementary computation shows that for a generic , =\bf 0 ] , so we set ^t ] is of the form and .now we check assumptions 16 in theorem [ th : main ] : 1 ) holds by our assumption ( iii ) ; 2 ) and 3 ) are trivial ; 4 ) holds for since the vectors ^t ] : 1)3 ) are trivial ; 4 ) since the vectors ^t ] : 1)3 ) are trivial ; 4 ) since an matrix with column ( for ) given by ^t,\ ] ] is nonsingular , we set ; 5 ) an elementary computation shows that for a generic , , implying that , so we set ; 6 ) holds by since is invariant under scaling .we assume that the matrix is generic and compare the bounds in theorem [ th : exp_poly ] and theorem [ th : rat_funct ] with the generic bounds in and , respectively .since is generic , it has full column rank if and only if .thus , we compare the bound with the bound , in , and the bound with the bound , in . on one hand , the bounds in and can be used in the undetermined case ( ) , while our bounds work only in the overdetermined case ( ) . on the other hand , roughly speaking , our bounds are of the form while the bounds in and are of the form , where is the number of parameters that describe a generic signal . in this senseour new uniqueness conditions are significantly more relaxed .borrowing insights from algebraic geometry , we have presented a theorem that can be used for investigating generic uniqueness in bss problems that can be formulated as a particular structured matrix factorization .we have used this tool for deriving generic uniqueness conditions in ( i ) sobium - type independent component analysis and ( ii ) a class of deterministic bss approaches that rely on parametric source models . in a companion paperwe will use the tool to obtain generic results for structured tensor and coupled matrix / tensor factorizations .in this appendix we consider the decomposition where the matrix has full column rank and denotes a known subset of . in theorem[ th : main_deterministic ] below , we present two conditions that guarantee the uniqueness of decomposition .these conditions will be checked in the proof of theorem [ th : main ] for generic points in , where is defined in .the latter proof is given in subsection [ subsec : appac ] .the step from the deterministic formulation in subsection [ subsec : appaa ] to the generic result in subsection [ subsec : appac ] is taken in subsection [ subsec : appab ] .[ th : main_deterministic ] assume that 1 .the matrix has full column rank ; 2 .the columns of the matrix satisfy the following condition : then decomposition is unique .we need to show that if there exist and such that then decompositions and coincide up to permutation of the rank- terms .first we show that assumption 2 implies that has full column rank .assume that there exist for which , such that at least one of these values being nonzero would imply that does not have full column rank .then for any , hence , by assumption 2 , at most one of the values is nonzero . since , we have that .since , it follows that or .one can easily verify that is in contradiction to assumption 2 .hence .thus the matrix has full column rank . 
since the matrices and have full column rank , it follows from the identity that the matrices and also have full column rank .hence , where denotes the left inverse of . by assumption 2, each row of the matrix contains at most one nonzero entry . since the matrices and have full column rank , the square matrix is nonsingular .thus , each row and each column of contains exactly one nonzero entry .hence there exist an nonsingular diagonal matrix and an permutation matrix such that . fromit follows that substituting into and taking into account that the matrix has full column rank we obtain equations imply that decompositions and coincide up to permutation of the rank- terms .theorem [ th : main_deterministic ] has already been proved for the particular cases where decomposition represents the cpd of a third - order tensor ( * ? ? ?* section iv ) , the cpd of a partially symmetric of order higher than three ( * ? ? ?* theorem 4.1 ) , the cpd of an unstructured tensor of order higher than three ( * ? ? ?* theorem 4.2 ) , and the decomposition in multilinear rank- terms ( * ? ? ? * theorem 2.4 ) .condition means that the subspace has dimension and may intersect the set only at `` trivial '' points , that is property is the key to proving uniqueness of .we can easily find from the matrix if it can be assumed that the matrix has full column rank . on the other hand ,property means that the only points in that have the hypothesized structure ( encoded in the definition of the set ) , are the vectors , ( up to trivial indeterminacies ) .however , conditions and are most often hard to check for particular points .the checking may become easier if we focus on the generic case , and this is where algebraic geometry comes in .more precisely , if is an algebraic variety , then the classical trisecant lemma states that if is sufficiently small , then holds for `` generic '' .a set is an algebraic variety if it is the set of solutions of a system of polynomial equations .it is clear that algebraic varieties form an interesting class of subsets of ; however , is not easy to verify whether a given subset of is a variety or not . on the other hand , it is known that a set obtained by evaluating a known rational vector - function ( such as in ) can be extended to a variety by taking the closure , i.e. , by including its boundary .this is indeed what we will do in the proof of lemma [ lemma : generic_con_2 ] below .first we give a formal statement of the trisecant lemma .[ lemma : trisecant ] ( ( * ? ? ?* corollary 4.6.15 ) , ( * ? ? ?* theorem 1.4 ) ) let be an irreducible algebraic variety and or depending on whether is invariant under scaling or not .let denote a set of points such that then the zariski closure of is a proper subvariety of ( times ) , that is , there exists a polynomial in variables whose zero set does not contain but does contain .it is the last sentence in the trisecant lemma that makes it a powerful tool for proving generic properties .let us explain in more detail how this works .we can use to denote a set that poses problems in terms of uniqueness , in the sense that does _ not _ intersect only in the points that correspond to the pure sources .the trisecant lemma states now that belongs to the zero set of a polynomial that is not identically zero and hence nonzero almost everywhere , i.e. the problematic cases occur in a measure - zero situation . 
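for concreteness, the notion of "generic" used throughout can be phrased as follows; this is our paraphrase of the standard algebraic-geometric convention, not a quotation of the paper's formal definition.

```latex
% paraphrase of the standard definition of a generic property
A property $P(\theta)$, $\theta\in\mathbb F^{s}$, is said to hold
\emph{generically} if there exists a polynomial $p(\theta)\not\equiv 0$ such
that $P(\theta)$ holds whenever $p(\theta)\neq 0$. Since the zero set of a
nonzero polynomial has Lebesgue measure zero, a generic property holds
$\mu$-almost everywhere for every measure $\mu$ that is absolutely continuous
with respect to the Lebesgue measure on $\mathbb F^{s}$.
```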
in order to make the connection with theorem [ th : main ] we will need the following notations : [ lemma : generic_con_2 ] let assumptions 26 in theorem [ th : main ] hold . then assumption 2 in theorem [ th : main_deterministic ] holds for and , where the vectors are generic . since is equivalent to it is sufficient to show that =0 , where ^t : { \mathbf b}_1={\mathbf b}({\bm{\zeta}}_1),\dots,{\mathbf b}_r={\mathbf b}({\bm{\zeta}}_r)\\ \text{are linearly dependent}\ } , \nonumber \end{split}\\ \begin{split } { g}_{{\mathbf b}}=\{&[{\bm{\zeta}}_1^t\ \dots\ { \bm{\zeta}}_r^t]^t:\ \eqref{eq : u2condition2}\ \text { does not hold for } \\ & \ \ { \mathbf b}_1={\mathbf b}({\bm{\zeta}}_1),\dots,{\mathbf b}_r={\mathbf b}({\bm{\zeta}}_r)\}. \end{split}\end{gathered}\ ] ] it is a well - known fact that the zero set of a nonzero analytic function on has measure zero both on and .thus , to prove =0 , we will show that there exist analytic functions and of complex variables such that we consider the following three cases : 1 ) and ; 2 ) and is arbitrary ; 3 ) .1 ) _ case and . in this case , thus , the sets and take the following form : ^t : { \mathbf b}_1={\mathbf r}({\bm{\zeta}}_1),\dots,{\mathbf b}_r={\mathbf r}({\bm{\zeta}}_r)\\ \text{are linearly dependent}\ } , \nonumber \end{split}\\ \begin{split } { g}_{{\mathbf b}}={g}_{{\mathbf r}}=\{[{\bm{\zeta}}_1^t\ \dots\ { \bm{\zeta}}_r^t]^t:\ \eqref{eq : u2condition2}\ \text { does not hold for } \\ s=\operatorname{range}({\mathbf r})\ \text{and } \ { \mathbf b}_1={\mathbf r}({\bm{\zeta}}_1),\dots,{\mathbf b}_r={\mathbf r}({\bm{\zeta}}_r)\}. \end{split}\end{gathered}\ ] ] here we prove that there exist polynomials and in variables such that hold for and . first we focus on .let denote the zariski closure of .since is the image of the open ( hence irreducible ) subset under the rational map ^t,\ ] ] it follows that is also an irreducible set .hence is an irreducible variety and the dimension of is equal to at a generic point .hence , by assumption 5 in theorem [ th : main ] , since , by definition , consists of all vectors of the form , from assumption 4 in theorem [ th : main ] it follows that since , it follows that thus , by assumption 6 in theorem [ th : main ] and , thus , we have shown that satisfies the assumptions in lemma [ lemma : trisecant ] .let now the set and the polynomial be as in lemma [ lemma : trisecant ] . since is the zariski closure of , it follows that is the zariski closure of .since , by lemma [ lemma : trisecant ] , the zero set of does not contain , it follows that the zero set of does not contain . hence , there exist such that on the other hand , since is a subset of , from the definitions of and it follows that ^t\in { g}_{{\mathbf r}}.\ ] ] hence , by lemma [ lemma : trisecant ] , ^t\in { g}_{{\mathbf r}}.\ ] ] since the function is a composition of the polynomial in variables and rational functions , it follows that can be written as a ratio of two polynomials in the entries of , by , vanishes on and is not identically zero .that is , holds for .now we focus on . 
by assumption 6 in theorem [ th : main ] , , so assumption 4 implies that there exist such that the vectors are linearly independent .hence there exists an submatrix of ] , then there exist , and such that , yielding that ^t\in w_{{\mathbf b},\mathbb c} ] coincides with the last entries of .the generic uniqueness that we want to prove in theorem [ th : main ] , then follows from theorem [ th : main_deterministic ] .we have the following .\1 ) condition 1 of theorem [ th : main_deterministic ] holds for generic by assumption 1 in theorem [ th : main ] .\2 ) by lemma [ lemma : generic_con_2 ] , condition 2 of theorem [ th : main_deterministic ] holds for generic , or equivalently , for generic ^t\in\mathbb f^s$ ] .hence , condition 2 of theorem [ th : main_deterministic ] holds for generic .( indeed , if denotes a set of points such that condition 2 of theorem [ th : main_deterministic ] does not hold and denotes the projection onto the last coordinates of , then , by lemma [ lemma : generic_con_2 ] , , which implies ( * ? ? ?* theorem b , p.144 ) that . )we use the fact that and are polynomials in : where denotes the integer part of . substituting into these equationswe obtain that there exist rational functions and such that hold .the authors wish to thank giorgio ottaviani and ed dewey for their assistance in algebraic geometry . c. martin and r. hermann , `` applications of algebraic geometry to systems theory , part ii : feedback and pole placement for linear hamiltonian systems , '' _ proc .ieee _ , vol .65 , no . 6 , pp . 841848 , june 1977 .a. cichocki , d. mandic , c. caiafa , a .-phan , g. zhou , q. zhao , and l. de lathauwer , `` tensor decompositions for signal processing applications . from two - way to multiway component analysis , '' _ ieee signal process . mag ._ , vol .145163 , march 2015 .a. belouchrani , k. abed - meraim , j .- f .cardoso , and e. moulines , `` a blind source separation technique using second - order statistics , '' _ ieee trans . signal process ._ , vol . 45 , no . 2 , pp. 434444 , feb 1997 .a. stegeman , j. ten berge , and l. de lathauwer , `` sufficient conditions for uniqueness in candecomp / parafac and indscal with random component matrices , '' _ psychometrika _ , vol .71 , no . 2 ,pp . 219229 , june 2006 .t. jiang and n. d. sidiropoulos , `` kruskal s permutation lemma and the identification of candecomp / parafac and bilinear models with constant modulus constraints , '' _ ieee trans .signal process ._ , vol .52 , no . 9 , pp . 26252636 , september 2004 .m. beltrametti , e. carletti , d. gallarati , and g. monti bragadin , _ lectures on curves , surfaces and projective varieties . a classical view of algebraic geometry_.1em plus 0.5em minus 0.4emems textbooks in mathematics .european mathematical society ( ems ) , zrich , 2009 .ignat domanov ignat domanov received the master s degree from donetsk state university , ukraine , the ph.d degree in physics and mathematics from institute of applied mathematics and mechanics , ukraine , and the ph.d .degree in engineering from ku leuven , belgium , in 1998 , 2004 , and 2013 , respectively . 
since 2013he has been a postdoctoral fellow with the ku leuven , belgium .his research interests include applied linear algebra , tensor decompositions and tensor - based signal processing .lieven de lathauwer lieven de lathauwer received the master s degree in electromechanical engineering and the ph.d .degree in applied sciences from ku leuven , belgium , in 1992 and 1997 , respectively .he is currently professor with ku leuven , belgium .de lathauwer is an associate editor of the siam journal on matrix analysis and applications and has been an associate editor for ieee transactions on signal processing .his research concerns the development of tensor tools for engineering applications .
since 2013 he has been a postdoctoral fellow with the ku leuven , belgium .his research interests include applied linear algebra , tensor decompositions and tensor - based signal processing .lieven de lathauwer lieven de lathauwer received the master s degree in electromechanical engineering and the ph.d .degree in applied sciences from ku leuven , belgium , in 1992 and 1997 , respectively .he is currently professor with ku leuven , belgium .de lathauwer is an associate editor of the siam journal on matrix analysis and applications and has been an associate editor for ieee transactions on signal processing .his research concerns the development of tensor tools for engineering applications .
|
algebraic geometry , although little explored in signal processing , provides tools that are very convenient for investigating generic properties in a wide range of applications . generic properties are properties that hold `` almost everywhere '' . we present a set of conditions that are sufficient for demonstrating the generic uniqueness of a certain structured matrix factorization . this set of conditions may be used as a checklist for generic uniqueness in different settings . we discuss two particular applications in detail . we provide a relaxed generic uniqueness condition for joint matrix diagonalization that is relevant for independent component analysis in the underdetermined case . we present generic uniqueness conditions for a recently proposed class of deterministic blind source separation methods that rely on mild source models . for the interested reader we provide some intuition on how the results are connected to their algebraic geometric roots . structured matrix factorization , structured rank decomposition , blind source separation , direction of arrival , uniqueness , algebraic geometry
|
methods have been developed to efficiently obtain representations of words in that capture subtle semantics across the dimensions of the vectors ( collobert and weston , 2008 ) .for instance , after sufficient training , relationships encoded in difference vectors can be uncovered with vector arithmetic : vec(``king '' ) - vec(``man '' ) + vec(``woman '' ) returns a vector close to vec(``queen '' ) ( mikolov et al .2013a ) .applying this powerful notion of distributed continuous vector space representations of words , we embed representations of institutions and the words from their law and policy documents into shared semantic space .we can then combine positively and negatively weighted word and government vectors into the same query , enabling complex , targeted and subtle similarity computations .for instance , which government branch is more characterized by `` validity and truth , '' or `` long - term government career '' ?we apply this method , gov2vec , to a unique corpus of supreme court opinions , presidential actions , and official summaries of congressional bills .the model discerns meaningful differences between house , senate , president and court vectors .we also learn more fine - grained institutional representations : individual presidents and congresses ( 2-year terms ) .the method implicitly learns important latent relationships between these government actors that was not provided during training .for instance , their temporal ordering was learned from only their text .the resulting vectors are used to explore differences between actors with respect to policy topics .a common method for learning vector representations of words is to use a neural network to predict a target word with the mean of its context words vectors , obtain the gradient with back - propagation of the prediction errors , and update vectors in the direction of higher probability of observing the correct target word ( bengio et al . 2003 ; mikolov et al .after iterating over many word contexts , words with similar meaning are embedded in similar locations in vector space as a by - product of the prediction task ( mikolov et al .le and mikolov ( 2014 ) extend this word2vec method to learn representations of documents . for predictions of target words ,a vector unique to the document is concatenated with context word vectors and subsequently updated .similarly , we embed institutions and their words into a shared vector space by averaging a vector unique to an institution with context word vectors when predicting that institution s words and , with back - propagation and stochastic gradient descent , update representations for institutions and the words ( which are shared across all institutions ) . there are two hyper - parameters for the algorithm that can strongly affect results , but suitable values are unknown .we use a tree of parzen estimators search algorithm ( bergstra et al .2013 ) to sample from parameter space and save all models estimated .subsequent analyses are conducted across all models , propagating our uncertainty in hyper - parameters . due to stochasticity in training and the uncertainty in the hyper - parameter values , patterns robust across the ensemble are more likely to reflect useful regularities than individual models .gov2vec can be applied to more fine - grained categories than entire government branches . in this context , there are often relationships between word sources , e.g. obama after bush , that we can incorporate into the learning process . 
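before the structured variant described next, the base training step — averaging an institution vector with context word vectors to predict a target word — can be sketched with an off-the-shelf distributed-memory paragraph-vector implementation, treating each institution as a document tag. gensim is our stand-in here, not the authors' implementation, and the hyper-parameter values are placeholders (the paper samples them with a tree of parzen estimators).

```python
# hedged sketch of base gov2vec-style training: PV-DM with averaged context,
# one tag per institution (requires gensim >= 4; toy corpus for illustration).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def tagged_corpus(docs):
    # docs: iterable of (institution_name, list_of_tokens) pairs
    for inst, tokens in docs:
        yield TaggedDocument(words=tokens, tags=[inst])

corpus = list(tagged_corpus([
    ("house_113", ["directs", "the", "secretary", "to", "report"]),
    ("president_obama", ["by", "the", "authority", "vested", "in", "me"]),
]))

model = Doc2Vec(corpus, dm=1, dm_mean=1, vector_size=200,
                window=5, min_count=1, epochs=20, workers=2)

gov_vec = model.dv["president_obama"]      # learned institution vector
word_vec = model.wv["authority"]           # word vector shared across sources
```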
during training ,we alternate between updating govvecs based on their use in the prediction of words in their policy corpus and their use in the prediction of other word sources located nearby in time .we model temporal institutional relationships , but any known relationships between entities , e.g. ranking congresses by number of republicans , could also be incorporated into the structured gov2vec training process ( fig . 1 ) . after training, we extract parameters , where is the number of unique words , is the number of word sources , and the vector dimensionality , which varies across the models ( we set ) .we then investigate the most cosine similar words to particular vector combinations , , where , is one of wordvecs or govvecs of interest , are the most frequent words in the vocabulary of words ( to exclude rare words during analysis ) excluding the query words , is _ 1 _ or _-1 _ for whether we re positively or negatively weighting .we repeat similarity queries over all models , retain words with cosine similarity , and rank the word results based on their frequency and mean cosine similarity across the ensemble .we also measure the similarity of wordvec combinations to each govvec and the similarities between govvecs to validate that the process learns useful embeddings that capture expected relationships .we created a unique corpus of 59 years of all u.s .supreme court opinions ( 1937 - 1975 , 1991 - 2010 ) , 227 years of all u.s . presidential memorandum , determinations , and proclamations , and executive orders ( 1789 - 2015 ) , and 42 years of official summaries of all bills introduced in the u.s .congress ( 1973 - 2014 ) .we used official summaries rather than full bill text because full texts are only available from 1993 and summaries are available from 1973 .we scraped all presidential memorandum ( 1,465 ) , determinations ( 801 ) , executive orders ( 5,634 ) , and proclamations ( 7,544 ) from the http://www.presidency.ucsb.edu/[american presidency project website ] .the sunlight foundation downloaded https://github.com/unitedstates/congress/wiki[official bill summaries ] from the u.s .government publishing office ( gpo ) , which we downloaded .we downloaded supreme court decisions issued 19371975 ( vol .300 - 422 ) from the https://www.gpo.gov/fdsys/bulkdata/scd/1937[gpo ] , and the pdfs of decisions issued 19912010 ( vol .502 - 561 ) from the http://www.supremecourt.gov/opinions/boundvolumes.aspx[supreme court ] .we removed html artifacts , whitespace , http://jmlr.csail.mit.edu/papers/volume5/lewis04a/a11-smart-stop-list/english.stop[stop words ] , words occurring only once , numbers , and punctuation , and converted to lower - case .we tested whether our learned vectors captured meaningful differences between branches .fig . 2 displays similarities between these queries and the branches , which reflect _ a priori _ known differences .gov2vec has unique capabilities that summary statistics , e.g. word frequency , lack : it can compute similarities between any source and word as long as the word occurs at least in one source , whereas word counting can not provide meaningful similarities when a word never occurs in a source s corpus .most importantly , gov2vec can combine complex combinations of positively and negatively weighted vectors in a similarity query .we learned representations for individual presidents and congresses by using vectors for these higher resolution word sources in the word prediction task . 
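the query machinery described above can be summarized in a few lines: signed combinations of wordvecs and govvecs are compared against the vocabulary by cosine similarity, results below a similarity threshold are dropped, and surviving words are ranked by how often they appear across the model ensemble and by their mean similarity. the threshold value and helper names below are illustrative placeholders.

```python
# hedged sketch of the ensemble similarity query used for the policy analyses.
import numpy as np
from collections import defaultdict

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def query(models, signed_names, top_words, thresh=0.1):
    """models: one dict per ensemble member mapping word/institution names to
    vectors; signed_names: list of (sign, name) defining the weighted query."""
    hits = defaultdict(list)
    for vecs in models:                                   # propagate hyper-parameter
        q = sum(s * vecs[n] for s, n in signed_names)     # uncertainty over the ensemble
        for w in top_words:
            if w in vecs and all(w != n for _, n in signed_names):
                c = cos(q, vecs[w])
                if c > thresh:                            # placeholder threshold
                    hits[w].append(c)
    return sorted(hits.items(),
                  key=lambda kv: (len(kv[1]), float(np.mean(kv[1]))),
                  reverse=True)                           # rank by frequency, then mean cos
```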
to investigate if the representations capture important latent relationships between institutions , we compared the cosine similarities between the congresses over time ( 93rd113th ) and the corresponding sitting presidents ( nixon obama ) to the bill veto rate .we expected that a lower veto rate would be reflected in more similar vectors , and , indeed , the congress - president similarity and veto rate are negatively correlated ( spearman s computed on raw veto rates and similarities : -0.74 ; see also fig . 3 ) .as a third validation , we learn vectors from only text and project them into two dimensions with principal components analysis . from fig . 4 it s evident that temporal and institutional relationships were implicitly learned .one dimension ( top - to - bottom ) almost perfectly rank orders presidents and congresses by time , and another dimension ( side - to - side ) separates the president from congress .5 ( top ) asks : how does obama and the 113th house differ in addressing climate change and how does this vary across environmental and economic contexts ?the most frequent word across the ensemble ( out of words with similarity to the query ) for the obama - economic quadrant is `` unprecedented . ''`` greenhouse '' and `` ghg '' are more frequent across models and have a higher mean similarity for obama - environmental than 113th house - environmental . fig .5 ( bottom ) asks : how does the house address war from `` oil '' and `` terror '' perspectives and how does this change after the 2001 terrorist attack . compared to the 106th , both the oil and terrorism panels in the 107th ( when 9 - 11 occurred ) have words more cosine similar to the query ( further to the right ) suggesting that the 107th house was closer to the topic of war , and the content changes to primarily strong verbs such as instructs , directs , requires , urges , and empowers . , where _ _ g__= , , , , _ _c__= , , , .the bottom panel is the war policy query for the u.s .house of representatives before and after the 9 - 11 terrorist attacks : wv ( ) , _ _g__= , _ _ c__= .the exact query used to create each quadrant is provided at the bottom of the quadrant.,title="fig:",width=316 ] , where _ _ g__= , , , , _ _c__= , , , .the bottom panel is the war policy query for the u.s .house of representatives before and after the 9 - 11 terrorist attacks : wv ( ) , _ _ g__= , __ c__= .the exact query used to create each quadrant is provided at the bottom of the quadrant.,title="fig:",width=316 ]political scientists model text to understand political processes ( grimmer 2010 ; roberts et al .2014 ) ; however , most of this work focuses on variants of topic models ( blei et al .djuric et al . (2015 ) apply a learning procedure similar to structured gov2vec to streaming documents to learn representations of documents that are similar to those nearby in time . structured gov2vec applies this joint hierarchical learning process ( using entities to predict words _ and _ other entities ) to non - textual entities .kim et al . ( 2014 ) and kulkarni et al . ( 2015 ) train neural language models for each year of a time ordered corpora to detect changes in words . 
instead of learning models for distinct times , we learn a global model with embeddings for time - dependent entities that can be included in queries to analyze change .kiros et al .( 2014 ) learn embeddings for text attributes by treating them as gating units to a word embedding tensor .their process is more computationally intensive than ours .we learned vector representations of text meta - data on a novel data set of legal texts that includes case , statutory , and administrative law .the representations effectively encoded important relationships between institutional actors that were not explicitly provided during training .finally , we demonstrated fine - grained investigations of policy differences between actors based on vector arithmetic .more generally , the method can be applied to measuring similarity between any entities producing text , and used for recommendations , e.g. what s the closest _ think - tank vector _ to the _ non - profit vector _ representation of the sierra club ?methodologically , our next goal is to explore where training on non - textual relations , i.e. structural gov2vec , is beneficial .it seems to help stabilize representations when exploiting temporal relations , but political relations may prove to be even more useful . substantively, our goal is to learn a large collection of vectors representing government actors at different resolutions and within different contexts to address a range of targeted policy queries .once we learn these representations , researchers could efficiently search for differences in law and policy across time , government branch , and political party .we thank the anonymous reviewers for helpful suggestions .bengio , yoshua , rjean ducharme , pascal vincent , and christian janvin .2003 . a neural probabilistic language model . _ j. mach .res . _ 3 ( march ) : 113755 .bergstra , james s. , daniel yamins , and david cox .2013 . making a science of model search : hyperparameter optimization in hundreds of dimensions for vision architectures . in _ proceedings of the 30th international conference on machine learning _ , 11523collobert , ronan and jason weston .2008 . a unified architecture for natural language processing : deep neural networks with multitask learning . in _ proceedings of the 25th international conference on machinelearning_. 160167 .djuric , nemanja , hao wu , vladan radosavljevic , mihajlo grbovic , and narayan bhamidipatihierarchical neural language models for joint representation of streaming documents and their content . in _ proceedings of the 24th international conference on world wide web _ , 24855 .new york , ny , usa : acm . grimmer , justin . 2010 . `` a bayesian hierarchical topic model for political texts : measuring expressed agendas in senate press releases . '' _ political analysis _ 18 ( 1 ) : 135 .kim , yoon , yi - i .chiu , kentaro hanaki , darshan hegde , and slav petrov .temporal analysis of language through neural language models . in _ proceedings of the acl 2014 workshop on language technologies and computational social science _ , 6165 .association for computational linguistics .kiros , ryan , richard zemel , and ruslan r salakhutdinov .2014 . a multiplicative model for learning distributed text - based attribute representations .in _ advances in neural information processing systems 27 _ , edited by z. ghahramani , m. welling , c. cortes , n. d. lawrence , and k. q. weinberger , 234856 .curran associates , inc .kulkarni , vivek , rami al - rfou , bryan perozzi , and steven skiena . 
2015 . statistically significant detection of linguistic change . in _ proceedings of the 24th international conference on world wide web _new york , ny , usa : acm .le , quoc , and tomas mikolov .2014 . distributed representations of sentences and documents . in_ proceedings of the 31st international conference on machine learning _ , 118896 .mikolov , tomas , ilya sutskever , kai chen , greg s corrado , and jeff dean .2013b . distributed representations of words and phrases and their compositionality . in _ advances in neural information processing systems26 _ , edited by c. j. c. burges , l. bottou , m. welling , z. ghahramani , and k. q. weinberger , 31119 .curran associates , inc .roberts , margaret e. , brandon m. stewart , dustin tingley , christopher lucas , jetson leder - luis , shana kushner gadarian , bethany albertson , and david g. rand .`` structural topic models for open - ended survey responses . ''_ american journal of political science _ 58 ( 4 ) : 106482 .
|
we compare policy differences across institutions by embedding representations of the entire legal corpus of each institution and the vocabulary shared across all corpora into a continuous vector space . we apply our method , gov2vec , to supreme court opinions , presidential actions , and official summaries of congressional bills . the model discerns meaningful differences between government branches . we also learn representations for more fine - grained word sources : individual presidents and ( 2-year ) congresses . the similarities between learned representations of congresses over time and sitting presidents are negatively correlated with the bill veto rate , and the temporal ordering of presidents and congresses was implicitly learned from only text . with the resulting vectors we answer questions such as : how does obama and the 113th house differ in addressing climate change and how does this vary from environmental or economic perspectives ? our work illustrates vector - arithmetic - based investigations of complex relationships between word sources based on their texts . we are extending this to create a more comprehensive legal semantic map
we compare policy differences across institutions by embedding representations of the entire legal corpus of each institution and the vocabulary shared across all corpora into a continuous vector space . we apply our method , gov2vec , to supreme court opinions , presidential actions , and official summaries of congressional bills . the model discerns meaningful differences between government branches . we also learn representations for more fine - grained word sources : individual presidents and ( 2-year ) congresses . the similarities between learned representations of congresses over time and sitting presidents are negatively correlated with the bill veto rate , and the temporal ordering of presidents and congresses was implicitly learned from only text . with the resulting vectors we answer questions such as : how do obama and the 113th house differ in addressing climate change and how does this vary from environmental or economic perspectives ? our work illustrates vector - arithmetic - based investigations of complex relationships between word sources based on their texts . we are extending this to create a more comprehensive legal semantic map .
|
neoclassical tearing mode ( ntm ) stabilization by electron cyclotron current drive ( eccd ) requires good radial alignment to the rotating island , and is more efficient if the eccd is modulated to predominantly drive current in the island o - point .alignment and modulation require diagnosing in real time the radial and toroidal position of the rotating island . for alignment ,this was done by horizontal electron cyclotron emission ( ece ) .for modulation it was done by means of magnetic diagnostics and could in principle be done by horizontal ece , which is spatially localized and can more accurately determine the toroidal phase .ece was used extensively for this purpose as an off - line diagnostic .however , alignment and modulation assisted by magnetic probes and/or horizontal ece require real - time equilibria and helical extrapolations from the measurement location to the eccd location , taking into account the 3d structure of the island .apart from being complicated , these extrapolations suffer from the propagation of systematic and statistical errors .these complications become unnecessary if ece is collected along the same _ oblique _ line of sight as eccd or an equivalent , toroidally displaced but otherwise identical line of sight .the present paper describes the diii - d diagnostic setup for the latter approach , which led to the first oblique - ece - assisted stabilization of ntms .emission was collected via a temporarily disconnected eccd launcher , at nearly the same frequency and with the same view angles as eccd and from the same radial and vertical location and , but at a different toroidal location . by reciprocity , this oblique emission detects the island at the same location where the ec current is driven , except for a toroidal -rather than helical- offset .this offset is easily corrected by appropriate electronics .oblique ece applied to eccd is also immune from systematic errors associated with refraction and diffraction , which are the same for the emitted and the injected millimeter ( mm ) wave beam .an exception to such symmetry between oblique ece and eccd is when the third ec harmonic is present in the diii - d plasma , causing measurable absorption and emission but negligible current drive .the radiometer ( described in sec.[sec_muwavesetup ] together with the front optics ) features only 2 channels , for the reasons discussed in sec.[secprincip ] . herea single radiometer controls the modulation of 4 gyrotrons at different toroidal locations .this is done by means of a broad - band phase - shifter , and might have advantages over the in - line approach in devices equipped with several gyrotrons , like iter .additionally , using distinct antennas and transmission lines for oblique ece and eccd reduces the technical difficulties of separating mw - level injected mm - waves from nw - level emission from the plasma . 
on the other hand ,the introduction of the phase shifter comes with its own challenge : as the toroidal separation between launcher and receiver is fixed , the _ phase_-delay needs to be fixed and frequency - independent .most phase - shifters , however , apply a fixed _ time_-delay .the problem was solved by a special circuit where the resistance varies with frequency in a digitally controlled manner ( sec.[secinterf ] ) .finally , experimental results are presented in sec.[secresults ] and capabilities and limitations of the diagnostic technique are discussed in sec.[seclimt ] .with a rough knowledge of the island radial location relative to the emitting layer location , and whether they lie on the low or high field side ( lfs , hfs ) , a single channel is sufficient to determine whether the island o - point is transiting in the oblique ece view when the temperature fluctuation is positive or negative ( table [ tab1 ] ) .therefore , a single oblique ece channel is sufficient to modulate the eccd with the correct frequency and phase , with no ambiguity between o- and x - point ..[tab1 ] o - point discrimination from a single ece channel ( horizontal or oblique ) measuring at radial location . denotes the island location .obviously x - point detection criteria are opposite to o - point criteria .for example , if the o - point is identified by , the x - point is identified by , and vice versa . [ cols="<,<,^",options="header " , ] at least two channels , however , are necessary for radial alignment .their frequencies need to be above and below the eccd frequency .if eccd is correctly deposited at the island center , the two oblique ece signals will be out of phase with each other .if instead they are in phase , they are originating from the same side of the island .this implies that eccd is also being deposited on that side of the island rather than at its center , where it would be desirable . to clarify ,let us consider eccd on the hfs , well - aligned with the island of interest .the island is `` bracketed '' between two channels 1 and 2 ( ) .let the raw , uncalibrated signals and be non - inverted and proportional to at locations and .the waveform - will peak in correspondence of the o - point transit .hence , it can be used to modulate the eccd in synch and phase with the o - point . on the lfs, - should be used .note that , due to oblique view , emission is toroidally displaced with respect to the receiver .similarly , due to oblique launch , absorption is toroidally displaced with respect to the launcher .the two effects cancel each other , hence it is not necessary to correct - ( or - ) for the emission - receiver or launcher - absorption toroidal displacements . however , it is necessary to correct it for the launcher - receiver toroidal phase - difference .note also that for good radial alignment and for poor radial alignment .consequently , it is possible to chose a discrimination level ( sec.[secinterf ] ) below which is approximated with 0 at all times . in this way, no eccd is injected when the alignment is wrong .this is simple but requires manual adjustment of the ( absolute ) discrimination level . for a more versatile discrimination, one can compare the sum and difference of the signals : for poor alignment , hence poor alignment can be defined by falling below a relative threshold level .fig.[figtopview ] shows the top view of the experimental setup . 
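the single-channel o-point rule of table [tab1] and the two-channel difference logic can be condensed into a few lines of pseudo-real-time code. in the experiment this logic is implemented by analog electronics (sec.[secinterf]), so the sketch below is only a software illustration; the sign convention follows the hfs example given above (which channel is subtracted from which depends on whether it lies inside or outside the island radius), and the relative threshold value is a placeholder.

```python
# hedged software illustration of the o-point gating and alignment discrimination.
import numpy as np

def opoint_gate(s1, s2, high_field_side=True):
    """square-wave gate in phase with the island O-point transit.
    s1, s2: band-passed T_e fluctuation signals bracketing the ECCD frequency."""
    d = (s1 - s2) if high_field_side else (s2 - s1)
    return d > 0.0                      # True -> gyrotron on (before phase shifting)

def well_aligned(s1, s2, rel_threshold=0.5):
    """relative discrimination: for good alignment the two channels are out of
    phase, so the difference dominates the sum; 0.5 is a placeholder level."""
    return np.max(np.abs(s1 - s2)) > rel_threshold * np.max(np.abs(s1 + s2))
```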
as an ntm of toroidal mode number rotates with angular frequency , it modulates -at frequency - the 2nd harmonic x - mode emission originating at the dashed curve in fig.[figtopview ] . on the basis of this modulated emission , measured by a radiometer ( sec.[sec_muwavesetup ] ) , a circuit modulates the gyrotron power supplies with the correct frequency and phase - shift ( sec.[secinterf ] ) to drive ec current in the island o - point .a 2 channel heterodyne radiometer was built and installed at diii - d on the eccd transmission line 5 .the line of sight was set identical to the launch directions for gyrotrons 2 , 3 , 4 and 6 , ensuring symmetry between emission and absorption .an optical switch allows to use launcher and transmission line 5 for eccd or oblique ece ( fig.[microwsetupfig ] ) .the optical switch has two sides at machine vacuum ( a `` gyrotron '' side and a `` plasma '' side ) .a third side ( the `` diagnostic '' side ) consists of a short , evacuated corrugated overmoded waveguide with a quartz window . on the other side of the window , at atmospheric pressure , is the radiometer .a teflon lens refocuses the divergent beam exiting from the waveguide window into a horn .two additional quasi - optical components between the window and the horn select the desired elliptical polarization and frequency .the first component is a metal plate array or `` artificial dielectric '' ( fig.[figqo]a ) .electromagnetic waves of electric field perpendicular to the plates propagate undisturbed .waves of field parallel to the plates , instead , propagate at higher phase velocity , , where , =2.73 mm is the wavelength and =5.7 mm the spacing between the plates . at the output of the 23.5 mm long device ,linear polarizations recombine with a shift of .thus , linear polarizations not parallel nor perpendicular to the plates become elliptical and , conversely , elliptical polarizations can be linearized .for oblique view , the o- and x - mode polarizations are , indeed , elliptical . herethe /4 shifter linearizes them and couples the x - mode to the linearly polarized mm - wave components .this guarantees modal purity and prevents mixing the main 2nd harmonic x - mode signal with the weaker ( optically thin ) but not negligible 2nd harmonic o - mode ( fig.[figqo ] ). this would be undesirable because , as it experiences different refraction , it originates elsewhere , therefore the information that it carries does not come from the intended location . to manufacture the device , we equally spaced and fixed in frame various double - sided copper clad glass - fiber boards of the type used in the electronics industry .the thickness of the plates , 0.44 mm , is much smaller than their spacing =5.7 mm . at the same time , the 17 copper coating is much thicker than the 0.02 skin depth for 110ghz in copper , making the plates good reflectors .the other quasi - optical component is a dichroic plate ( fig.[figqo]b ) , basically a perforated aluminum plate stopping waves of wavelength bigger than the hole diameter .this is because , with a 95.75ghz local oscillator ( lo ) and 1ghz - wide channels centered at 12.5 and 15.5ghz , the radiometer is not just sensitive to 107.75 - 111.75ghz , but also to the undesired 79.75 - 83.75ghz band .this originates from the plasma edge and scrape - off layer on the lfs . 
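a quick numerical check of the quasi-optics and frequency-plan numbers quoted above is given below. it assumes the standard dispersion relation for a wave with electric field parallel to the plates of a parallel-plate guide, lambda_g = lambda / sqrt(1 - (lambda/2d)^2); this formula is our assumption for the expression referred to in the text, and the result is consistent with the stated quarter-wave retardation, the 107.75-111.75 GHz signal band and the 79.75-83.75 GHz image band of the 95.75 GHz local oscillator.

```python
# hedged check of the metal-plate quarter-wave shifter and the radiometer frequency plan.
import numpy as np

c = 299.792458                         # mm/ns, so f in GHz gives lambda in mm
f, d, length = 110.0, 5.7, 23.5        # GHz, plate spacing (mm), device length (mm)
lam = c / f
lam_g = lam / np.sqrt(1.0 - (lam / (2.0 * d)) ** 2)   # assumed parallel-plate dispersion

shift_waves = length * (1.0 / lam - 1.0 / lam_g)
print(f"differential shift = {shift_waves:.3f} waves ({360.0 * shift_waves:.1f} deg)")
# ~0.25 waves, i.e. ~lambda/4, as quoted

lo, if_centers, bw = 95.75, (12.5, 15.5), 1.0          # GHz
signal_band = (lo + if_centers[0] - bw / 2, lo + if_centers[1] + bw / 2)
image_band = (lo - if_centers[1] - bw / 2, lo - if_centers[0] + bw / 2)
print("signal band:", signal_band)                     # (107.75, 111.75) GHz
print("image band :", image_band)                      # (79.75, 83.75) GHz -> dichroic plate
```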
for protection against stray radiation from ec heating and eccd, the radiometer is equipped with a stripline notch filter with rejection in the 110.05.45 ghz range , similar to the one succesfully used in the horizontal ece radiometer . the notch - filtered signal beats in a mixer with a gunn diode lo , stabilized by a regulator , of measured frequency 95.75ghz .the signal , now down - converted in frequency , is amplified by two gaas field effect transistor ( fet ) amplifiers of gain 33 and 25db and low noise figures ( 2 and 7db , respectively ) .then it is split by a power divider and filtered by two bandpass filters centered at 12.5 and 15.5ghz .in fact , two sets of filters are available , respectively 400mhz and 1ghz wide . the filters , in connection with the lo frequency ,determine the channel frequencies of 108.25 and 111.25ghz , approximately symmetric around the gyrotron frequency , 110ghz .their separation , 3ghz , corresponds to a distance =5 cm in units of major radius .note , however , that the island is viewed obliquely .the intermediate - frequency signals at the output of the filters are measured by crystal detectors .their video - frequency outputs are filtered ( <30khz ) and finally amplified by videoamplifiers of gain =450 - 2300 and corresponding bandwidth 80 - 50khz at 3db , which is more than sufficient to assist eccd up to the maximum gyrotron modulation frequency ( 10khz ) . as motivated in sec.[secprincip ] and illustrated in fig.[figblockelectr ], a simple analog voltage subtractor generates the difference between the signals from the radiometer channels .the difference signal is then filtered to remove slow changes of , noise , and ntms which are too fast for gyrotron modulation , it is fed into the phase - shifter described below , and ultimately in a comparator .this converts it in a square wave suitable for modulation of the gyrotron power supply , but only if the input signal exceeds a threshold : small differential signals are not used for modulation , as they typically indicate poor alignment ( sec.[secprincip ] ) .the gyrotrons are located 15 and 30 away , toroidally , from the oblique ece ( fig.[figtopview ] ) .the desired phase shift is the sum of three contributions .the first one compensates for the different toroidal location of the oblique ece sensor and eccd actuators . depending on the mode rotating clockwise or counter - clockwise, it can be a _ phase _ retardation ( or , where is the toroidal mode number ) or anticipation ( or - ) .the second contribution is a fixed _ time _ anticipation , to compensate for the finite turn - on and turn - off time of the gyrotrons and their power supply , as well as the discharge time of their cables .it amounts to - , depending on the gyrotron .the third contribution , , compensates for the fixed time - delays introduced by filters and videoamplifiers .the resultant of these contributions is always negative for normal ( counter - clockwise ) plasma rotation . for clockwise rotation, it becomes negative at high frequencies . in the impossibility of imparting negative phase shifts ,the circuit was designed for positive time shifts where is the smallest integer giving . as an example , a phase shift of -15 is replaced by an equivalent phase shift of 345 . has to be small because the frequency might change from one period to another .it is well - known that , generally , phase shifters provide constant time - delay . 
here , instead, we need a nearly - constant phase - shift ( eq.[eqshift ] ) .the range of interest is =1 - 10khz : modes rotating at less than 1khz tend to lock , and other control strategies need to be used , while 10khz exceeds the gyrotron modulation capabilities , in which case continuous eccd can be used for stabilization .constant phase shifts were obtained in the past in simple op - amp circuits with a fixed capacitor and a frequency - dependent resistor : with a resistance decaying inversely with the frequency , the -time of the circuit also decreases like , yielding costant phase shift . in turn , the frequency - dependent resistor consisted of a frequency - to - voltage converter ( fvc ) , of output , and a voltage controlled resistor ( vcr ) , typically a dual junction - gate fet ( jfet ) , where the gate - source voltage controls the resistance between the drain and the source .unfortunately this dependence is linear only in narrow ranges of frequencies : 20 - 40khz in ref. or 30 - 50mhz in ref. . on the broader range of interest here ( 1 - 10khz ) ,dual jfets and other fets exhibit marked nonlinearities .for this reason it was decided to adopt as vcr a digitally programmable resistor controlled by a microcontroller ( fig.[figblockps ] ) .the microcontroller has a built - in analog - to - digital converter sampling the fvc output . in brief, the fvc measures the rotation frequency and converts in a voltage which is digitized by the microncontroller .on the basis of that input voltage and of a programmed ( or ) curve , the microcontroller voltage - controls the vcr so that it takes the desired value of .this is done virtually continuously , at a clock frequency of 20mhz , much higher than 1 - 10khz .the digital resistor is calibrated as follows .the phase - shifter is fed with sine - waves of known frequencies , then a variable resistor is adjusted so that the output signal has the desired phase shift .the frequencies and associated resistor values are recorded and a curve is fit to them .this curve is then programmed into the microcontroller .this approach extends the fvc - vcr technique to a broader frequency range and adds flexibility , in that arbitrary curves can be programmed , allowing for example the inclusion of and in eq.[eqshift ] . 
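The logic that the microcontroller implements can be summarized in a few lines. The sketch below first builds the total desired phase shift from the three contributions and wraps it to its smallest positive equivalent, as in eq.[eqshift], and then fits a calibration curve of the form R(f) = a/f + b to measured (frequency, resistance) pairs, mirroring the manual calibration procedure described above. The mode number, toroidal offset, anticipation times and the calibration values themselves are illustrative placeholders, not the DIII-D hardware values.

```python
# Sketch of the phase-shifter programming logic: wrap the desired shift to its
# smallest positive equivalent (eq.[eqshift]) and evaluate a fitted calibration
# curve R(f) = a/f + b.  All numbers are illustrative placeholders.
import numpy as np

def desired_phase(f_hz, n=2, dphi_toroidal_deg=-15.0,
                  t_anticipation=-40e-6, t_filters=-20e-6):
    """Smallest positive equivalent of the total desired phase shift (deg)."""
    phi = n * dphi_toroidal_deg + 360.0 * f_hz * (t_anticipation + t_filters)
    return phi % 360.0                       # e.g. -15 deg -> 345 deg

# measured (frequency, resistance) calibration pairs (mock values)
f_cal = np.array([1e3, 2e3, 3e3, 5e3, 7e3])            # Hz
R_cal = np.array([8.0e3, 4.1e3, 2.7e3, 1.7e3, 1.2e3])  # Ohm

# resistance decays roughly as 1/f, so fit R(f) = a/f + b
a, b = np.polyfit(1.0 / f_cal, R_cal, 1)

def programmed_resistance(f_hz):
    """Resistance the microcontroller sets at rotation frequency f_hz."""
    return a / f_hz + b

for f in (1e3, 4e3, 7e3):
    print(f"f = {f / 1e3:.0f} kHz   phi_eq = {desired_phase(f):6.1f} deg   "
          f"R = {programmed_resistance(f) / 1e3:5.2f} kOhm")
```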
the final result of this circuitry is a 0 - 10v square wave , optically converted and fed in the gyrotron power supplies .four of these systems are implemented , with up to 7 interchangeable microcontrollers , programmed for phase shifts equivalent to 0 , , and .the large shifts are intended for =2 modes .measurements confirm that the shifter introduces a dramatically non - flat and a nearly flat ( fig.[figprogres ] ) .the slight decrease of is intentional , because and become increasingly important at higher .the good behavior of is the result of programming the microcontroller according to the manual calibration of described above .the shifter operates in the 1 - 7khz range ( fig.[figprogres ] ) , limited by the dynamic range and precision of the vcr , which can only be set to 256 values ( 8 bit ) in the 0 - 10k range .these resistances are too low at the high end and relatively too coarse at the low end .extension to higher might be possible by a piecewise combination of curves of the type plotted in fig.[figprogres ] , adopting higher at higher ( the digital resistor can be programmed to yield this complicated ) .for example , at one might apply =705 instead of 345 .both are equivalent to -15 , but 705 resets to a higher , more accessible value , from which it decays again with , until eventually will be adopted , and so on .the setup described here was used to control the eccd modulation in real time in ntm stabilization experiments , as well as to check the quality of the radial alignment between eccd and ntm after the discharge was completed .fig.[fig132113det]a shows a detail of the eccd power injected in the plasma , which was modulated on the basis of the oblique ece signals in fig.[fig132113det]b - c .the satisfactory quality of modulation is qualitatively confirmed by how well the eccd correlates with the mirnov probe measurement of the rotating ntm in fig.[fig132113det]d , except for the phase , which is different , as it should be due to the different locations .the phase difference between the two oblique ece channels is plotted in figure [ figalign1 ] .this is about zero , except in the marked interval , when the normalized minor radius of the eccd matches the of the ntm within 1% .note in fig.[fig132113det ] and in the marked interval in fig.[figalign1 ] that the phase - difference between oblique ece signals , even in case of good alignment , is not exactly .this is expected , and discussed in the next section .the application of oblique ece to eccd modulation does not require the radiometer to be calibrated , neither absolutely ( in temperature ) nor relatively ( from channel to channel ). the application to eccd alignment does not need absolute calibration either , but it requires the two channels to have approximately the same responsivity , because good alignment is recognized by two signals being out of phase and having approximately equal amplitude .the modulation works even when the alignment indicator does not , as far as one channel works ( sec.[secprincip ] ) . 
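As a complement to the analog alignment indicator, the same phase and amplitude comparison can be carried out offline on the recorded channel signals. The minimal sketch below estimates the relative phase and amplitude of the two channels at the mode frequency from their Fourier components; the synthetic time series, sampling rate and mode frequency are placeholders for the measured data.

```python
# Offline version of the alignment check: relative phase and amplitude of the
# two oblique ECE channels at the mode frequency, estimated from their Fourier
# components.  A phase difference near +/-180 deg with comparable amplitudes
# indicates good radial alignment.  All signal parameters are assumptions.
import numpy as np

fs = 200e3                          # sampling rate [Hz] (assumed)
t = np.arange(0, 0.05, 1.0 / fs)    # 50 ms analysis window
f_mode = 4.2e3                      # NTM rotation frequency [Hz] (assumed)

rng = np.random.default_rng(0)
ch_lo = 1.0 * np.sin(2 * np.pi * f_mode * t) + 0.2 * rng.standard_normal(t.size)
ch_hi = 0.9 * np.sin(2 * np.pi * f_mode * t + np.pi) + 0.2 * rng.standard_normal(t.size)

def phase_amplitude_at(f0, x, y, fs):
    """Relative phase (deg) and amplitude ratio of y w.r.t. x at frequency f0."""
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))
    X, Y = np.fft.rfft(x)[k], np.fft.rfft(y)[k]
    return np.degrees(np.angle(Y * np.conj(X))), np.abs(Y) / np.abs(X)

dphi, ratio = phase_amplitude_at(f_mode, ch_lo, ch_hi, fs)
print(f"phase difference : {dphi:6.1f} deg   (about +/-180 deg when aligned)")
print(f"amplitude ratio  : {ratio:.2f}       (about 1 when aligned)")
```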
by lowering the threshold in the discriminator ( sec.[secinterf ] ), modulation can work even if good alignment has not been achieved yet and the signals are in phase , nearly canceling each other .likewise , the indicator of alignment can work even under conditions preventing proper modulation , such as the mode rotating too fast for gyrotron modulation ( ) , but not for oblique ece detection ( ) .it should be pointed out that signals emitted on opposite sides of the island are not exactly out of phase by 180 , consistent with the oblique view and with flow shear .the effect is more pronounced ( to the point that signals appear in phase ) if the view is tangent to the flux - surface of interest .this geometry inhibits both modulation and alignment , and should be avoided .another potential issue , paradoxically , stems from the fact that the ntm and eccd have the same frequency . as a result , at that frequency the oblique ece will measure a associated with the ntm , and a caused by the modulated heating that inevitably accompanies the modulated eccd . in principle , this can lead to a nonlinear effect : the oblique ece modulates the eccd , which perturbs the oblique ece , etc .in practice , however , the radial profiles of are so different that a phase discontinuity across the island center is recognized even in presence of a modulated eccd background : the associated with an ntm peaks at the island edges and basically oscillates as a whole , i.e. with the same phase everywhere , except for a change of sign from one side of the island to the other ; from modulated eccd , instead , peaks at the deposition location ( i.e. at the island center , if it is properly aligned ) and propagates from there ( heat pulses ) . finally , an intrinsic limitation of oblique ece is the higher level of doppler broadening compared with horizontal ece .this smooths and broadens the profile , and lowers its peaks , but preserves its shape and inversion radius , guaranteeing that modulation and alignment are still possible .this work was supported in part by the us doe under de - fg03 - 97er544156 and de - sc0006415 .fv thanks d. kellman and r. nazikian for the fruitful discussions and r. la haye and r. prater for the interest and encouragement .this paper is dedicated to coauthor t. deterly , recently deceased .( color online ) top view of diii - d and experimental setup interfacing the oblique ece radiometer to the gyrotron power supplies ( ps ) , after introducing a phase shift to compensate for the different toroidal locations .the radiometer collects emission from the dashed line , which varies with time as the 3/2 mode rotates in the direction .only the o - point of the mode ( or , more precisely , a flux tube around it ) is shown for simplicity , in orange . ]( color online ) ( a ) plate array used to linearize the o and x elliptical polarizations and guarantee pure x - mode measurements .( b ) dichroic plate acting as a high pass filter , to reject the lower sideband ( 80 - 84ghz ) of the radiometer .insets : diii - d cross - sections , ec cold resonances and deposition regions for correct and incorrect polarizations and frequencies . 
] block diagram of the frequency - independent phase shifter .a frequency - to - voltage converter ( fvc ) controls , through a micro - controller , a voltage - controlled resistor ( vcr ) .its resistance , multiplied by the =10nf/820pf capacitance , delays the the signal by a time .the frequency - dependence of is programmed to yield the phase - shift of eq.[eqshift ] and fig.[figprogres ] . ]( color online ) measured time - delay and phase - shift imparted by the circuit in fig.[figblockps ] to gyrotrons 3 and 4 , when the vcr is programmed for a 3/2 mode rotating in the standard direction ( fig.[figtopview ] ) . ]( color online ) ( a ) eccd and normalized minor radii and ( b ) phase difference between oblique ece signals at 108.25 and 111.25ghz , exhibiting a phase jump in correspondence of good radial alignment between eccd at 110ghz and the rotating ntm responsible for the oblique ece oscillations . ]
|
a 2 channel oblique electron cyclotron emission ( ece ) radiometer was installed on the diii - d tokamak and interfaced to 4 gyrotrons . oblique ece was used to toroidally and radially localize rotating magnetic islands and so assist their electron cyclotron current drive ( eccd ) stabilization . in particular , after manipulations performed by the interfacing analogue circuit , the oblique ece signals directly modulated the current drive in sync with the island rotation and in phase with the island o - point , for a more efficient stabilization . apart from the different toroidal location , the diagnostic view is identical to the eccd launch direction , which greatly simplified the real - time use of the signals . in fact , a simple toroidal extrapolation was sufficient to lock the modulation to the o - point phase . this was accomplished by a specially designed phase shifter with a nearly flat phase response over the 1 - 7khz range . moreover , correlation analysis of the two channels slightly above and below the eccd frequency allowed the radial alignment to the island to be checked , based on the fact that for satisfactory alignment the two signals are out of phase .
|
cellular systems are not entirely deterministic , but are instead impacted by small , random fluctuations in the number and activity of molecules of intracellular species .such fluctuations lead to macroscopic effects in a diverse array of processes . in differentiation ,the resulting noise plays a central role in cell fate determination and can allow clonal populations of differentiating cells to achieve distinct final states .noise can also produce spontaneous transitions , whereby it causes a system to switch from one stable state to another , often producing a significant change of phenotype or function .such stochastic state switching occurs , for example , in the _ lac _ system , where rare , brief transcription events in the `` off '' state cause large bursts in lacy expression , which in turn can be amplified and stabilized by a positive feedback loop .stochastically - induced transitions also underlie recent observations of spontaneous dedifferentiation in cancer cells , in which cancer stem cells arose de - novo from non - stem cell populations .the response to noise and the overall behavior of many biophysical systems are determined by an underlying epigenetic landscape . in this landscape ,the valleys represent the distinct achievable states of the system and the heights of the separating barriers determine their robustness to noise .a benefit of this representation is that bifurcation points locations in the parameter space at which one or more stable states suddenly cease to exist correspond precisely to the points where one or more of such barriers first reach zero height as parameters change .this landscape thus incorporates two distinct features of a state , namely its robustness to noise and its deterministic stability , into one : _ the less robust a state is to noise , the closer it is to being eliminated through a bifurcation , and vice - versa_. the landscape representation has been given a quantitative foundation as the quasipotential of the deterministic component of the system dynamics and has been explored in experiments e.g ., to show how two parameters in the yeast galactose signaling network , the concentrations of galactose and intracellular gal80p , can alter the rates of stochastic switching in this bistable circuit .despite these advances , researchers ability to control this landscape in order to induce prespecified biological outcomes has been generally limited to at most two parameters , and no general method currently exists to systematically tune transitions between stable states and/or eliminate undesired states altogether . the possibility of such control would offer clear opportunities .for example , under the widely supported stochastic model for induced pluripotent stem cell generation , a majority of cells have the possibility of being reprogrammed , even though existing technologies have achieved substantially smaller yields .the ability to control the response to noise of differentiated and stem - cell states ( e.g. , inhibiting transitions to the first and promoting transitions to the second ) could lead to enhanced procedures to create induced pluripotent stem cells .similarly , in the context of the `` cancer attractor '' hypothesis , in which normal and cancer cells correspond to distinct co - existing stable states , identifying interventions that destabilize , or eliminate , the cancerous state could lead to new therapeutic strategies . 
in this paperwe propose a broadly applicable method , here termed _optimal least action control _ ( olac ) , that can predict and control the dynamical behavior and response to noise in a wide class of biophysical networks . as schematically illustrated in fig .[ fig:0 ] , the essence of our approach is that to control a biophysical system it is sufficient to identify interventions e.g . , changes to gene expression , protein levels , or interaction rates that can reshape the topography of the underlying quasipotential in a desired way .this approach ultimately leads to a _ network of state transitions _ ( nest ) describing the transitions between stable states and that can be controlled by changing the heights of the separating barriers without changing any quality of the noise . for a given system ,this is achieved by determining the minimum action paths those followed by the most likely noise - induced transition trajectories and the corresponding transition rates between all pairs of stable states , and then optimizing these transition rates for a desired outcome .furthermore , this general foundation in a physical least - action principle allows olac to be applied broadly to many other complex networks as well .in particular , while we focus our application of olac to biophysical networks , applications to other networks where noise and multistability play important roles , including power - grid networks , polymer networks and food - web networks , among others , are immediate within the formulation we establish here .we apply olac to several gene network models and illustrate how this method can be used to make biologically realizable reprogramming predictions . in the limit of zero noise intensity , olac automatically identifies bifurcations that eliminate undesirable states and induce purely deterministic transitions to the desired ones .the significance of the latter is demonstrated by considering a third application , to eliminate cancerous states in a cell death network model , which concerns a time scale for which stochastic switches can be neglected . as illustrated in these examples ,the nest is a powerful yet simple representation that captures the essence of the state switching dynamics and can inform counterintuitive results e.g . , the possibility of transitions through intermediate stable states when direct transitions are essentially impossible ( a behavior observed even in high dimensions , where indirect transitions generally require longer paths ) . 
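To make the least-action ingredient concrete, the sketch below evaluates a discretized Freidlin-Wentzell action of a path joining two stable states and minimizes it over the interior path points at fixed transition time. The drift is a generic one-dimensional double well rather than one of the gene-network models considered in the applications, and the fixed-time discretization is a simplification; for this gradient example the infinite-time minimum action is 2*DeltaV = 0.5, which the numerical value should approach.

```python
# Sketch of the least-action ingredient: the discretized Freidlin-Wentzell
# action S = 1/2 * integral |x' - F(x)|^2 dt of a path joining two stable
# states, minimized over the interior path points at fixed transition time T.
# F(x) = x - x^3 (stable states at -1 and +1) is used only for illustration.
import numpy as np
from scipy.optimize import minimize

def F(x):
    return x - x**3          # illustrative drift, not a gene-network model

def action(interior, x_start, x_end, T, n):
    """Discretized Freidlin-Wentzell action of a path on n+1 grid points."""
    path = np.concatenate(([x_start], interior, [x_end]))
    dt = T / n
    xdot = np.diff(path) / dt
    xmid = 0.5 * (path[:-1] + path[1:])        # midpoint rule
    return 0.5 * np.sum((xdot - F(xmid)) ** 2) * dt

x_start, x_end, T, n = -1.0, 1.0, 30.0, 300
x0 = np.linspace(x_start, x_end, n + 1)[1:-1]  # straight-line initial path
res = minimize(action, x0, args=(x_start, x_end, T, n), method="L-BFGS-B")
print(f"minimum action S ~ {res.fun:.3f}   (expected ~0.5 for large T)")
# The corresponding transition rate scales as exp(-S/eps) for small noise
# intensity eps, and it is this exponent that the control parameters reshape.
```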
the method proposed hereis easily implemetable and the computational effort scales linearly in the number of control parameters and the dimension of the state space , allowing our approach to be applied to large networks and high - dimensional systems in general .we consider biophysical networks whose deterministic components are described by non - linear differential equations of the form , where is a vector representing the activity of the relevant biological factors , is the function representing the rates of change of these factors , and is a set of tunable parameters , which we show can be manipulated to drive cellular processes in advantageous directions .we focus on the most prevalent case of systems with two or more stable states and , although our approach is general , for concreteness we first assume that these states are time - independent .time - independent stable states correspond to fixed points , defined by , towards which neighboring trajectories converge over time .the set of all stable states represent the possible long - term behaviors of the deterministic system .stochasticity is modeled here as additive gaussian white noise , where is the variance of the distribution ( other cases are discussed in the supplemental material ) . with the addition of this small noise term, trajectories no longer approach the stable states asymptotically as in the deterministic case . instead, a trajectory close to a stable state will oscillate stochastically within its basin of attraction , typically staying close to the fixed point for long periods of time .the trajectory will also make rare but large excursions from the stable state .after sufficiently long time an excursion large enough to eject the trajectory from the original basin of attraction will necessarily occur , at which point it will transition to the neighborhood of another stable state of the system .the time scales for the occurrence of such transitions may be shorter or longer than the biologically relevant ones .the manipulation of these time scales underlies much of the control approach introduced below . for a given noise intensity , the transitions between two stable states , and , occur as a poisson process with a certain rate .these rates can be computed by evolving eq . , but in general at an unreasonably high computational cost . as a way to reduce this effort, we employ an asymptotic formula : where serves as an excellent approximation to for small compared to , which is typically the case for noise associated with biophysical systems . here , is the minimum of the freidlin - wentzell action ] , and .this is an effective representation of the combined effects of egf and notch signaling , whose signaling strengths are determined by and , respectively .the other parameters are , , , and . _ hpc differentiation model . _the deterministic component of the hpc differentiation model is where are the fixed parameters in these equations .each parameter represents a tunable factor to alter the expression of the gene represented by . 
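A direct stochastic simulation illustrates the switching behavior that the asymptotic rate formula summarizes. The Euler-Maruyama sketch below adds Gaussian white noise of intensity eps to the same illustrative double-well drift used in the action sketch above and records the mean residence time in a basin; all numerical settings are arbitrary choices made for the illustration, not parameters of the differentiation models.

```python
# Euler-Maruyama illustration of noise-induced switching: the mean residence
# time in a basin grows steeply as eps decreases, and eps*log(tau) approaches
# the minimum action (~0.5 for this double well) from above, the offset
# reflecting the neglected prefactor.
import numpy as np

def mean_residence_time(eps, dt=2e-3, n_steps=2_000_000, seed=1):
    """Average time between basin switches along one long trajectory."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_steps)          # pre-drawn noise increments
    amp = np.sqrt(eps * dt)
    x, basin, t_last, waits = -1.0, -1, 0.0, []
    for k in range(n_steps):
        x += (x - x**3) * dt + amp * xi[k]
        if basin < 0 and x > 0.8:              # well past the barrier at 0
            waits.append(k * dt - t_last); basin, t_last = 1, k * dt
        elif basin > 0 and x < -0.8:
            waits.append(k * dt - t_last); basin, t_last = -1, k * dt
    return np.mean(waits) if waits else float("inf")

for eps in (0.20, 0.15, 0.12):
    tau = mean_residence_time(eps)
    print(f"eps = {eps:.2f}   mean residence time ~ {tau:7.1f}   "
          f"eps*log(tau) ~ {eps * np.log(tau):.2f}")
```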
_ cell death model ._ the cell death model was converted from the boolean model ( available at http://www.ebi.ac.uk/biomodels-main/ model0912180000[http://www.ebi.ac.uk / biomodels - main/ model0912180000 ] ) into a continuous version using the odefy package .the system has 22 variables , representing gene products , and 42 tunable parameters .in addition , the model has 3 input parameters that do not change in time , and 3 output variables that indicate the state of the cell ( distinct combinations of which indicate whether the cell is in the survival , apoptotic , necrotic , or naive state ) .in many biological systems there are dozens or hundreds of parameters that could potentially be changed , but in general only a few of them can be changed in any one intervention . to identify the few most promising targets from a large field of possible ones we employ a sparsity constraint .the constraint is implemented as where , as above , denotes the change due to the control action . while the condition in eq .is by itself consistent with all parameters being altered , optimization under this constraint ( as the one invoked by olac ) is expected to lead to a reduced number of modified parameters .the basis for this conclusion is that this constraint works similarly to well - established methods of convex regularization , which are known to lead to sparsity under general conditions .the specific number of modified parameters as well as success rate will generally depend on , and this dependence can be explored as an additional control factor .this formulation has the remarkable advantage of involving only one optimization step , and hence avoids the combinatorial explosion that would be involved in testing all combinations of possible `` choose '' tunable factors .indeed , an exhaustive strategy would be computationally prohibitive since testing for all would require optimization steps , which is for . naturally ,if a particular parameter is not targetable under the given conditions , such information can be directly be incorporated into our analysis .
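The effect of the sparsity constraint can be illustrated with a small stand-in problem. In the sketch below the dependence of the objective on the parameter changes is replaced by a linearized quadratic surrogate, and an l1 penalty is minimized by plain proximal gradient descent (ISTA); the problem size matches the 42 tunable parameters of the cell death model, but the surrogate data and the planted two-parameter solution are synthetic and purely illustrative.

```python
# Stand-in illustration of the sparsity constraint: minimize a linearized
# quadratic surrogate 0.5*||A dp - b||^2 plus an l1 penalty on the parameter
# changes dp via ISTA.  A, b and the planted solution are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_par = 42
A = rng.standard_normal((60, n_par)) / np.sqrt(60)
dp_true = np.zeros(n_par); dp_true[0], dp_true[7] = 3.0, -2.0
b = A @ dp_true                                 # surrogate "target" response

def soft_threshold(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def sparse_control(A, b, lam, n_iter=5000):
    """min_dp 0.5*||A dp - b||^2 + lam*||dp||_1 via proximal gradient descent."""
    dp = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ dp - b)
        dp = soft_threshold(dp - step * grad, step * lam)
    return dp

dp = sparse_control(A, b, lam=0.05)
targets = np.flatnonzero(np.abs(dp) > 1e-6)
print("parameters modified:", targets, f"(out of {n_par})")
print("changes:", np.round(dp[targets], 2))
```

A single penalized solve of this kind returns a small set of candidate targets directly, in contrast to testing every combination of a fixed number of parameters.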
|
noise caused by fluctuations at the molecular level is a fundamental part of intracellular processes . while the response of biological systems to noise has been studied extensively , there has been limited understanding of how to exploit it to induce a desired cell state . here we present a scalable , quantitative method based on the freidlin - wentzell action to predict and control noise - induced switching between different states in genetic networks that , conveniently , can also control transitions between stable states in the absence of noise . we apply this methodology to models of cell differentiation and show how predicted manipulations of tunable factors can induce lineage changes , and further utilize it to identify new candidate strategies for cancer therapy in a cell death pathway model . this framework offers a systems approach to identifying the key factors for rationally manipulating biophysical dynamics , and should also find use in controlling other classes of noisy complex networks . + subject areas : complex systems , biological physics , nonlinear dynamics
|
many applications such as agriculture , wind energy production or aviation require accurate and reliable forecasts of wind speed .wind speed predictions are usually based on output from numerical weather prediction ( nwp ) models which describe the dynamical and physical behavior of the atmosphere through nonlinear partial differential equations .historically , single runs of nwp models with the best available initial conditions were used to obtain single - valued predictions of the future state of the atmosphere . however , such deterministic forecasts fail to account for uncertainties in the initial conditions and the numerical model . therefore , nwp models are nowadays often run several times with varying initial conditions and/or numerical representations of the atmospheric processes , resulting in an ensemble of forecasts .since the first operational implementation by the european centre for medium - range weather forecasts ( ecmwf , see * ? ? ?* for a description of the current version ) , the generation of ensemble forecasts has become standard practice in meteorology .all major national meteorological services operate their own ensemble prediction systems ( epss ) as for example the pearp eps of mteo france or the cosmo - de eps of the german meteorological service .recent developments in ensemble forecasting include multi - model ensemble prediction systems such as the thorpex interactive grand global ensemble ( tigge , * ? ? ?* ) where several single - model ensembles each based on multiple runs of individual nwp models are combined , see , e.g. , .another example is the grand limited area model ensemble prediction system ( glameps , * ? ? ?* ) considered in this article which is described in more detail in section [ sec : sec2 ] . generally , probabilistic forecasts , i.e. , forecasts given in the form of full probability distributions , are desirable as they allow for a quantification of the uncertainty associated with the prediction .probabilistic forecasts further allow for optimal decision making since optimal deterministic forecasts can be obtained as functionals of the forecast distributions .this is particularly important for applications such as wind power forecasting for auction processes in electricity markets where the optimal bidding strategy depends on permanently changing features of the market conditions .while the implementation of ensemble prediction systems is an important step in the transition from deterministic to probabilistic forecasting , ensemble forecasts are finite and do not provide full predictive distributions .further , ensemble forecasts generally tend to be underdispersive and subject to systematic bias , and thus require some form of statistical postprocessing .various methods for statistical postprocessing of ensemble forecasts have been developed over the last years , for recent reviews and comparisons , see , e.g. , .state of the art techniques include bayesian model averaging ( bma ; * ? ? ?* ) and ensemble model output statistics ( emos ) or non - homogeneous regression .both approaches provide estimates of the future distributions of the weather variables of interest and are partially implemented in the ensemblebma and ensemblemos packages for the statistical programming language r . 
in the case of bma, the predictive probability density function ( pdf ) of a future weather quantity is a weighted mixture of individual pdfs corresponding to the members of the ensemble , where the weights are determined by the relative performance of the ensemble members during a given training period .the bma models for various weather quantities differ in the pdfs of the mixture components . for wind speed , suggest the use of a gamma mixture , whereas considers bma component pdfs following a truncated normal ( tn ) rule .the emos approach is conceptually simpler , the predictive pdf is given by a single parametric distribution with parameters depending on the ensemble members .over the last years , emos models have been developed for calibrating ensemble forecasts of various weather variables such as temperature and sea level pressure , wind speed and precipitation .the parameters of the forecast distributions are typically estimated by minimizing proper scoring rules evaluated at forecasts and verifying observations over rolling training periods consisting of the preceding days . for selecting the corresponding training sets ,two basic approaches are given by local and regional methods .in the local approach , only forecast cases from the single observation station of interest are considered for the parameter estimation , whereas in the regional approach , data from all available observation stations are composited to form a single training set for all stations .local estimation generally results in better predictive performance ( see , e.g. , * ? ? ?* ; * ? ? ?* ) , however , is often problematic if only limited amounts of training data are available . on the other hand ,there are typically no numerical stability issues in regional parameter estimation , however , in case of large ensemble domains it is undesirable to obtain a single set of coefficients for all observation stations due to the potentially significant differences in the climatological properties of the observation stations and forecast errors of the ensemble .we apply the truncated normal emos model of for statistical postprocessing of wind speed forecasts of the 52-member glameps ensemble .the glameps ensemble covers a large domain across europe and northern africa , however , only a short period of data is available .we propose two similarity - based semi - local approaches to parameter estimation in order to account for these challenges .a distance - based approach uses data from stations with similar characteristics to augment the training data for a given stations and follows ideas of .our novel clustering - based approach employs -means clustering to obtain groups of similar observation stations with respect to various features which then form shared training sets for parameter estimation .the remainder of this article is organized as follows . in section [ sec : sec2 ] , we introduce the glameps ensemble and the observation data . in section[ sec : sec3 ] , we review the truncated normal emos model and propose similarity - based semi - local approaches to parameter estimation based on distance functions and clustering . 
in section[ sec : sec4 ] , we report the results of the case study based on the glameps data .we conclude with a discussion in section [ sec : sec5 ] .the glameps ensemble is a short - range multi - model eps launched in 2006 as a part of the cooperation between the aladin and hirlam consortia .it operates on a large domain covering europe , north - africa and the northern atlantic and the currently running version 2 ( glamepsv2 ) is a combination of the subensembles from two versions of the alaro model ( isba and surfex schemes , see , e.g. , and ) and two version of the hirlam model ( kain - fritsch and straco schemes , see , e.g. , and ) .each subensemble consists of perturbed members and a control forecast , and half of the perturbed members are lagged by 6h .our data base contains 52 ensemble members of 18h ahead forecasts of 10-m wind speed for 1738 observation sites ( see figure [ fig : fig1]a ) together with the corresponding validating observations for october 2 november 25 , 2013 , and february 2 may 18 , 2014 .we divide the available data into two equally large periods from october 2013 to february 2014 and from march 2014 to may 2014 in order to allow for rolling training periods of sufficient length .the forecasts are evaluated over the second period .data from the first period are used to obtain training periods of equal lengths for all days , and to estimate the similarities between the stations used in the distance - based semi - local approach to parameter estimation , see section [ sec : sec3 ] for details .while apply bma to calibrate temperature forecasts of the glameps ensemble , the article at hand is the first application of postprocessing techniques to the corresponding wind speed forecasts to the best of the authors knowledge .figure [ fig : fig1]b shows the verification rank histogram of the raw ensemble .this is the histogram of ranks of validating observations relative to the corresponding 52 ensemble member forecasts over the verification period ( see , e.g. , * ? ? ?* section 7.7.2 ) . for a calibrated ensemble ,forecasts and observations should be exchangeable and all observed ranks should thus be equally likely and follow a uniform distribution corresponding to the dashed line in figure [ fig : fig1]b .the u - shaped verification rank histogram of the glameps ensemble indicates that the glameps forecasts lack calibration and are underdispersive , i.e. , too many observations fall outside the ensemble range .this deficiency can be observed for various ensemble prediction systems , see , e.g. , .as discussed in the introduction , the goal of ensemble postprocessing is to correct for biases and dispersion errors in nwp model output .the emos approach uses a single parametric distribution to model the pdf of the future weather quantity , where the parameters depend on the ensemble members . in case of the wind speedthis pdf should be concentrated on the non - negative values . 
herewe apply the truncated normal emos model introduced by , however , alternative emos models utilizing a generalized extreme value distribution and a log - normal distribution are available and have also been tested .as these alternative choices do not offer substantial improvements for the data at hand , we limit our discussion to results for the tn model and note that similar conclusions apply for the alternative emos approaches .the pdf of the tn distribution with location , scale , and cut - off at zero , denoted by , is given by where and are the pdf and the cumulative distribution function ( cdf ) of the standard normal distribution , respectively .the emos predictive distribution proposed by is where denote the ensemble of distinguishable forecasts of wind speed for a given location and time , and denotes the ensemble mean .location parameters and scale parameters of model can be estimated from the training data consisting of ensemble forecasts and verifying observations from the preceding days by optimizing an appropriate verification score ( see section [ subs : subs3.2 ] ) .however , in case of the glameps ensemble , similar to the majority of the currently used ensemble prediction systems such as the ecmwf ensemble or the pearp eps of mteo france , some of the ensemble members are generated with the help of perturbations of the initial conditions simulating model uncertainties .this should be incorporated into the model formulation since these exchangeable members are assumed to be statistically indistinguishable . in what follows , if we have ensemble members divided into groups of exchangeable members , where the group contains ensemble members ( ) , notation is used for the member of the group . in this situation ensemble members within a given group share the same coefficient of the location parameter resulting in the predictive distribution where again , denotes the ensemble variance .model formulations that take into account the grouping in modeling the variance have also been investigated , but result in a reduction of the predictive performance .as argued concisely by , the general goal of probabilistic forecasting should be to maximize the sharpness of the predictive distribution subject to calibration . while calibration is a notion of statistical consistency between the predictive distribution and the observation ,sharpness is a property of the forecasts only and refers to the information content in the forecast distribution .calibration of emos post - processed forecasts can be assessed using probability integral transform ( pit ) histograms .the pit is the value of the predictive cdf evaluated at the verifying observations and the closer the histogram to the desired uniform distribution , the better the calibration .pit histograms can be seen as continuous analogues of verification rank histograms , see section [ sec : sec2 ] .further , one can also investigate the coverage of the central prediction interval corresponding to the nominal coverage of the raw ensemble which is or for the glameps ensemble .the coverage of a central prediction interval is the proportion of validating observations located between the lower and upper quantiles of the predictive distribution . 
for a calibrated probabilistic forecast this value should be around and the choice of corresponding to the nominal coverageallows direct comparison to the raw ensemble .given the predictive distribution is calibrated , it should be as sharp as possible , where sharper distributions correspond to narrower central prediction intervals .proper scoring rules assign numerical values to pairs of forecasts and observations and can be used to assess calibration and sharpness simultaneously .the most popular scoring rules providing summary measures of predictive performance are the logarithmic score , i.e. , the negative logarithm of the predictive pdf evaluated at the verifying observation , and the continuous ranked probability score ( crps ; * ? ? ?* ; * ? ? ?* ) . given a predictive cdf and an observation , the crps is defined as where denotes the indicator of a set , while and are independent random variables with cdf and finite first moment . in case of ensemble forecasts ,the predictive cdf is given by the empirical cdf of the ensemble .the crps can be expressed in the same unit as the observation and both scores are proper scoring rules which are negatively oriented , i.e. smaller scores indicate better predictive performance . following the optimum score estimation approach of ,proper scoring rules can be utilized in parameter estimation by minimizing the average value of a proper scoring rule over a training set . in this waythe optimization with respect to the logarithmic score corresponds to the classical maximum likelihood ( ml ) estimation of the parameters . in case of a truncated normal predictive distributionthe crps has a closed form ( see , e.g. , * ? ? ? * ) which allows for an efficient parameter estimation based on optimizing the mean crps .point forecasts given by the median value of the predictive distribution are evaluated using the mean absolute error ( mae ) quantifying the deviation from the corresponding validating observations to assess the deterministic predictive accuracy .note that the median value is the optimal point forecast under the mae . in general ,the coefficients of the tn emos model are estimated by minimizing the mean crps of the predictive distributions over suitably chosen rolling training periods consisting of the preceding days .there exist two basic approaches for selecting the training data .the regional ( or global ) approach composites ensemble forecasts and validating observations from all available stations during the rolling training period .therefore , one obtains a single universal set of parameters across the entire ensemble domain , which is then used to produce the forecasts at all observation sites . in case of the glameps ensemblethis means that a single set of coefficients is used for the wide - ranging domain and the geographical and climatological variability might thus not be sufficiently taken into account . while the regional approach to parameter estimation can be implemented without numerical stability issues and offers slight gains in predictive performance compared to the raw ensemble ( see section [ sec : sec4 ] ) , there is room for further improvement for large and heterogeneous domains .by contrast , the local approach produces distinct parameter estimates for different stations by using only the training data of the given station .local models typically result in better predictive performance compared to regional models ( see , e.g. , * ? ? ?* ; * ? ? 
?* ) , however , these training sets contain only one observation per day and the estimation of local emos models thus requires significantly longer training periods to avoid numerical stability issues .for example , in model with exchangeable groups ( which is the case for the glameps ensemble , see section [ sec : sec4 ] ) the number of free parameters to be estimated is , making the use of local emos impossible for small data sets such as the one considered in this article . in a recent case study on emos models for the ecmwf ensemble , find that training period lengths between 365 and 1816 days give the best results for local parameter estimation . for the glameps data at hand ,choosing such long training periods is impossible as the whole data set consists of only 161 days .we propose two alternative similarity - based semi - local approaches which avoid the problems that make both regional and local estimation of the emos coefficients undesirable for the glameps data .the basic idea of the semi - local methods is to combine the advantages of regional and local estimation by augmenting the training data for a given station with data from stations with similar characteristics .the choice of similar stations is either based on suitably defined distance functions , or on clustering .following , the training sets of a given station are increased by including training data from other stations with similar features .the similarity between stations is determined based on suitably defined distance functions with .distance functions can thus be seen as negatively oriented similarity measures with smaller values indicating more similar characteristics of the stations of interest . ] .note that compared to , we consider alternative choices of distance functions , and our forecasts are evaluated over a set of observation stations whereas the forecasts and analysis data used by are given on a grid .different conclusions may apply for grid - based data .generally , the distance between two stations and denoted by with is determined using the first period of available data from october 2013 to february 2014 which is distinct from the verification period . in the semi - local estimation of the emos model for a given station , we then add the corresponding forecast cases in the rolling training period from the most similar stations , i.e. , the stations with the smallest distances .alternatively , one could also iteratively determine the similarities anew in every rolling training period .however , this approach requires lots of computational resources as all pair - wise distances between stations have to be re - computed for every training period ( up to symmetry ) , and is thus infeasible due to the large number of observation stations .in particular , note that already the simple distance - based semi - local model estimation with a fixed set of distances is computationally more demanding compared to local parameter estimation which arises as special case for . furthermore , initial tests did not indicate significant improvements in the predictive performance for the glameps data , we thus limit our discussion to the use of the first period of data for determining the similarities between stations for the distance - based approach .we investigate the following five distance functions ._ distance 1 : geographical locations_. the distance between stations and is given by the euclidean distance of the locations and of the two stations , i.e. 
, the euclidean distance is employed here since the station locations in the data set are given on the linearly transformed model estimation grid .in general , the spherical or great - circle distance is a more appropriate distance measure for actual geographical locations on the globe ._ distance 2 : station climatology_. let denote the empirical cdf of wind speed observations at station over the first period of data . similar to the distance function proposed by , the distance to station given by the normalized sum over the absolute differences of the respective empirical cdfs and evaluated at a set of fixed values , i.e. , where denotes the cardinality of . here ,we choose and note that the obtained similarities are robust to minor changes in the definition of ._ distance 3 : ensemble forecast errors_. denote the ensemble mean for station and date by and the corresponding verifying observation by then the forecast error of the ensemble mean is given by the third distance function is based on the distribution of these forecast errors . to that end, we define the empirical cdf of the forecast errors at station as where denotes the set of dates in the first period of data .the distance between two stations and is then given by where denotes the set of fixed values at which the empirical cdfs of the forecast errors are evaluated .as before , the obtained sets of similar stations are robust to changes of ._ distance 4 : combination of distance 2 and 3_. we add up the values of distances 2 and 3 to define a distance function which depends on both the climatology of the observations as well as the distribution of the forecast errors of the ensemble , i.e. , with the above notation , _ distance 5 : ensemble characteristics_. proposes a similarity - based implementation of the shaake shuffle using a distance function that depends on summary statistics of the ensemble .with and denoting the mean and standard deviation of the ensemble member forecasts at station and date , the distance between station and is given by where again denotes the set of dates during the first period of data .+ figure [ fig : distances - illustration ] illustrates the five distance functions for two of the observation stations by displaying the 100 most similar stations in a specific color each . for both stations, a portion of the sets of most similar stations measured by two or more distance functions overlaps . see figure [ fig : appendxfig1 ] in the appendix for individual plots for the five distance functions and the two stations . for the station at ouessant ( figure [ fig : distances - illustration]a ) which is located on the north - western coast of france, it can be observed that the 100 most similar stations measured by the distance functions depending on the distribution of the observations and forecast errors ( distances 24 ) are mostly located at coastal regions and islands in northern europe , in particular if these characteristics are combined ( distance 4 ) . by contrast, the most similar stations to the observation site at vienna ( figure [ fig : distances - illustration]b ) are distributed over continental central europe , mostly located in france , germany and poland . as implied by the definition ,the most similar stations measured by distance 1 ( and due to the large overlap also by distance 5 ) are located in close geographical proximity around the two observation sites . 
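For concreteness, the construction of distances 2-4 can be sketched in a few lines: empirical cdfs of the observations and of the ensemble-mean forecast errors are evaluated on fixed grids and their absolute differences are summed and normalized. The data arrays, grid ranges and sample sizes below are placeholders rather than the first-period GLAMEPS data.

```python
# Sketch of distances 2-4 for one pair of stations, based on empirical cdfs of
# wind speed observations and of ensemble-mean forecast errors evaluated on
# fixed grids.  All data and grid choices are placeholders.
import numpy as np

def ecdf(sample, grid):
    """Empirical cdf of `sample` evaluated at the points in `grid`."""
    return np.array([np.mean(sample <= z) for z in grid])

def cdf_distance(x_i, x_j, grid):
    """Normalized sum of absolute cdf differences (distances 2 and 3)."""
    return np.sum(np.abs(ecdf(x_i, grid) - ecdf(x_j, grid))) / len(grid)

rng = np.random.default_rng(42)
obs_i, obs_j = rng.gamma(2.0, 2.0, 150), rng.gamma(2.5, 2.5, 150)     # wind obs
err_i, err_j = rng.normal(0.3, 1.2, 150), rng.normal(-0.1, 1.6, 150)  # fc errors

obs_grid = np.arange(0.0, 25.0, 0.5)    # m/s values at which cdfs are compared
err_grid = np.arange(-8.0, 8.0, 0.5)

d2 = cdf_distance(obs_i, obs_j, obs_grid)   # distance 2: station climatology
d3 = cdf_distance(err_i, err_j, err_grid)   # distance 3: forecast errors
d4 = d2 + d3                                # distance 4: combination
print(f"d2 = {d2:.3f}   d3 = {d3:.3f}   d4 = {d4:.3f}")
```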
due to the differences in the density of the observation station network ,the stations similar to the reference station at ouessant are spread out over larger geographical distances compared to the respective stations similar to the one at vienna .therefore , data from stations with potentially significantly different climatological properties might be added to the training sets for parameter estimation .further , as an alternative to the distance - based approach we propose a novel semi - local approach based on cluster analysis . here , the observation sites are grouped into clusters , and parameter estimation is performed for each cluster individually using only ensemble forecasts and validating observations at stations within the given cluster . to determine the clusters of observation stations we apply -means clustering ( see , e.g. , * ? ? ?* ) to various choices of feature sets which are based on climatological characteristics of the observation stations and the distribution of forecast errors , and are described in more detail below . in comparison to the distance - based method ,the clustering - based semi - local approach is computationally much more efficient , as the parameter estimation is only performed for distinct training sets for each given day , whereas the distance - based approach requires individual estimation of the coefficients at each of the 1738 stations with partially overlapping training sets .further , the similarities between the observation stations are obtained in a more efficient way as clustering is computationally less demanding compared to the computation of pair - wise distances between all observation stations ( up to symmetry ) . ] . in particular , clustering - based semi - local estimation is also computationally more efficient than local parameter estimation which arises as a special case with clusters of size 1 each .the above discussion does not account for the computational costs of the actual clustering . however , there exist efficient algorithms for -means clustering , e.g. , the hartigan - wong algorithm , which converge rapidly for the data at hand .the costs of the actual clustering are thus negligible compared to the computational costs of the numerical parameter estimation . in contrast to the distance - based approach , this allows for iteratively determining the clusters anew in every training period without a significant increase in the overall computational costs .this adaptive approach will be pursued for all clustering - based semi - local models discussed below .we denote the number of features used in the -means clustering procedure by and consider the following feature sets ._ feature set 1 : station climatology_. let denote the empirical cdf of the wind speed observations at station over the rolling training period consisting of the preceding forecast cases at this station .the feature set for station is given by the set of equidistant quantiles of at levels ._ feature set 2 : forecast errors_. denote the empirical cdf of forecast errors by . with a slight abuse of the above notation, the set in the expression denotes the preceding dates as the clusters are iteratively determined anew in every rolling training period . the feature set for station then given by the set of equidistant quantiles of at levels ._ feature set 3 : combination of feature sets 1 and 2_. 
to define a feature set that depends on both the station climatology and the distribution of forecast errors , we combine equidistant quantiles of at levels and equidistant quantiles of at levels into one single set of size , where and are defined as follows .if is an even number , let , otherwise let and .alternative choices of feature sets where the geographical location of the observation stations is included in the definition have also been investigated , but result in a reduction of the predictive performance and are thus omitted in the following discussion .figure [ fig : clustering - illustration ] illustrates the obtained clusters of observation stations for the different feature sets with a fixed number of clusters . for the feature set defined in terms of the distribution of the observations ( feature set 1 , figure [ fig : clustering - illustration]a ) , one can observe two larger clusters distributed over central europe , where one cluster mainly contains stations in germany and france , while the other one contains most of the stations in the alps and continental eastern europe .the remaining clusters are predominantly centered around the united kingdom and coastal regions of france and northern europe .if the clusters are determined based on forecast errors ( feature set 2 , figure [ fig : clustering - illustration]b ) , the stations are mainly grouped into three almost equally large clusters , where the most notable difference compared to the fist feature set is the predominant presence of the third cluster in north - eastern europe .further , the stations in the united kingdom and coastal regions of europe now mostly belong to the two biggest clusters rather than forming separate sets .clustering based on a combination of the distribution of the observations and forecast errors ( feature set 3 , figure [ fig : clustering - illustration]c ) results in a pattern of cluster memberships in between the other two choices . in particular , the alpine regions , continental europe and the coastal regions and the united kingdomshow the most clear - cut separation compared to the other feature sets .as discussed in section [ subs : subs3.1 ] , the link functions connecting the parameters of the predictive distribution of the emos models and the ensemble forecasts depend on the stochastic properties of the ensemble .the glameps ensemble consists of four subensembles which differ in the choice of numerical model and parametrization scheme .each subensemble contains a control and ( non - lagged and lagged ) perturbed members .this induces a natural grouping into twelve groups : the members within each individual group are exchangeable and should share a common set of emos coefficients , resulting in a predictive tn distribution with location and scale , which is a special case of model .this model has a total number of parameters to be estimated and will be referred to as _full model_. a natural simplification is to assign the same parameter values to the lagged and non - lagged exchangeable ensemble members of a subensemble , which results in a reduced model with location and parameters to be estimated .this model will be referred to as _ lag - ignoring model_. finally , we also investigate the fully exchangeable situation where the existence of the aforementioned groups is ignored , and all ensemble members are assumed to form a single exchangeable group . 
in this case the predictive distribution is given by where again , denotes the ensemble mean , and we refer to this model as _ simplified model_. both semi - local parameter estimation techniques require the choice of various tuning parameters given by the length of the rolling training period , the number of similar stations to be taken into account , the number of features and the number of clusters .we now discuss the effect of these tuning parameters on the predictive performance of the forecast models . to that end , the full , lag - ignoring and simplified model were estimated using the distance - based and clustering - based semi - local parameter estimation techniques described in section [ subs : subs3.3 ] .conclusions are drawn based on the mean crps over the evaluation period .for comparison , note that the average crps values of the glameps ensemble and the best regional tn model with a training period of 80 days are 1.058 and 0.955 , respectively . due to numerical stability issues in the parameter estimation ,a comparison to local tn models is impossible , an estimate of the average crps of the locally estimated simplified tn model with a training period of 80 days can be obtained if the problematic parameter estimates ( around 0.1% of the total number of forecast cases ) are replaced by corresponding estimates from preceding forecast cases .this estimate of the average crps of the local simplified model with such subsequent modifications equals 0.790 ( see section [ subs : subs4.3 ] ) . on the predictive performance of the distance - based semi - local models for three choices of training period lengths ( in days ) . missing line segments indicate unsuccessful parameter estimation for these choices of tuning parameters.,title="fig:",scaledwidth=99.0% ] . in the distance - based semi - local approach to parameter estimation ,the size of the training set for a given station is increased by including corresponding training data from the most similar stations , i.e. , the stations with the smallest distances .note that for the distance functions defined in section [ subs : subs3.3 ] , , a value of , e.g , thus means that the training set for station consists of data from this station , and of data from the 4 stations with the smallest distances to station .figure [ fig : tuningpar - l - distbased ] illustrates the effect of the number of close stations on the predictive performance measured as mean crps of the three proposed models for selected lengths of the training period . due to the large overlap of close stations determined by distance functions 1 and 5 ( see , e.g. 
, figure [ fig : distances - illustration ] ) we omit the corresponding plots for distance 5 which closely resemble the plots for distance 1 and remark that similar conclusions apply , in particular for small values of .note the varying scales of the plots in the first and second row of figure [ fig : tuningpar - l - distbased ] caused by the different predictive performances of the respective models .for distance 1 which is based on geographical locations , the predictive performance generally decreases with the number of similar stations added to the training sets , except for the more complex lag - ignoring and full models and shorter training periods , where the best crps values are attained for values around .clearly , the inclusion of similar stations then allows for unproblematic parameter estimation , but as few stations as possible should be chosen in order to achieve results as close as possible to the desirable ( however , even for long training periods impossible ) local parameter estimation .similar conclusions apply for the climatology - based distance 2 , however , the predictive performance of these models is notably better .a different pattern emerges for distance 3 which is based on the distribution of forecast errors . particularly for the more complex lag - ignoring and full model ,the best predictive performances are achieved with choices of between 10 and 30 , depending on the length of the training periods , whereas smaller values of result in worse predictions .note that with these choices of , the predictive performance of the semi - local models is better than the estimate of the predictive performance for the ( simplified ) local model . for distance 4 , a combination of distance functions 2 and 3 , similar conclusions apply with optimal values of between 10 and 25 .semi - local models based on this similarity measure show the best predictive performance and are also able to outperform the simplified local tn model for a wide range of tuning parameter choices .the effect of the length of the rolling training periods consisting of the preceding days can also be seen from figure [ fig : tuningpar - l - distbased ] where each individual plot contains three different choices of .together with further investigations of plots of the average crps against the employed training period lengths ( not shown ) , one can observe that only has a small effect on the predictive performance of the models .for all considered distance functions , the predictive performance increases with longer training periods , in particular for the more complex models and smaller values of .this is to be expected from the smaller size of the training sets as parameter estimation becomes problematic for shorter training periods and few additional forecast cases from similar stations taken into account .the simplified models show a slight decrease in predictive performance for training periods longer than 4050 days , however , the differences are negligible compared to those between models based on varying choices of distance functions or varying numbers of similar stations taken into account .the overall best predictive performances across the three considered model formulations are achieved with training period lengths of 80 days . 
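The training-set augmentation evaluated above amounts to a simple pooling step. The sketch below selects, for a given station, the entries of a precomputed distance matrix with the smallest values and stacks the corresponding forecast cases of the rolling window; array shapes follow the GLAMEPS setting (1738 stations, 80-day window, 52 members), but the arrays themselves are mock data.

```python
# Sketch of the distance-based semi-local training-set construction: pool the
# rolling-window data of a station with that of its closest stations before
# the EMOS coefficients are estimated.  All arrays are mock data.
import numpy as np

def semi_local_training_set(station, dist, fc_train, obs_train, n_pool):
    """Pool the training data of `station` and its n_pool-1 closest stations."""
    order = np.argsort(dist[station])       # the station itself has distance 0
    chosen = order[:n_pool]
    X = fc_train[chosen].reshape(-1, fc_train.shape[-1])
    y = obs_train[chosen].reshape(-1)
    return X, y

rng = np.random.default_rng(0)
n_sta, n_days, n_mem = 1738, 80, 52
dist = rng.random((n_sta, n_sta)); np.fill_diagonal(dist, 0.0)
fc = rng.gamma(2.0, 2.0, (n_sta, n_days, n_mem))
obs = rng.gamma(2.0, 2.0, (n_sta, n_days))

X, y = semi_local_training_set(station=10, dist=dist,
                               fc_train=fc, obs_train=obs, n_pool=25)
print(X.shape, y.shape)   # (25*80, 52) forecast cases and (25*80,) observations
```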
[ figure [ fig : tuningpar - k - clustering ] : effect of the number of clusters on the predictive performance of clustering - based semi - local models for three choices of training period lengths ( in days ) . all models are estimated with feature sets of size . missing line segments indicate unsuccessful parameter estimation for these choices of tuning parameters . ] in the clustering - based semi - local approach , -means clustering based on the different feature sets ( discussed in section [ subs : subs3.3 ] ) is employed to group the observation stations into clusters . the lower computational costs of this approach allow for iterative computation of the clusters in every training period , whereas the similarities between stations used in the distance - based semi - local approach are computed over a fixed period of data from october 2013 to february 2014 preceding the verification period . this adaptive application of -means clustering leads to improvements in mean crps of around 1 - 5% compared to the use of a fixed set of clusters determined over the first period of available data . figure [ fig : tuningpar - k - clustering ] illustrates the effect of the number of clusters on the predictive performance of the clustering - based semi - local models . choosing obviously corresponds to regional parameter estimation . for all three feature sets considered here , the predictive performance increases for larger values of up to around 100 clusters , except for shorter training periods . clearly , a larger number of clusters allows for a more refined grouping into sets of observation stations with similar characteristics . the predictive performance decreases for all considered models and training period lengths if much more than clusters are used . this behavior is to be expected as the clusters become smaller and parameter estimation becomes numerically unstable , particularly for the lag - ignoring and full models . note that depending on training period length and feature set , only small improvements can be observed for values exceeding around 40 to 70 clusters . as observed for the distance - based models , the clustering - based semi - local models defined in terms of the distribution of forecast errors and the station climatology ( feature sets 2 and 3 ) are able to outperform the local model over a wide range of tuning parameter choices , except for short training periods . the worse predictive performance for shorter training periods is to be expected , as the smaller amount of forecast cases used to determine the clusters might result in a less accurate partitioning of the observation stations . compared to the distance - based approach , it can be observed that for some numbers of clusters , training period lengths below 80 days are optimal , in particular for the lag - ignoring and full model . however , in comparison to the effect of different choices of feature sets , the effect of the length of the training period is negligible . [ figure [ fig : tuningpar - n - clustering ] : effect of the feature set size on the predictive performance of clustering - based semi - local models for three choices of numbers of clusters . all models are estimated over a training period of 80 days . missing line segments indicate unsuccessful parameter estimation for these choices of tuning parameters . ] thus far , all clustering - based semi - local models shown in figure [ fig : tuningpar - k - clustering ] were estimated for a fixed feature set size of .
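the grouping step of the clustering - based approach can be sketched as follows ; the feature vector used here ( quantiles of the station climatology and of past forecast errors , loosely corresponding to feature sets 2 and 3 ) and the use of scikit - learn 's k - means implementation are illustrative choices , not a description of the exact implementation used for the glameps data .

```python
import numpy as np
from sklearn.cluster import KMeans

def station_features(obs, fc_errors, levels=np.linspace(0.1, 0.9, 12)):
    # illustrative feature vector: equidistant quantiles of the observation
    # climatology and of the forecast errors (ensemble mean minus observation)
    return np.concatenate([np.quantile(obs, levels), np.quantile(fc_errors, levels)])

def cluster_stations(feature_matrix, n_clusters=100, seed=1):
    """group stations by k-means on standardized feature vectors.

    feature_matrix : (n_stations, n_features) array, one row per station
    returns        : cluster label of every station; one set of coefficients is
                     then estimated per cluster on the pooled training data of
                     its member stations
    """
    x = (feature_matrix - feature_matrix.mean(axis=0)) / feature_matrix.std(axis=0)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(x)
```

since the clustering itself is cheap , the labels can be recomputed in every rolling training period , which corresponds to the adaptive variant discussed above .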
to illustrate the effect of on the predictive performance , figure [ fig : tuningpar - n - clustering ] shows the average crps of the clustering - based models as functions of the number of features considered in -means clustering for three choices of .given that sufficiently many features ( around 5 - 10 depending on the other tuning parameters ) are used , the feature set size has only a small effect on the predictive performance compared to different choices of or .reasons for this behavior clearly include the aforementioned robustness of the obtained cluster memberships with regards to .the best results across all considered tuning parameter combinations are generally obtained for feature set sizes between 20 and 40 thus justifying our previous choice of .the predictive performance of the semi - local models is evaluated by computing the verification scores introduced in section [ subs : subs3.2 ] over the verification period march 1 may 18 , 2014 .we use the local climatological forecasts given by the observations at the corresponding station during the rolling training periods , the raw glameps ensemble predictions , and probabilistic forecast by the regional tn model as benchmark models .while locally estimated models are desirable , the estimation of these models is highly problematic for the glameps data due to the issues discussed earlier . even for the simplified model with a maximum training period length of 80 days ,numerical issues occur in the local parameter estimation , e.g. , some shape parameters are estimated to be 0 .an estimate of the predictive performance of the local model can be obtained by replacing these problematic parameter estimates by the preceding ones . however , note that these subsequent adjustments are not necessary for the semi - local or regional models .further , neither the lag - ignoring nor the full local tn model can be successfully estimated as the employed numerical optimization algorithms fail to converge or produce numerical errors . in the interest of brevity, we limit our discussion to the simplified and the lag - ignoring models . it can be seen from figures [ fig : tuningpar - l - distbased][fig : tuningpar - n - clustering ] that the full semi - local models generally result in slightly worse predictive performance compared to the lag - ignoring models , therefore the additional computational costs of taking into account the lagging in the subensembles are not justified .note that different conclusions may apply for other ensemble prediction systems with lagged members . with regards to the tuning parameters for the semi- local approaches , we employ a fixed training period length of 80 days , and use a fixed number of features for -means clustering to ensure comparability across the different models .for the individual distance - based and clustering - based semi - local models we then choose suitable values for the number of most similar stations and the number of clusters from figures [ fig : tuningpar - l - distbased][fig : tuningpar - n - clustering ] ( see section [ sec:4-tuningpar ] for a detailed discussion of the effect of these tuning parameters ) . 
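the verification measures reported below can be computed from the fitted truncated normal predictive distributions along the following lines ; the closed - form crps of the truncated normal is replaced here by a simple sample - based approximation , and the location and scale parameters are assumed to be given ( produced by whichever estimation approach is being verified ) , so all names and numeric choices are illustrative .

```python
import numpy as np
from scipy.stats import truncnorm

def tn_predictive(mu, sigma):
    # truncated normal on [0, inf) with location mu and scale sigma
    a = (0.0 - mu) / sigma
    return truncnorm(a, np.inf, loc=mu, scale=sigma)

def crps_sample(dist, y, n=4000, rng=None):
    # sample-based crps: E|X - y| - 0.5 E|X - X'| (monte carlo approximation)
    rng = np.random.default_rng(rng)
    x = dist.rvs(size=n, random_state=rng)
    x2 = dist.rvs(size=n, random_state=rng)
    return np.mean(np.abs(x - y)) - 0.5 * np.mean(np.abs(x - x2))

def verification_summary(mus, sigmas, obs, level=0.962):
    # mean crps, mae of the predictive median, and coverage / average width of
    # the central prediction interval, mirroring the entries of the table below
    alpha = (1.0 - level) / 2.0
    crps, mae, hits, widths = [], [], [], []
    for mu, sigma, y in zip(mus, sigmas, obs):
        d = tn_predictive(mu, sigma)
        crps.append(crps_sample(d, y))
        mae.append(abs(d.ppf(0.5) - y))
        lo, hi = d.ppf(alpha), d.ppf(1.0 - alpha)
        hits.append(lo <= y <= hi)
        widths.append(hi - lo)
    return dict(crps=np.mean(crps), mae=np.mean(mae),
                coverage=100 * np.mean(hits), width=np.mean(widths))
```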
while the chosen tuning parameter combinations might not be the overall optimal values for the individual models , the results hold for a wide range of tuning parameter choices , as indicated by the sensitivity considerations in section [ sec:4-tuningpar ] .

forecast                                  crps ( m s )   mae ( m s )   coverage ( % )   width ( m s )
local climatology                         1.127          1.580         96.6             7.96
glameps ensemble                          1.058          1.376         67.1             3.50
regional tn :
  simpl .                                 0.957          1.324         90.3             6.36
  lag - ign .                             0.955          1.320         90.3             6.33
local tn :
  simpl .                                 0.790          1.100         88.7             5.12
distance - based semi - local tn :
  d1 simpl .                              0.873          1.218         90.2             5.99
  d1 lag - ign .                          0.887          1.236         89.2             5.71
  d2 simpl .                              0.816          1.136         90.0             5.61
  d2 lag - ign .                          0.815          1.136         89.6             5.42
  d3 simpl .                              0.774          1.083         90.3             5.25
  d3 lag - ign .                          0.774          1.083         90.2             5.21
  d4 simpl .                              0.766          1.069         89.9             5.16
  d4 lag - ign .                          0.770          1.075         90.0             5.18
  d5 simpl .                              0.874          1.220         90.2             5.95
  d5 lag - ign .                          0.895          1.248         89.8             5.91
clustering - based semi - local tn :
  c1 simpl .                              0.836          1.162         89.8             5.68
  c1 lag - ign .                          0.832          1.156         89.6             5.55
  c2 simpl .                              0.789          1.103         89.9             5.25
  c2 lag - ign .                          0.787          1.099         89.8             5.22
  c3 simpl .                              0.782          1.091         89.7             5.19
  c3 lag - ign .                          0.781          1.090         89.7             5.17
[ table : perf - measures ]

table [ table : perf - measures ] shows the average crps , mae of median values , and coverage and average width of 96.2% prediction intervals for the considered models . the raw glameps ensemble predictions outperform the climatological forecasts and provide sharp prediction intervals , however at the cost of being uncalibrated . regional tn models are able to improve the calibration of the ensemble and result in around 10% better mean crps values ; however , the semi - local approaches significantly outperform the regional approaches for all considered models and tuning parameter choices , see also figures [ fig : tuningpar - l - distbased ] and [ fig : tuningpar - k - clustering ] . among the distance - based semi - local models , the best predictive performances are obtained by distance functions 3 and 4 , which utilize the distribution of forecast errors and combinations with the station climatology to determine similarities between stations . note that these semi - local models are also able to outperform the local tn model for a wide range of tuning parameter choices without requiring subsequent corrections , while further allowing for a successful estimation of the more complex lag - ignoring and full semi - local models . the semi - local models based on distance functions 1 and 5 exhibit similar predictive performances , which are slightly worse compared to the other distances , but are still able to outperform the regional model . the similarity is clearly caused by the large overlap of selected similar stations , see figure [ fig : distances - illustration ] . except for distance 2 , the simplified model performs slightly better than the lag - ignoring model ; however , the differences are negligible compared to the differences between the different model estimation approaches . we obtain similar results for the clustering - based semi - local models , which perform slightly worse compared to the corresponding distance - based models but still significantly outperform the regional models and the local model if the clusters are determined on the basis of forecast errors and station climatology .
here , the lag - ignoring models show better predictive performances compared to the simplified models , but again , the differences are small compared to the influence of the choice of feature sets . [ figure [ fig : pits ] : pit histograms for the corresponding tuning parameter choices . ] figure [ fig : pits ] shows pit histograms of the lag - ignoring regional , the simplified local , and the distance - based and clustering - based semi - local models with the best average crps values ( see table [ table : perf - measures ] ) . compared to the verification rank histogram of the raw glameps ensemble forecasts ( see figure [ fig : fig1]b ) , all postprocessing models exhibit significantly improved calibration , with pit histograms showing much smaller deviations from the desired uniform distribution . the hump - shaped pit histogram of the regional tn model indicates a slight under - prediction of lower wind speed values . the local and semi - local models are able to correct for this deficiency and show slightly better calibration , in particular for the semi - local models . most of the models in table [ table : perf - measures ] show similarly shaped pit histograms . alternative distributional choices such as log - normal or generalized extreme value distributions might lead to further improvement in calibration , see e.g. . to conclude , we note that the overall best predictive performance is achieved by distance - based semi - local models utilizing both the distribution of observations as well as the distribution of forecast errors at the observation stations , closely followed by clustering - based models with feature sets defined in a similar way . these models show better predictive performances than the local model , and can be estimated without any numerical issues . figures [ fig : tuningpar - l - distbased ] and [ fig : tuningpar - k - clustering ] indicate that these conclusions hold for a wide range of tuning parameter choices . with regard to the two semi - local approaches , the respective distance - based models generally show slightly better predictive performance ; however , the estimation of the clustering - based models is computationally much more efficient and allows for an iterative application of the clustering algorithm in each training period . we have proposed two semi - local approaches to parameter estimation for ensemble postprocessing where the training data for a given observation station are augmented with data from stations with similar characteristics . the distance - based approach roughly follows the ideas of and uses distance functions to determine the similarities between observation stations , whereas the novel clustering - based approach employs -means clustering to obtain groups of similar stations . various choices of distance functions , feature sets and tuning parameters have been tested . the best results are obtained for semi - local models where the similarities between stations are determined based on combinations of the climatological distribution of observations as well as the distribution of forecast errors at the given stations .
while all semi - local models show significantly better predictive performance than the regional models , these best models are also able to outperform the locally estimated model . the semi - local parameter estimation methods further allow for estimating more complex models without numerical issues , whereas local estimation is only possible for simplified model formulations with a reduced number of parameters and still requires subsequent modifications . the semi - local models thus offer several advantages over the standard approaches to parameter estimation and are straightforward to implement . the clustering - based semi - local model estimation is further computationally much more efficient than local model estimation , which arises as a special case with clusters of size 1 each . while distance - based semi - local models show slightly better predictive performance compared to the clustering - based models , the estimation requires substantially more computational resources . in particular , an adaptive computation of the similarities in every training period is not feasible for the distance - based models . compared to the work of , we propose several alternative distance functions and use the distance - based approach for observations at specific stations instead of gridded data . it would be interesting to apply the novel similarity measures as well as the clustering - based approach to grid - based forecast and analysis data and assess potential differences . in particular , distance functions incorporating the distribution of forecast errors ( distances 3 and 4 ) result in significantly better predictive performance for the glameps data and might also offer improvements over the climatology - based distance function used by ( similar to distance function 2 ) when applied to gridded data . with regard to the results for the employed distance functions , it might appear somewhat surprising that models based on similarities defined by characteristics of the ensemble ( mean and variance ) , as measured by distance 5 , do not result in improvements compared to simple location - based similarities ( distance 1 ) . however , this might be due to the fact that these characteristics of the ensemble are substantially influenced by the locations of the stations , and the training sets thus largely overlap with those of the location - based distance 1 . these results might change for other ensemble prediction systems . further , potential improvements might be obtained by including different summary statistics of the ensemble , e.g. , by adding information about the within - group variances of the subensembles , or quantiles of the distribution of ensemble forecasts . the group memberships of the observation stations in the clustering - based semi - local models are all determined by applying -means clustering . alternative clustering methods exist and might potentially lead to improvements ( for reviews and comparisons see , e.g. , the clustering literature ) . we did not incorporate information on the geographical locations of the stations or characteristics of the ensemble into the selected feature sets , as initial tests indicated a worse predictive performance . for different ensemble prediction systems , these alternative choices of feature sets may lead to further improvements .
in the interest of brevity , we limited our discussion to the standard truncated normal emos model proposed by . an extension of the similarity - based semi - local parameter estimation approach to other postprocessing models might in particular be interesting for complex models where larger numbers of parameters have to be estimated and local parameter estimation might thus not be feasible ( for recent examples see , e.g. , the postprocessing literature ) . a related line of work proposes analog - based local emos models , where the training set for a given station is chosen by selecting forecast cases with similar ensemble forecasts for that station . this analog - based approach thus utilizes information for a given station in an optimal way by selecting subsets of the local training sets , whereas our semi - local models combine information from multiple observation stations based on similarities . while the analog - based modification of the local parameter estimation method shows good predictive performance in a case study on hub height wind speed , it requires sufficiently long training periods for locally selecting similar forecast cases . the implementation of this analog - based approach is thus infeasible for the glameps data ; however , comparisons and combinations with the similarity - based semi - local approaches proposed here are of interest and might result in further improvement in predictive performance . * acknowledgments . * lerch gratefully acknowledges support by the volkswagen foundation through the program `` mesoscale weather extremes theory , spatial modelling and prediction ( wex - mop ) '' and by deutsche forschungsgemeinschaft ( dfg ) through the research training group `` rtg 1953 statistical modeling of complex systems and processes '' . sándor baran was supported by the jános bolyai research scholarship of the hungarian academy of sciences . the authors are indebted to tilmann gneiting and michael scheuerer for useful suggestions and remarks , and to maurice schmeits , jan barkmeijer and john bjørnar bremnes for providing the glameps data and assistance in data handling . deckmyn , a. ( 2014 ) introducing glamepsv2 . _ aladin forecasters meeting , _ ankara , turkey , 10 - 11 september , 2014 . available at : http://www.cnrm.meteo.fr/aladin/meshtml/fm2014/presentation/aladinfm_ad_be.pdf . gneiting , t. , raftery , a. e. , westveld , a. h. and goldman , t. ( 2005 ) calibrated probabilistic forecasting using ensemble model output statistics and minimum crps estimation . _ mon . weather rev . _ * 133 * , 1098 - 1118 . hagedorn , r. , buizza , r. , hamill , t. , leutbecher , m. and palmer , t. ( 2012 ) comparing tigge multimodel forecasts with reforecast - calibrated ecmwf ensemble forecasts . _ q. j. r. meteorol . soc . _ * 138 * , 1814 - 1827 . hamdi , r. , degrauwe , d. , duerinckx , a. , cedilnik , j. , costa , v. , dalkilic , t. , essaouini , k. , jerczynki , m. , kocaman , f. , kullmann , l. , mahfouf , j .- f . , meier , f. , sassi , m. , schneider , s. , va , f. and termonia , p. ( 2014 ) evaluating the performance of surfexv5 as a new land surface scheme for the aladincy36 and alaro-0 models . _ geosci . model dev . _ * 7 * , 23 - 39 . iversen , t. , deckmyn , a. , santos , c. , sattler , k. , bremnes , j. b. , feddersen , h. and frogner , i .- l . ( 2011 ) evaluation of glameps - a proposed multimodel eps for short range forecasting . _ tellus a _ * 63 * , 513 - 530 . möller , a. , thorarinsdottir , t. l. , lenkoski , a. and gneiting , t. ( 2015 ) spatially adaptive , bayesian estimation for probabilistic temperature forecasts . _ working paper_.
available at http://arxiv.org/abs/1507.06517 ruiz , j. j. and saulo , c. ( 2012 ) how sensitive are probabilistic precipitation forecasts to the choice of calibration algorithms and the ensemble generation method ?part i : sensitivity to calibration methods ._ meteorol ._ * 19 * , 302313 .sass , b. h. ( 2002 ) a research version of the straco cloud scheme ._ dmi tech ._ 02 - 10 .danish meteorological institute , copenhagen , denmark , 25 pp . available at : http://www.dmi.dk/dmi/index/viden/dmi-publikationer/tekniskerapporter.htm .schmeits , m. j. and kok , k. j. ( 2010 ) a comparison between raw ensemble output , ( modified ) bayesian model averaging and extended logistic regression using ecmwf ensemble precipitation reforecasts .weather rev . _ * 138 * , 41994211 .swinbank , r. , kyouda , m. , buchanan , p. , froude , l. , hamill , t. m. , hewson , t. d. , keller , j. h. , matsueda , m. , methven , j. , pappenberger , f. , scheuerer , m. , titley , h. a. , wilson , l. and yamaguchi , m. ( 2015 ) the tigge project and its achievements .soc . _ http://dx.doi.org/10.1175/bams-d-13-00191.1 thorarinsdottir , t. l. and gneiting , t. ( 2010 ) probabilistic forecasts of wind speed : ensemble model output statistics by using heteroscedastic censored regression. _ j. r. stat .a _ * 173 * , 371388 .
|
weather forecasts are typically given in the form of forecast ensembles obtained from multiple runs of numerical weather prediction models with varying initial conditions and physics parameterizations . such ensemble predictions tend to be biased and underdispersive and thus require statistical postprocessing . in the ensemble model output statistics ( emos ) approach , a probabilistic forecast is given by a single parametric distribution with parameters depending on the ensemble members . this article proposes two semi - local methods for estimating the emos coefficients where the training data for a specific observation station are augmented with corresponding forecast cases from stations with similar characteristics . similarities between stations are determined using either distance functions or clustering based on various features of the climatology , forecast errors , ensemble predictions and locations of the observation stations . in a case study on wind speed over europe with forecasts from the grand limited area model ensemble prediction system , the proposed similarity - based semi - local models show significant improvement in predictive performance compared to standard regional and local estimation methods . they further allow for estimating complex models without numerical stability issues and are computationally more efficient than local parameter estimation . _ key words : _ clustering , continuous ranked probability score , ensemble model output statistics , ensemble postprocessing , probabilistic forecasting , truncated normal distribution , weather forecasting , wind speed .
|
sparse signal reconstruction problems appear in many engineering fields . in most applications ,signals are often measured from an undersampled set of noisy linear transformations .typically , the problem of interest is the reconstruction of a _ sparse _ signal from a set of noisy measurements which is given by where is the measurement matrix , and is the noise vector with representing the noise magnitude .this problem has arisen in many areas , such as signal processing , communications theory , information science , and statistics , and is widely known as _ compressive sensing _ . in the past few years, many recovery algorithms have been proposed , see for a recent exhaustive list of the algorithms .one popular suboptimal and low - complexity estimator is -regularized least - squares ( ls ) , a.k.a . least absolute shrinkage and selection operator ( lasso ) , which seeks by in ( [ eq : rls ] ) , is a design parameter , and the _ _ complex__-norm is defined as , which is different from the complex -norm .a simple extension of lasso to the complex setting is to consider the complex signal and measurements as a -dimensional real - valued signal and -dimensional real - valued measurements , respectively .however , several papers ( e.g. , and the references therein ) have shown that lasso based on the complex -norm is superior to the simple real - valued extension when the real and imaginary components of the signals tend to either zero or nonzero simultaneously .therefore , we consider lasso using the the complex -norm definition of ( [ eq : rls ] ) rather than the simple real - valued extension of lasso . ]-norm is defined as the optimization problem of ( [ eq : rls ] ) is convex , and there are various fast and efficient solvers proposed .for example , the proximal gradient method in ( * ? ?? * section 7.1 ) resolves ( [ eq : rls ] ) by iteratively performing where is the iteration counter , is the chosen step size , and is a soft - thresholding function in which if and is otherwise .evaluating requires one matrix - vector multiplication by and another by , plus a ( negligible ) vector addition .the complexity for evaluating the soft - thresholding function is negligible .this kind of iterative thresholding algorithm requires few computations per - iteration , and therefore enables the application of lasso in large - scale problems .much of the theoretical work on ( [ eq : rls ] ) has focused on studying how aggressively a sparse signal can be undersampled while still guaranteeing perfect signal recovery .the existing results include those based on the restricted isometry property ( rip ) , polyhedral geometry , message passing , and the replica method .although rip provides sufficient conditions for sparse signal reconstruction , the results provided by rip analysis are often conservative in practice .in contrast , using combinational geometry , message passing , or the replica method , it is possible to compute the exact necessary and sufficient condition for measuring the sparsity - undersampling tradeoff performance of ( [ eq : rls ] ) in the limit .however , the theoretical work largely focused on the case of having a measurement matrix with independent and identically distributed ( i.i.d . ). a natural question would be `` _ how does the choice of the measurement matrix affect the typical sparsity - undersampling tradeoff performance ? 
_ '' .there are strong reasons to consider different types of measurement matrix .although the proximal gradient method performs efficiently in systems of medium size , the implementation of ( [ eq : proxgrad ] ) will become prohibitively complex if the signal size is very large .this is not only because performing ( [ eq : proxgrad ] ) requires matrix - vector multiplications up to the order of but it also requires a lot of memory to store the measurement matrix .there is strong desire to consider special forms of measurement matrix permitting faster multiplication process and requiring less memory .one such example is the randomly generated discrete fourier transform ( dft ) matrices or discrete cosine transform ( dct ) matrices . using dft as the measurement matrix , fast fourier transform ( fft ) can be used to perform the matrix multiplications at complexity of and the measurement matrix is _ not _ required to be stored .the entries of a dft matrix are however not i.i.d .. in the noiseless setup ( i.e. , ) , it has been revealed that the measurement matrix enjoys the so - called universality property ; that is , measurement matrices with i.i.d .ensembles and rotationally invariant ( or row - orthonormal ) ensembles exhibit the same recovery capability ( or the phase transition ) . the universality phenomenon is further extended to the measurement matrices which are constructed by concatenating several randomly square orthonormal matrices . although the universality phenomenon of lasso is known for a broad class of measurement matrices in the noiseless setup , little progress has been made in the practical _ noisy _ setting . in the noisysetting , perfect recovery is rare so we are interested in the ( average ) mean squared error ( mse ) of reconstruction defined by , where denotes the average with respect to and . in , an analytical expression for mse in lasso reconstruction was obtained when the measurement matrix is a row - orthonormal matrix generated uniformly at random . nevertheless , the emphasis of was in support recovery rather than the mse of reconstruction .it was not until very recently that the superiority of row - orthonormal measurement matrices over their i.i.d .gaussian counterparts for the noisy sparse recovery problem was revealed in .this characteristics is in contrast to the noiseless setup mentioned above .meanwhile , the authors of supported the similar argument and further argued that one can still claim universality in the noisy setup if we restrict the measurement matrices to similar row - orthonormal type .these arguments showed that the choice of measurement matrices does have an impact in the mse of reconstruction when noise is present . 
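to make the preceding discussion concrete , the sketch below implements the iterative soft - thresholding ( proximal gradient ) recursion for the complex lasso with a row - selected unitary dft as the measurement operator , so that every matrix - vector product is an fft of cost o(n log n) and the matrix itself is never stored . the step size , regularization weight and problem sizes are illustrative assumptions , not tuned values .

```python
import numpy as np

def partial_dft_operators(n, rows):
    # forward operator: keep `rows` of the unitary dft of x; adjoint: embed the
    # residual at those rows and apply the inverse (= conjugate transpose) dft
    def A(x):
        return np.fft.fft(x, norm="ortho")[rows]
    def At(r):
        full = np.zeros(n, dtype=complex)
        full[rows] = r
        return np.fft.ifft(full, norm="ortho")
    return A, At

def soft_threshold(z, t):
    # complex soft-thresholding: shrink the magnitude, keep the phase
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def ista(y, A, At, n, lam, step=1.0, n_iter=500):
    # x <- soft_threshold( x + step * A^H (y - A x), step * lam )
    x = np.zeros(n, dtype=complex)
    for _ in range(n_iter):
        x = soft_threshold(x + step * At(y - A(x)), step * lam)
    return x

# illustrative experiment: sparse complex signal measured by a partial dft
rng = np.random.default_rng(0)
n, m, k, sigma = 4096, 1024, 100, 0.01
support = rng.choice(n, k, replace=False)
x0 = np.zeros(n, dtype=complex)
x0[support] = (rng.standard_normal(k) + 1j * rng.standard_normal(k)) / np.sqrt(2)
A, At = partial_dft_operators(n, rng.choice(n, m, replace=False))
y = A(x0) + sigma * (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
x_hat = ista(y, A, At, n, lam=0.02)
print("reconstruction mse:", np.mean(np.abs(x_hat - x0) ** 2))
```

since the rows of a unitary matrix have unit spectral norm , a unit step size is admissible here ; for general measurement matrices the step size has to be chosen below the reciprocal of the squared spectral norm .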
despite these previous works ,the study of lasso in the case of orthonormal measurement matrices remains incomplete , for the following reasons .first , in many applications of interest , the measurement matrices are constructed by selecting a set of columns and rows from a standard orthonormal matrix as depicted in figure [ fig : m - orthogonal](b ) , which we call it `` type - b '' matrix .let a standard orthonormal matrix be an unitary matrix .then we have in a type - b matrix .note that it is also possible to obtain row - orthonormal matrices by selecting a set of rows from orthonormal matrices rather than selecting a set of rows _ and _ columns , and in this case , we refer to such row - orthonormal matrix as `` type - a '' matrix , see figure [ fig : m - orthogonal](a ) .one prominent application of type - b matrices is the sparse channel estimator in orthogonal frequency - division multiplexing ( ofdm ) systems . in that case, represents a time - domain channel vector and is a partial dft matrix .another popular application arises in compressive sensing imaging , where randomly sampling a dft matrix from its row is common practice . nonetheless , _additional _ selection in columns is often needed because the signal size would be smaller than the size of available fft operators and the signal will be modified by zero - padding to fit the available fft size . in that case , the measurement matrix corresponds to the matrix formed by selecting a set of columns from row - orthonormal matrices .while type - b measurement matrices are widely employed in a vast number of sparse recovery problems , surprisingly , little is known on the lasso performance based on such measurement matrices .measurement matrix that is constructed by concatenating several randomly chosen orthonormal bases is another common type , which is referred to as `` type - c '' matrix .such construction can have several variations as shown in figure [ fig : m - orthogonal](c ) due to certain implementation considerations .for example , we can exploit parallelism or distributed computation of ( [ eq : proxgrad ] ) by using a parallel matrix - vector multiplication . in that case, each sub - block of the measurement matrix would be taken from a partial block of scrambled dft matrix . in this context, the authors of demonstrated that type - a matrix and type - c.1 matrix ( constructed by concatenating several randomly _ square _ orthonormal matrices ) have the same performance .except for , however , little progress has been made on this type of measurement matrix . in this paper, we aim to provide analytical characterization for the performance of lasso under such measurement matrices .in particular , we derive the mse of lasso in the general type - c setup by using the replica method from statistical physics as in .our mse result encompasses type - a and type - b matrices as special cases .then we compare their performances and behaviors with those for random i.i.d .gaussian matrices .we will show that all the structurally orthogonal matrices ( including types a c ) perform at least as well as random i.i.d .gaussian matrices over arbitrary setups .specifically , we have made the following technical contributions : * we show that type - a matrix has the best mse performance out of all other types of structurally orthogonal matrices and performs significantly better than the i.i.d . 
gaussian matrices .* in contrast to type - a matrices , the row - orthogonality in type - b is no longer preserved if .the mse performance of type - b matrices degrades with decreasing the ratio of while they still perform at least as good as their random i.i.d .gaussian counterparts .* we show that type - a , type - c.1 , and type - c.2 matrices have the same mse performance .specifically , _ horizontally _ concatenating multiple row - orthonormal matrices have the same mse performance as its single row - orthonormal counterpart .this argument extends the result of to the case of concatenating multiple row - orthonormal matrices .further , we reveal that the measurement matrices formed by concatenating several randomly orthonormal bases in _ vertical _ direction result in significant degradation . for example , type - c.4 and type - c.5 matrices have the worst performance among type - c matrices although they are at least as good as their random i.i.d . gaussian counterparts .the remainder of the paper is organized as follows . in sectionii , we present the problem formulation including fundamental definitions of the structurally orthogonal matrices . in section iii , we provide the theoretical mse results of lasso based on the structurally orthogonal matrices .simulations and discussions are presented in section iv and the main results are summarized in section v. _ notations_throughout this paper , for any matrix , {i , j} ] is the fraction of non - zero entries in .that is , the elements of are sparse and are i.i.d . generated according to . for generality , we consider the measurement matrix made of different blocks as outlined in fig .[ fig : type - c - matrix ] , which we refer to it as type - c matrix .the structurally random matrix was also considered by in the context of compressive sensing for different purposes . in the setup , is constructed by vertical and horizontal concatenation of blocks as ,\ ] ] where each is drawn independently from the haar measure of random matrix ( referred to as the _ standard _ orthonormal matrix in this paper ) . to shape in an -dimensional matrix, we randomly select rows and columns from the standard orthonormal matrix . we denote and the `` column selection rate '' and `` row selection rate '' , respectively . also , we define and . to make the setup more flexible, we assume that for the subblock , the standard orthonormal matrix has been multiplied by . by setting the values of appropriately ,each block can be made either only zeros or a partial orthonormal matrix .corresponding to the measurement matrix made of different blocks , the variables of are divided into blocks with variables in each block . meanwhile, the measurements are divided into blocks with measurements in each block . note that we have and .the measurement ratio of the system is given by .to facilitate our analysis based on the tools in statistical mechanics , we use the approach introduced in to reformulate the -regularized ls problem ( [ eq : rls ] ) in a probabilistic framework .suppose that the posterior distribution of follows the distribution where is a constant and is the partition function ( or normalization factor ) of the above distribution function .given the posterior probability of ( [ eq : postdis_rls ] ) , the bayes way of estimating is given by as , the posterior mean estimator ( [ eq : estx ] ) condenses to the global minimum of ( [ eq : rls ] ) , i.e. , . 
in ( [ eq : estx ] ) , ( or equivalently ) is estimated from given that is perfectly known .clearly , depends on and thus is _random_. we are thus interested in the ( average ) mse of given by where denotes an average over . specifically , we define where is defined by ( [ eq : px0 ] ) , and is the conditional distribution of given under ( [ eq : sysmodel ] ) .our aim of this paper is to derive an analytical result for . in the analysis of , we consider , while keeping and fixed and finite for and . for convenience, we refer to this large dimensional regime simply as .notice that the mse depends on the measurement matrix .however , in the large regime , we expect ( or assume ) that the average mse appears to be self - averaging . that is , the mse for any typical realization of coincides with its average over .from ( [ eq : defmse ] ) , the posterior distribution plays a role in the mse . in statistical mechanics, the key for finding the mse is through computing the partition function , which is the marginal of , or its logarithm , known as _free entropy_. following the argument of , it can be shown that is a saddle point of the free entropy .thanks to the self - averaging property in the large dimensional regime , we therefore compute by computing the _ average _ free entropy the similar manipulation has been used in many different settings , e.g. , .the analysis of ( [ eq : freeen ] ) is unfortunately still difficult .the major difficulty in ( [ eq : freeen ] ) lies in the expectations over and .we can , nevertheless , greatly facilitate the mathematical derivation by rewriting as in which we have moved the expectation operator inside the log - function .we first evaluate for an integer - valued , and then generalize it for any positive real number .this technique is known as the replica method , which emerged from the field of statistical physics and has recently been successfully applied to information / communications theory literature .details of the replica calculation are provided in appendix a. we here intend to give an intuition on the final analytical results ( i.e. , proposition [ pro1 ] to be shown later ) .note that the approach presented here is slightly different from that in appendix a. basically , the replica analysis allows us to understand the characteristics of the errors made by lasso by looking at the signal reconstruction via an equivalent scalar version of the linear system ( [ eq : sysmodel ] ) : where the subscript indicates that the equivalent linear system characterizes the signal in block , i.e. , and there are parallel equivalent linear systems of ( [ eq : eqscalsysmodel ] ) in block ; the parameters are arisen in the replica analysis to be given later in proposition [ pro1 ] ; is a random signal generated according to the distribution , is standard complex gaussian ; and is the effective measurement .in particular , our analysis shows that the characteristics of lasso output corresponding to the signal , , can be analyzed via the lasso output of the signal through the effective measurement , where and play the role of the _ effective _ measurement gain and _ effective _ noise level .therefore , following ( [ eq : rls ] ) , the recovery of from by lasso becomes using ( * ? ? 
?* lemma v.1 ) , the optimal solution of ( [ eq : rls_eqscalsysmodel ] ) reads note that depends on and is therefore random .then the mse of is given by , where denotes an average over with as there are parallel equivalent systems in block , the mse of lasso reconstruction in group is given by where the second equality is due to and , and the last equality follows from the fact that and . using ( [ eq : optofxp ] ) and ( [ eq : postdis_eqscalsysmodel ] ) and following the steps of ( * ? ? ?* ( 349)(357 ) ) , one can get the analytical expressions of and , and then result in an analytical expression of .we summarize the results in the following proposition .[ pro1 ] consider a type - c matrix being the measurement matrix .let denote the mse of lasso reconstruction in block , and define [ eq : defgc ] then as , the average mse over the entire vector becomes where with [ eq : mq_pro1 ] in ( [ eq : mq_pro1 ] ) , we have defined where the parameters and are the solutions of the coupled equations [ eq : mqx_pro1 ] where see appendix a. note that except for , the remaining parameters in proposition [ pro1 ] are arisen from the replica analysis and can be regarded auxiliary .the parameters have to be solved in ( [ eq : mqx_pro1 ] ) for all .proposition [ pro1 ] provides not only a new finding but also a unified formula that embraces previous known results .for example , the mse of lasso under type - a measurement matrix in can be obtained if we set and in propositions [ pro1 ] . clearly , by setting ,we are also able to further study the mse of lasso under type - b measurement matrix . in the next section, we will discuss the mses of lasso under type - a and type - b measurement matrices and we compare their performances and behaviors with those for random i.i.d .gaussian matrices .another existing result is related to the type - c.1 measurement matrix in in which and for . in , a type - c.1 orthogonal matrixis referred to as the -orthogonal matrix as the matrix is constructed by concatenating independent standard orthonormal matrices . also , only considered the real - valued setting , where the signal , the measurements , and the measurement matrix are all real - valued . in this case , the -norm is defined as , which is different from the complex -norm ( see footnote ) . in the real - valued setting ,the analytical mse expression of lasso in proposition [ pro1 ] also holds while and in ( [ eq : defgc ] ) should be replaced by [ eq : defgr ] the difference between ( [ eq : defgc ] ) and ( [ eq : defgr ] ) is significant , and can be understood from ( [ eq : rls_eqscalsysmodel ] ) by considering its real - valued counterpart . 
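the single - letter characterization above can also be probed numerically : assuming the equivalent scalar problem takes the rescaled form min_x |y / m - x|^2 + ( lam / m^2 ) |x| , its minimizer is the complex soft - thresholding of y / m with threshold lam / ( 2 m^2 ) , and the per - block mse can then be estimated by monte carlo over the bernoulli - gaussian prior and the effective gaussian noise . the normalization of the scalar problem and all numeric values below are assumptions made for illustration and may differ from the exact form of ( [ eq : rls_eqscalsysmodel ] ) .

```python
import numpy as np

def soft_threshold(z, t):
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def scalar_block_mse(m_eff, noise_var, lam, rho, n_samples=200_000, seed=0):
    """monte carlo mse of the scalar estimator on the effective channel
    y = m_eff * x0 + sqrt(noise_var) * w, with x0 zero with probability
    1 - rho and standard complex gaussian otherwise (assumed prior)."""
    rng = np.random.default_rng(seed)
    cgauss = lambda size: (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    x0 = np.where(rng.random(n_samples) < rho, cgauss(n_samples), 0.0)
    y = m_eff * x0 + np.sqrt(noise_var) * cgauss(n_samples)
    x_hat = soft_threshold(y / m_eff, lam / (2.0 * m_eff ** 2))
    return np.mean(np.abs(x_hat - x0) ** 2)

# example: a 10% sparse signal on a moderately noisy effective channel
print(scalar_block_mse(m_eff=1.0, noise_var=0.05, lam=0.2, rho=0.1))
```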
in the real - valued setting of ( [ eq : rls_eqscalsysmodel ] ) , and are real - valued , and the optimal solution becomes , which is quite different from its complex - valued counterpart in ( [ eq : optofxp ] ) .this difference turns out to be reflected on and and thus on and of ( [ eq : mq_pro1 ] ) .in particular , the mse of lasso with the -orthogonal matrix can be perfectly recovered if we set , and those and for in proposition [ pro1 ] and replace and by and , respectively .clearly , proposition [ pro1 ] provides a unified result that allows us to quantify the mse of lasso under a variety of measurement matrices .we will present detailed discussions in the next section .in this subsection , we aim to study the mse of lasso under type - a and type - b measurement matrices .in particular , we will compare their performances and behaviors with those for random i.i.d .gaussian matrices . to make comparison fair between different setups , all cases of the measurement matrices are normalized so that ( referred to as the power constraint of the measurement matrix ) . if the elements of are i.i.d .gaussian random variables with zero mean and variance , then the power constraint of the measurement matrix is satisfied .we call this matrix the i.i.d . gaussian matrix . on the other hand ,if is a type - a matrix , the power constraint of the measurement matrix is naturally satisfied , and in fact , it satisfies the more stringent condition .meanwhile , in the type - b setup , we set the gain factor to satisfy this power constraint .since there is only one block , i.e. , , in the type - a and type - b setups , we omit the block index from all the concerned parameters hereafter , and proposition [ pro1 ] is anticipated to be greatly simplified .the mse of lasso under type - b orthogonal measurement matrix is given as follows .[ cor1 ] with the type - b orthogonal measurement matrix , the mse of lasso is given by , where are same as those in ( [ eq : mq_pro1 ] ) while the block index is omitted .the parameters are functions of which can be obtained by solving the following set of equations [ eq : par1_typeb ] with the following definitions [ eq : par2_typeb ] the above results can be obtained by substituting the corresponding parameters of type - b setup , i.e. , , into proposition [ pro1 ] .in addition , using ( [ eq : def_delta ] ) and ( [ eq : mqx_pro1a ] ) , we have eliminated .let us first consider the type - a setup , where we have and . if we set the gain factor , then we have . then ( [ eq : par2_typeb ] ) can be further simplified in the form [ eq : par2_typea ] recall that in the real - valued setting , and in ( [ eq : defgc ] ) should be replaced by and in ( [ eq : defgr ] ) . in this case, the above result gives exactly the same mse result as reported in ( * ? ? ?* example 2 ) . it should be noticed that the setting of `` '' is used above to align the setting of . according to the power constraint of the measurement matrix in this paper, we should set rather than . before proceeding , we present numerical experiments to verify our theoretical results . in the experiments ,type - a and type - b matrices were generated from a randomly scrambled dft matrix with .the proximal gradient method ( [ eq : proxgrad ] ) was used to solve lasso and obtain the reconstruction .the experimental average mse was obtained by averaging over independent realizations . 
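the measurement matrices used in these experiments can be generated along the following lines ; the scrambling ( a random column permutation of the unitary dft ) , the gain factor of the type - b matrix and the variance of the i.i.d . gaussian baseline are written here under the power - constraint convention described above , but the exact scaling factors are assumptions for illustration .

```python
import numpy as np

def scrambled_dft(n, rng):
    # unitary dft matrix with randomly permuted columns (explicit form, moderate n)
    F = np.fft.fft(np.eye(n), norm="ortho")
    return F[:, rng.permutation(n)]

def type_a_matrix(n, m, rng):
    # type-a: keep m randomly selected rows of an n x n scrambled dft
    return scrambled_dft(n, rng)[rng.choice(n, m, replace=False), :]

def type_b_matrix(n_big, m, n, rng):
    # type-b: keep m rows and n columns of an n_big x n_big scrambled dft;
    # the gain factor sqrt(n_big / n) is an assumed normalization restoring
    # the same average power as in the row-orthonormal case
    U = scrambled_dft(n_big, rng)
    rows = rng.choice(n_big, m, replace=False)
    cols = rng.choice(n_big, n, replace=False)
    return U[np.ix_(rows, cols)] * np.sqrt(n_big / n)

def iid_gaussian_matrix(m, n, rng):
    # i.i.d. complex gaussian baseline with entries of variance 1 / n
    return (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)
```

a lasso reconstruction under any of these matrices can then be obtained with the proximal gradient sketch given earlier , and the empirical mse averaged over independent draws of the matrix , the signal and the noise .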
to form a measurement matrix , the selected column and row sets at each realizationwere changed randomly .the experimental average mses of lasso under different selecting rates are listed in table [ tab : difselrate ] in which the theoretical mse estimates by corollary [ cor1 ] are also listed for comparison , with the parameters : , , , and . in table[ tab : difregpar ] , we fixed the selecting rate and repeated the previous experiment with different regularization parameters . finally , in table [ tab : difnoiselevel ] , we fixed the selecting rate and regularization parameter and repeated the experiment with different noise levels .we see that for all the cases , the differences between the two estimates are inappreciable .therefore , corollary [ cor1 ] provides an excellent estimate of the mse of lasso in large systems ..comparison between experimental and theoretical mses of lasso under different selecting rates for , , , and . [ cols="^,^,^,^,^,^",options="header " , ] + [ tab : fourcases ] next , we use the theoretical expression to examine the behaviors of mses under type - c measurement matrices . in figure[ fig : mse - fourcases ] , we compare the mses of lasso as a function of the regularization parameter for the four cases depicted in figure [ fig : fourcases ] .the mses for type - a and the i.i.d .gaussian counterparts are also plotted as references . as can be seen, type - a setup always gives the best mse result while the i.i.d .gaussian setup yields the worst mse result . however , type - a setup would not be always useful if the corresponding size of the fft operators is not available in some dsp chips . also , we see that case-1 and case-2 always have the same mse behaviors .this finding motivates us to get the following observation that can meet the same performance of type - b matrix via concatenating orthonormal bases .[ obs1 ] consider type - b measurement matrix with the column and row selection rates and .the mse of lasso under this measurement matrix is identical to that under the horizontal concatenation of matrices where each matrix is from a partial orthonormal matrix with the column and row selection rates and . for a meaningful construction , be subjected to . to see an application of this observation ,let us take two examples .first , consider type - b measurement matrix with and . applying observation [ obs1 ] , we have that the mse of lasso under the row - orthonormal measurement matrix is identical to that under the measurement matrix of ] with each being a partial orthogonal matrix with and . in this case , the columns of each are not orthogonal any more but _ nearly _ orthogonal .therefore , we can expect some performance degeneration under this measurement matrix .finally , it is clear that case-1 and case-2 in figure [ fig : mse - fourcases ] have the same mse behaviors but case-2 has a better structure in the parallel computation and less requirement in the size of the fft operator . from figure [ fig : mse - fourcases ] , we also observe that the measurement matrix constructed by vertical _ and _ horizontal concatenation of several blocks , i.e. , case-4 , has the worst performance among the structurally orthogonal matrices . 
as a matter of fact , if we continue to increase the number of concatenation blocks , then their mse performances will degrade accordingly . however , in any case , they are at least as good as their random i.i.d . gaussian counterparts . this observation hence provides another way to match the performance of the random i.i.d . gaussian matrix by vertically and horizontally concatenating orthonormal bases . finally , comparing the four cases in figure [ fig : mse - fourcases ] , we notice that if a type - a matrix is not available , case-3 provides the best mse result . this observation , together with the previous experiments , indicates that to construct a measurement matrix aiming for a good mse performance in the lasso formulation , one should follow the example of case-3 . that is to say , first try to use a row - orthogonal matrix that best fits the dimension of the measurement matrix , and then horizontally concatenate the remaining part . we have investigated the mse performance of estimating a sparse vector through an undersampled set of noisy linear transformations when the measurement matrix is constructed by concatenating several randomly chosen orthonormal bases and the lasso formulation is adopted . using the replica method in conjunction with some novel matrix integration results , we derived the theoretical mse result . extensive numerical experiments have illustrated excellent agreement with the theoretical result . our numerical results also revealed the fact that the structurally orthogonal matrices perform at least as well as the i.i.d . gaussian matrices . in particular , we have made the following observations : * type - a matrices ( or row - orthogonal matrices ) have the best mse performance out of all the other types of structurally orthogonal matrices and are significantly better than the i.i.d . gaussian matrices . * the advantage of the row - orthogonal matrix over the i.i.d . gaussian matrix is still preserved even when a random set of columns is removed ( which leads to a type - b matrix ) . when increasing the number of removed columns , the mse of lasso degenerates to the case of the i.i.d . gaussian matrices . in particular , we have shown that the asymptotic eigenvalue distribution of a type - b matrix with small column selection rate converges to that of the i.i.d . gaussian matrix . * in addition , a measurement matrix obtained by orthogonal matrix constructions enables fast computation and facilitates parallel processing . for this purpose , we have provided a technique to match the performance of a type - b matrix by horizontally concatenating orthogonal bases . our argument is more systematic than and leads to much wider applications . * on the other hand , we have shown that the measurement matrix constructed by vertical concatenation of blocks usually yields worse performance than the horizontal concatenation . however , they are at least as good as their random i.i.d . gaussian counterparts .
as a consequence ,we conclude that in addition to the ease of implementation , the structurally orthogonal matrices are preferred for practical use in terms of their good estimation performance .it was reported that orthogonal measurement matrices also enhance the signal reconstruction threshold in the noisy setups when the optimal bayesian recovery is used .promising future studies include performance evaluation under the optimal bayesian recovery and development of recovery algorithms suitable for the structurally orthogonal matrices .first , recall that we have rewritten the average free entropy in ( [ eq : avgfreeen ] ) by using the replica identity . within the replica method, it is assumed that the limits of and can be exchanged .we therefore write we first evaluate for an integer - valued , and then generalize it for any positive real number . in particular , given the partition function of ( [ eq : partfun_b ] ) , we obtain with . using the -th moment of the partition function and in ( [ eq : postdis_sys ] ), we have where ) ] .the equality of ( [ eq : replicpartfun_b ] ) follows from the fact that is a random vector taken from the input distribution in ( [ eq : px0 ] ) if and otherwise , and if and otherwise . before proceeding ,we introduce the following preprocessing to deal with the cases in which is a randomly sampled orthogonal matrix ( or , deleting row / columns independently ) . in particular, we find that it is convenient to work with the enlarged orthogonal matrix with rows and columns setting to zero rather removed . for clarity , we use the following definition . a square matrix is called a diagonal projection matrix if its off - diagonal entries are all zeros and its diagonal entries are zeros or ones .let and be diagonal projection matrices , where the numbers of nonzero diagonal elements of and are and , respectively .therefore , we characterize each block by where is an standard orthonormal matrix . since are independent standard orthonormal matrices , the positions of nonzero elements of the diagonal projection matrices are irrelevant . for the sake of simplicity , we assume that all the diagonal entries of and appear first , i.e. , ~~\mbox{and}~~ { { \bf t}}_{q , p } = \left [ \begin{array}{cc } { { \bf i}}_{n_p } & { { \bf 0}}\\ { { \bf 0 } } & { { \bf 0}}\end{array}\right ] , ~\forall p , q.\ ] ] recall the type - c matrix in section ii that the standard orthonormal matrix has been multiplied by .the gain factor can be included in via a scaling factor . for notational convenience , here, we do not use the expression of but absorb into .also , we enlarge and to be -dimensional vectors by zero padding . 
as a consequence, the input - output relationship of ( [ eq : sysmodel ] ) can be equivalently expressed as }_{\triangleq{{\tilde{{{\bf y } } } } } } = \underbrace{\left[\begin{array}{ccc } & | & \\ - & { { \tilde{{{\bf a}}}}}_{q , p } & - \\ & | & \end{array}\right]}_{\triangleq{{\tilde{{{\bf a } } } } } } \underbrace{\left[\begin{array}{c } | \\ { { \bf x}}_q \\ | \end{array}\right]}_{\triangleq{{\tilde{{{\bf x } } } } } } + \sigma_0 \underbrace{\left[\begin{array}{c } | \\{{\bf w}}_q \\| \end{array}\right]}_{\triangleq{{\tilde{{{\bf w}}}}}}.\ ] ] notice that all the following derivations are based on the enlarged system ( [ eq : enlargedsysmodel ] ) .therefore , by abuse of notation , we continue to write , , , , , and for , , , , , and , respectively .next , we introduce a random vector per block the covariance of and is a hermitian with entries given by {a , b } .\ ] ] for ease of exposition , we further write , , and . now , we return to the calculation of ( [ eq : replicpartfun_b ] ) . in ( [ eq : replicpartfun_b ] ), the expectations introduce iterations between and .however , the resulting iterations depend only on the covariance as those shown in ( [ eq : defq ] ) .therefore , it is useful to separate the expectation over into an expectation over all possible covariance and all possible configurations with respect to a prescribed set of by introducing a -function . as a result , ( [ eq : replicpartfun_b ] ) can be rewritten as where and {a , b}\right ) } _ { { { \bf x } } } { { \rm d}}{{\bf q}}.\ ] ] next , we focus on the calculations of ( [ eq : g1 ] ) and ( [ eq : measure_mu ] ) , respectively .let us first consider ( [ eq : g1 ] ) . integrating over s in ( [ eq : g1 ] ) by applying lemma [ lemma_gaussianintegr ] yields where .\ ] ] next , we consider ( [ eq : measure_mu ] ) . through the inverse laplace transform of -function , we can show that where is the rate measure of and is given by with being a symmetric matrix and . inserting ( [ eq : measure_muinn ] ) into ( [ eq : sf_e2 ] ) yields .therefore , as , the integration over can be performed via the saddle point method , yielding .\ ] ] substituting ( [ eq : g3 ] ) and ( [ eq : rate_fun1 ] ) into ( [ eq : qf ] ) , we arrive the free entropy ( [ eq : limf_appa ] ) at the saddle - point asymptotic approximation where and [ eq : tqq ] the saddle - points can be obtained by seeking the points of zero gradient of with respect to and . rather than searching for the saddle - points over general forms of and , we invoke the following hypothesis : the dependence on the replica indices would not affect the physics of the system because replicas have been introduced artificially for the convenience of the expectation operators over and . it therefore seems natural to assume _ replica symmetry _( rs),_{0,0}=\tilde{r}_{q , p} ] .it turns out that when , we get .therefore , to simplify notation , we set at the beginning .in addition , it is natural to let and be complex - valued variables .we will find that the whole exponents will depend only on the real part of , and turns out to be a real - valued variable .therefore , we let and be real - valued variables at the beginning . ] i.e. 
, , \label{eq : rs_q } \\\tilde{{\bf q}}_{q , p}&= \left [ \begin{array}{cc } 0 & { { \tilde{m}}}_{q , p } { { \bf 1}}_{\tau}^t \\ { { \tilde{m}}}_{q , p } { { \bf 1}}_{\tau } & ( { { \tilde{q}}}_{q , p}-{{\tilde{q}}}_{q , p}){{\bf i}}_{\tau } + { { \tilde{q}}}_{q , p}{{\bf 1}}_{\tau } { { \bf 1}}_{\tau}^t \end{array } \right ] .\label{eq : rs_tq}\end{aligned}\ ] ] this rs has been widely accepted in statistical physics and used in the field of information / communications theory , e.g. , . by lemma [ lemma_eigprojectionmatrix ], we can show that for the rs of ( [ eq : rs_q ] ) , the eigenvalues of are given by for and , which is rather laborious but straightforward . for readers convenience ,we detail the calculation in appendix b. ] [ eq : eigofsandq ] write , where ] denotes a -dimensional orthonormal basis composed of ^t ] and orthonormal vectors , which are orthogonal to both and .[ lemma_haarintegr ] let be a set of vectors that satisfy for some non - negative real values , be a set of independent haar measure of random matrices , and be a set of positive - semidefinite matrices .define then for large , we have this lemma extends ( * ? ? ?* lemma 1 ) to deal with the formula of ( [ eq : integroforthrm ] ) when and are the haar measure of _ complex _ random matrices . from the definition of and ,the vector can be considered to be uniformly distributed on a surface of a sphere with radius for each .then the joint probability density function ( pdf ) of is given by where is the normalization factor and is a set of complex numbers .the normalization factor is given by using the gaussian integration formula ( i.e. , lemma [ lemma_gaussianintegr ] ) with respect to , the normalization factor becomes since we are interested in the large analysis , the saddle - point method can further simplify the normalization factor to the form where the second equality is obtained by solving the extremization problem . next , we deal with the calculation of by writting where the second equality follows from the definition of the joint pdf of . applying the hubbard - stratonovich transformation ( lemma [ lemma_gaussianintegr ] ) together with the expressions ( [ eq : pu_p ] ) and ( [ eq : zfinal ] ) to the above provides \\ + \log\frac{\sigma^2}{\pi } - \frac{1}{n}\log { { \sf z}}.\end{gathered}\ ] ] using the gaussian integration repeatedly with respect to and yields where the third equality is obtained by applying the saddle - point method . , `` observed universality of phase transitions in high - dimensional geometry , with implications for modern data an alysis and signal processing , '' _ philos . trans .london a , math .1906 , pp . 42734293 , nov .2011 .h. nishimori , _ statistical physics of spin glasses and information processing : an introduction_.1em plus 0.5em minus 0.4emser . number 111 in int .series on monographs on physics .oxford u.k .: oxford univ . press , 2001 .
|
in this paper, we consider a compressed sensing problem of reconstructing a sparse signal from an undersampled set of noisy linear measurements. the regularized least squares or least absolute shrinkage and selection operator (lasso) formulation is used for signal estimation. the measurement matrix is assumed to be constructed by concatenating several random orthogonal bases, referred to as structurally orthogonal matrices. such a measurement matrix is highly relevant to large-scale compressive sensing applications because it facilitates fast computation and also supports parallel processing. using the replica method from statistical physics, we derive the mean-squared-error (mse) formula of reconstruction over the structurally orthogonal matrix in the large-system regime. extensive numerical experiments are provided to verify the analytical result. we then use the analytical result to study the mse behavior of lasso over the structurally orthogonal matrix, with a particular focus on performance comparisons with matrices having independent and identically distributed (i.i.d.) gaussian entries. we demonstrate that the structurally orthogonal matrices perform at least as well as their i.i.d. gaussian counterparts, so the use of structurally orthogonal matrices is well motivated in practical applications. compressed sensing, lasso, orthogonal measurement matrix, replica method.
|
although the theory of quantum games , originated in 1999 by meyer and eisert , wilkens , and lewenstein is only six years old , numerous results obtained during these years have shown that extending the classical theory of games to the quantum domain opens new interesting possibilities .although eisert and wilkens noticed that any quantum system which can be manipulated by two parties or more and where the utility of the moves can be reasonably quantified , may be conceived as a quantum game , the extreme fragility of quantum systems may make playing quantum games difficult . in this respect it is interesting whether quantum games with all their ` genuine quantum ' features could be played with the use of suitably designed macroscopic devices .the aim of this letter is to show that this is possible , at least in the case of a ` restricted ' version of a two - players , two - strategies quantum game proposed by marinatto and weber in which only identity and spin - flip operators are used .moreover , we show that this can be done at once by anyone equipped with a pack of 10 cards bearing numbers .our idea of playing quantum games with macroscopic devices stems from the invention devices proposed by one of us that perfectly simulate the behavior and measurements performed on two maximally entangled spin-1/2 particles .for example , they allow to violate the bell inequality with , exactly ` in the same way ' as it is violated in the epr experiments .a more recent and further elaborated model consists of two coupled spin-1/2 for which measurements are defined using ` randomly breaking measurement elastics ' . in this paperwe use the older model for a single spin-1/2 for which measurements are defined using ` randomly selected measurement charges ' . in order to play marinatto andweber s ` restricted ' version of two - players , two - strategies quantum game we shall not use the ` full power ' of this machine , but we give its complete description such that the principle of what we try to do is clear .the quantum machine is a model for a spin-1/2 particle consisting of a point particle with negative charge on the surface of a 3-dimensional unit sphere .the spin - state is represented by the point on .all points of the sphere represent states of the spin : points on the surface correspond to pure states , interior points represent mixed states , such that the point in the center of the sphere represents the density matrix .hence states are represented equivalently as this is the case in the bloch model for the spin 1/2 .a measurement along the direction consists in placing a positive charge in and a positive charge in .the charges and are taken at random from the interval ] and that .if the initial state of the machine is as depicted on fig .[ fig : mqg01 ] , the forces and between the negative charge and , respectively , positive charges and are if the electromagnetic forces pull the particle to the point where it stays and the measurement is said to yield outcome ` spin up ' , and if the particle is pulled to yielding outcome ` spin down ' . denoting the angle between directions and by , one obtains and hence the probability that is : , since is assumed to be uniformly distributed in the interval ] with uniform distribution satisfying the constraint assuming for simplicity that forces between ` left ' positive and ` right ' negative , resp . 
`right ' positive and ` left ' negative charges are negligible ( which can be achieved by using a rod that is long enough or by suitable screening ) we can make analogous calculations as for the single sphere model . the forces and between the negative charges placed at both ends of the rod and , respectively , positive charges and are latexmath:[\[f_{1}=c\frac{qq_{1}}{|b|^{2}}\text { \qquad and\qquad } f_{2}=c\frac{qq_{2}}{% depends on which force , or , is bigger . if the labels and are placed as on fig . [ fig : mqg02 ] , the result of the measurement is iff , and iff .the probability that is as follows : , since is assumed to be uniformly distributed in the interval }_{b}(p , q ) & = q[p(\alpha + \beta -2\gamma ) -\alpha this result might be surprising since the rod connecting two particles represents entanglement in the macroscopic quantum machine so one could expect that when the initial state of the game is not entangled , this connection should be broken .however , it should be noticed that in the device depicted on fig .[ fig : mqg02 ] the rod connecting two particles is , in fact , redundant .the reason for which we left it on fig .[ fig : mqg02 ] is twofold : firstly , we wanted to stress that our idea of a macroscopic device that allows to play quantum games stems from the ideas published in aer91,aeretal00 , and secondly , this rod will be essential for macroscopic simulations of other quantum games , more general than marinatto and weber s ` restricted ' ones .thus , we see that what vanishes in the ` non - entanglement ' limit of the considered quantum game is the ` randomness in measurement ' , since now ( except for the zero - probability case when , ) the initial state of the machine does not change in the course of the measurement whatever is the value of .[ [ marinatto - and - webers - restricted - quantum - game - realized - with - a - pack - of-10-cards ] ] marinatto and weber s ` restricted ' quantum game realized with a pack of 10 cards ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ the lack of any importance of the connecting rod and the fact that all distances , charges , and forces in the device depicted on fig .fig : mqg02 are symmetric with respect to the middle of the rod allow to produce a still more simple model of the considered game , in fact so simple that it can be played with a piece of paper and a pack of 10 cards bearing numbers .the game is played in three steps . in the first step the initial ` quantum ' state of the game ( [ in ] ) is fixed . since only the squared moduli of entanglement coefficients and are important and , it is enough to fix a point representing latexmath:[ ] .if a chosen number is smaller than which , if the probability distribution is uniform in % that _ exactly _ equals since its probability is , as well as the fact that in a series of drawings we actually choose one of numbers represented by separate points uniformly distributed in the interval $ ] ) . 
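the measurement mechanics of the single-sphere machine described above can be checked with a short monte carlo simulation. the sketch below (numpy assumed; the coulomb constant and the total charge are normalized to one, an assumption made purely for illustration) draws the charge fraction uniformly, compares the two forces, and recovers the quantum probability cos^2(theta/2) for the outcome 'spin up' when the state point makes an angle theta with the measurement direction.

import numpy as np

def p_up(theta, n_trials=200000, seed=0):
    rng = np.random.default_rng(seed)
    # squared distances from the state point on the unit sphere to the charges at u and -u
    d1_sq = 2.0 * (1.0 - np.cos(theta))  # |P - u|^2
    d2_sq = 2.0 * (1.0 + np.cos(theta))  # |P + u|^2
    q1 = rng.uniform(0.0, 1.0, n_trials)  # charge fraction, drawn uniformly with q1 + q2 = 1
    q2 = 1.0 - q1
    f1 = q1 / d1_sq  # pull toward u (constants dropped)
    f2 = q2 / d2_sq  # pull toward -u
    return np.mean(f1 > f2)  # outcome 'spin up' when the pull toward u wins

for theta in (np.pi / 6, np.pi / 3, np.pi / 2, 2 * np.pi / 3):
    print(round(p_up(theta), 3), round(np.cos(theta / 2) ** 2, 3))

the rod model of fig. [fig:mqg02] can be simulated in the same way, since only the relative size of the two forces decides the outcome.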
of course the calculations of the payoff functions that we made while describing the device depicted in fig. [fig:mqg02] are still valid in this case, so we again obtain a perfect macroscopic simulation of marinatto and weber's 'restricted' two-players, two-strategies quantum games. thus, one does not have to be equipped with sophisticated and costly devices, or perform subtle manipulations on highly fragile single quantum objects, in order to play quantum games, at least in the 'restricted' marinatto and weber version: all that suffices is a piece of paper and a pack of 10 cards!
|
we show that it is perfectly possible to play the 'restricted' two-players, two-strategies quantum games proposed originally by marinatto and weber with a pack of 10 cards as the only equipment. the 'quantum board' of such a model of these quantum games is an extreme simplification of the 'macroscopic quantum machines' proposed by aerts et al. in numerous papers, which allow one to simulate by macroscopic means various experiments performed on two entangled quantum objects.
|
pattern recognition algorithms often involve testing the significance of a statistical dependency between two binary variables and , given the observed counts , , , and .if the test is done only a couple of times , the computation time is not crucial , but in an exhaustive search the test may repeated thousands or even millions of times . for example , in data mining a classical problem is to search for the most significant classification rules of the form or , where is a set of binary attributes and is a binary class attribute .this problem is known to be -hard with common significance measures like the -measure and no polynomial time solutions are known .even a more complex problem is to search for all sufficiently significant dependency rules , where the consequence attribute is not fixed ( see e.g. ) . in both problemsthe number of all tested patterns can be exponential and therefore each rule should be tested as fast as possible , preferrably in a constant time .the problem is that typically the mined data sets are very large ( the number of attributes can be tens of thousands and the number of rows millions ) .still , the most significant ( non - trivial ) dependency rules may be relatively infrequent and the corresponding distributions too skewed for fast but inaccurate asymptotic tests .for accurate results , one should test the significance of dependency with fisher s exact test , which evaluates the exact probability of the observed or a stronger dependency in the given data , if and were actually independent . for a positive dependency between and the probability ( -value ) is defined by the cumulative hypergeometric distribution where is the absolute frequency of set , is the data size ( number of rows ) , and .( for a negative dependency between and it suffices to replace by . )however , fisher s exact test is computationally demanding , when the data size is large . in each testone should evaluate terms , which means that the worst case time complexity is .for example , if and , we should evaluate 200 001 terms .in addition , each term involves binomial factors , but they can be evaluated in a constant time , if all factorials , have been tabulated in the beginning . a common solution is to estimate the -values from the -measure , when the data size is moderate or large .the justification is that asymptotically ( when approaches ) can be approximated by the -based -values .however , in finite data sets the approximations are often quite inaccurate .the reason is that the -measure is very sensitive to the data distribution . if the exact hypergeometric distribution is symmetric , the -measure works well , but the more skewed the distribution is , the more inaccurate the approximated -values are .classically , it is recommended that the -approximation should not be used , if any of the expected counts ( under the assumption of independence between and ) , , , or is less than 5 .however , this rule of thumb prevents only the most extreme cases , which would lead to false discoveries . if the problem is to search for e.g. the best 100 dependency rules , the -measure produces quite different results than , even if all expected counts and the data size are large . in this paper, we introduce better approximations for the exact -values , which still can be calculated in a constant time .the approximations are upper bounds for the ( when the -based values are usually lower bounds , i.e. 
better than true values ) , but when the dependency is sufficiently strong , they give tight approximates to the exact values . in practice , they give identical results with the exact -values , when used for rule ranking .the idea of the approximations is to calculate only the first or a couple of first terms from exactly and estimate an upper bound for the rest terms .the simplest upper bound evaluates only the first term exactly .it is also intuitively the most appealing as a goodness measure , because it is reminiscent to the existing dependency measures like the odds ratio . when the dependencies are sufficiently strong ( a typical data mining application ) , the results are also highly accurate .however , if the data set contains only weak and relatively insignificant dependencies , the simplest upper bound may produce too inaccurate results . in this case, we can use the tighter upper bounds , one of which can be adjusted to arbitrary accurate .however , the more accurate -values we want to get , the more terms we have to calculate exactly .fortunately , the largest terms of are always the first ones , and in practice it is sufficient to calculate only a small fraction ( typically 210 ) of them exactly . the rest of the paper is organized as follows . in section [ sec2 ]we introduce the upper bounds and give error bounds for the approximations . in section [ seceval ]we evaluate the upper bounds experimentally , concentrating on the weak ( and potentially the most problematic ) dependencies .the final conclusions are drawn in section [ secconcl ] .the following theorem gives two useful upper bounds , which can be used to approximate fisher s .the first upper bound is more accurate , but it contains an exponent , which makes it more difficult to evaluate .the latter upper bound is always easy to evaluate and also intuitively appealing . [ ubtheorem ] let us notate and , . for positive dependency rule with lift each can be expressed as , where is constant .therefore , it is enough to show the result for . , where since decreases when increases , the largest value is .we get an upper bound the sum of geometric series is , which is the first upper bound . on the other hand , let us insert , and express the frequencies using lift . for simplicity , we use notations and .now , , and .we get the nominator is , because .therefore in the following , we will denote the looser ( simpler ) upper bound by and the tighter upper bound ( sum of the geometric series ) by . in ,the first term of is always exact and the rest are approximated , while in , the first two terms are always exact and the rest are approximated .we note that can be expressed equivalently as where is the leverage .this is expression is closely related to the odds ratio which is often used to measure the strength of the dependency .the odds ratio can be expressed equivalently as we see that when the odds ratio increases ( dependency becomes stronger ) , the upper bound decreases . in practice , it gives a tight approximation to fisher s , when the dependency is sufficiently strong .the error is difficult to bind tightly , but the following theorem gives a loose upper bound for the error , when is used for approximation .[ ub2bound ] when is approximated by , the error is bounded by upper bound can cause error only , if . if , and if , .let us now assume that .the error is .it has an upper bound this leads to the following corollary , which gives good guarantees for the safe use of : if , then . 
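for reference, the cumulative hypergeometric sum that defines fisher's p and its cost are easy to make concrete. the sketch below (plain python; the variable names n, n_x, n_y and n_xy stand for the data size, the marginal frequencies and the joint frequency, since the original symbols did not survive extraction) evaluates the exact one-sided p for a positive dependency; its running time grows with min(n_x, n_y) - n_xy, which is exactly the cost the upper bounds are designed to avoid.

from math import comb

def fisher_p_positive(n, n_x, n_y, n_xy):
    # one-sided fisher's exact test for positive dependency between X and Y:
    # probability of n_xy or more co-occurrences under independence
    denom = comb(n, n_y)
    i_max = min(n_x, n_y)
    return sum(comb(n_x, i) * comb(n - n_x, n_y - i) for i in range(n_xy, i_max + 1)) / denom

print(fisher_p_positive(1000, 500, 500, 300))  # illustrative counts only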
according to theorem [ ub2bound ] , , if is true , when . on the other hand , , when , because this is , because .a sufficient condition for is that this result also means that , when the lift is as large as required .the simpler upper bound , , can cause a somewhat larger error than , but it is even harder to analyze .however , we note that only , when . when , there is already some error , but in practice the difference is marginal .the following theorem gives guarantees for the accuracy of , when .if is approximated with and , the error is bounded by .the error is , where by theorem [ ub2bound ] . when , ( being a decreasing function of ) is therefore , the error is bounded by when , , and thus therefore , the latter factor is always , because .therefore .our experimental results support the theoretical analysis , according to which both upper bounds , and , give tight approximations to fisher s , when the dependency is sufficiently strong .however , if the dependency is weak , we may need a more accurate approximation .a simple solution is to include more larger terms to the approximation and estimate an upper bound only for the smallest terms using the sum of the geometric series .the resulting approximation and the corresponding error bound are given in the following theorem .we omit the proofs , because they are essentially identical with the previous proofs for theorems [ ubtheorem ] and [ ub2bound ] .[ generalub ] for positive dependency rule holds where and .the error of the approximation is [ examplefig ] shows the typical behaviour of the new upper bounds , when the strength of the dependency increases ( i.e. increases and and remain unchanged ) .in addition to upper bounds and , we consider a third upper bound , , based on theorem [ generalub ] , where the first three terms of are evaluated exactly and the rest is approximated .all three upper bounds approach to each other and the exact -value , when the dependency becomes stronger . and three upper bounds as functions of , when , , and .the strength of the dependency increases on the -axes.,scaledwidth=70.0% ] figure [ zoomfig ] shows a magnified area from figure [ examplefig ] . in this area ,the dependencies are weak , the upper bounds diverge from the exact .the reason is that in this area the number of approximated terms is also the largest .for example , when , contains 146 terms , and when , it contains terms . in these pointsthe lift is and , respectively .the difference between and is marginal , but clearly improves . showing the differences , when the dependency is weak.,scaledwidth=70.0% ]because the new upper bound gives accurate approximations for strong dependencies , we evaluate the approximations only for the potentially problematic weak dependencies . as an example , we consider two data sets , where the data size is either or . for both data sets , we have three test cases : 1 ) when , 2 ) when and , and 3 ) when and .( the second case with is shown in figures [ examplefig ] and [ zoomfig ] . ) for all test cases we have calculated the exact , three versions of the upper bound , and , and the -value achieved from the one - sided -measure .the -based -values were calculated with an online chi - square calculator .the values are reported for the cases , where , , and . because the data is discrete ,the exact -values always deviate somewhat from the reference values . 
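the coefficients appearing in theorem [generalub] are garbled in the text above, so the sketch below is not the paper's exact expression; it implements the same idea in a self-contained way: evaluate the first t terms of the tail exactly and bound the remainder by a geometric series whose ratio is the ratio of consecutive hypergeometric terms at the last exact term (this ratio is decreasing in the summation index, so the bound is valid; taking t = 1 gives the analogue of the simplest bound).

from math import comb

def fisher_p_upper_bound(n, n_x, n_y, n_xy, t_exact=2):
    denom = comb(n, n_y)
    i_max = min(n_x, n_y)

    def term(i):
        return comb(n_x, i) * comb(n - n_x, n_y - i) / denom

    def ratio(i):  # term(i + 1) / term(i), a decreasing function of i
        return ((n_x - i) * (n_y - i)) / ((i + 1) * (n - n_x - n_y + i + 1))

    total, i = 0.0, n_xy
    while i < n_xy + t_exact and i <= i_max:  # leading terms evaluated exactly
        total += term(i)
        i += 1
    if i <= i_max:  # bound the remaining tail by a geometric series
        r = ratio(i - 1)
        if r < 1.0:
            total += term(i - 1) * r / (1.0 - r)
        else:  # dependency too weak for a geometric bound; fall back to the exact tail
            total += sum(term(j) for j in range(i, i_max + 1))
    return total

increasing t_exact trades a few extra exact terms for a tighter bound, mirroring the behaviour reported in the experiments below.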
the results for the first data set ( )are given in table [ pfapprcomp1000 ] and for the second data set ( ) in table [ pfapprcomp10000 ] .as expected , the -approximation works best , when the data size is large and the distribution is balanced ( case 1 ) . according to the classical rule of a thumb ,the -approximation can be used , when all expected counts are .this requirement is not satisfied in the third case in the smaller data set .the resulting -based -values are also the least accurate , but the -test produced inaccurate approximations also for the case 2 , even if the smallest expected frequency was 50 . in the smaller data set , the -approximation overperformed the new upper bounds only in the first case , when .if we had calculated the first four terms exactly , the resulting would have already produced a better approximation . in the larger data set , gave more accurate results for the first two cases , when and for the case 1 , when .when , the new upper bounds gave always more accurate approximations .if we had calculated the first eight terms exactly , the resulting would have overperformed the -approximation in case 1 with and case 2 with .calculating eight exact terms is quite reasonable compared to all 2442 terms , which have to be calculated for the exact in case 1 . with 15 exact terms , the approximation for the case 1 with would have also been more accurate than the -based approximation. however , in so large data set ( especially with an exhaustive search ) , a -value of 0.05 ( or even 0.01 ) is hardly significant. therefore , we can conclude that for practical search purposes the new upper bounds give better approximations to the exact than the .we have introduced a family of upper bounds , which can be used to estimate fisher s accurately . unlike the -based approximations , these upper bounds are not sensitive to the data size , distribution , or small expected counts . in practical data mining purposes , the simplest upper bound produces already accurate results , butif all existing dependencies are weak and relatively insignificant , the results can be too inaccurate . in this case, we can use a general upper bound , whose accuracy can be adjusted freely . in practice ,it is usually sufficient to calculate only a couple of terms from the fisher s exactly . in large data sets ,this is an important concern , because the exact can easily require evaluation of thousands of terms for each tested dependency .w. hmlinen : efficient search for statistically significant dependency rules in binary data .ph.d . dissertation .series of publications a , report a-2010 - 2 department of computer science , university of helsinki , finland .2010 .s. morishita and j. sese : transversing itemset lattices with statistical metric pruning . in proceedings of the nineteenth acm sigmod - sigact - sigart symposium on principles of database systems ( pods00 ) , pages 226236 , acm press 2000 .
|
fisher's exact test is often a preferred method for estimating the significance of statistical dependence. however, in large data sets the test is usually too laborious to apply, especially in an exhaustive search (data mining). the traditional solution is to approximate the significance with the chi-squared measure, but the accuracy is often unacceptable. as a solution, we introduce a family of upper bounds that are fast to calculate and approximate fisher's p-value accurately. in addition, the new approximations are not sensitive to the data size, distribution, or small expected counts, unlike the chi-squared-based approximation. according to both theoretical and experimental analysis, the new approximations produce accurate results for all sufficiently strong dependencies. the basic form of the approximation can fail with weak dependencies, but the general form of the upper bounds can be adjusted to be arbitrarily accurate. keywords: fisher's exact test, upper bound, approximation, dependency rule
|
in many scientific and technical contexts one encounters phenomena that may be modeled by fluxes transferring population between sources and sinks for various species .examples include kinetic processes that modify abundances and transfer energy in atomic , molecular , and nuclear systems ; geochemical , climate , and other environmental systems ; electrical circuits ; economic models ; and population dynamics .terminology varies but let us refer generically to these sources and sinks as _boxes , _ and term the resulting systems of boxes connected by fluxes _ reaction networks ._ such systems are commonly modeled by a coupled set of differential equations that describe a continuous flow of population through the boxes . the reaction network is often classified as a _ stiff system _ , which we shall define to be a system of equations containing multiple timescales ranging over many orders of magnitude .most physical systems involve important processes operating on very different timescales , so realistic problems tend to be at least moderately stiff .some , such as those encountered in many astrophysics applications , are extremely stiff , with fastest and slowest timescales in the problem differing by as much as 1020 orders of magnitude . in stiff systems the timestep constraints are set by numerical stability requirements rather than accuracy considerations .hence , explicit numerical integration of stiff systems is usually impractical because the maximum stable timestep is far too small for efficient solutions ( see , for example , refs .this is commonly addressed by employing implicit or semi - implicit stiff solvers that are stable , but that require time - consuming iterative matrix solutions .a given box in a reaction network often is connected strongly only to a few other boxes .for example , the explosive burning conditions encountered in astrophysical novae , x - ray bursts , or supernovae may require reaction networks with hundreds to thousands of nuclear isotopes . yet individual isotopes are typically connected directly to other isotopes through ( at most ) reactions of consequence , and under many conditions no more than 23 reactions are important for a given isotope .such restrictions on the direct box reaction coupling imply that the matrices appearing in the iterative implicit solution are _ sparse ._ although various methods are available to deal with sparse matrices , in practice many codes for solving large reaction networks have not exploited sparseness in particularly effective ways .for example , in astrophysical calculations with implicit solvers in large networks ( say species or more ) , one finds often that greater than 90% of the processor time is consumed in matrix operations .efficient algorithms exist for the required matrix algebra ( with incremental improvements in them over time ) , but the matrix nature of the core problem implies that the time required for implicit solution grows non - linearly with the size of the network .in typical working codes for large - scale applications , increasing the size of the network increases the time for solution , often quadratically , sometimes as much as cubically , until there are enough boxes in the network to justify the overhead of sparse - matrix methods with more favorable scaling . 
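the unfavourable scaling referred to above is essentially the cost of the dense linear solve hidden inside each implicit (newton) iteration. the timing sketch below (numpy assumed; the matrix sizes and the diagonal shift are arbitrary choices made only to keep the systems well conditioned) makes the growth with network size visible.

import time
import numpy as np

rng = np.random.default_rng(1)
for size in (50, 150, 450):  # representative network sizes
    a = rng.standard_normal((size, size)) + size * np.eye(size)  # stand-in for a dense jacobian
    b = rng.standard_normal(size)
    t0 = time.perf_counter()
    for _ in range(20):
        np.linalg.solve(a, b)  # the dense solve performed once per implicit iteration
    print(size, (time.perf_counter() - t0) / 20)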
in applications in thermonuclear networks , for example, it is often found that the overhead required to implement sparse - matrix iterative solutions is not justified until there are several hundred boxes in the network .thus , many present implicit stiff - network algorithms do not scale very gracefully to larger networks .we are primarily interested in the most ambitious applications of large networks , where the reaction network is only a portion of a larger problem .let us take as representative astrophysical thermonuclear reaction networks , where a proper description of the overall problem typically requires multi - dimensional hydrodynamics or radiation hydrodynamics coupled tightly to a large thermonuclear reaction network .the hydrodynamical evolution controls the conditions in the network such as temperature and density , and the network influences the hydrodynamic evolution strongly through energy production and modification of composition variables . as a consequence of the limitations discussed in the preceding paragraphs , the solution of large networks by the usual approaches is time - consuming and few calculations have attempted to couple the element and energy production strongly to the hydrodynamics with a network of realistic complexity .the most ambitious approaches use very small networks , perhaps tuned empirically to get critical quantities like energy production correct on average , coupled to the hydrodynamical simulation . in many calculationseven this is not done and the network is replaced entirely by parameterization . then a more complete network is run in a separate `` post - processing '' step , where fixed hydrodynamical profiles computed in the hydrodynamical simulation with the small network are used to specify the variation of thermodynamic variables such as temperature and density with time .astrophysical thermonuclear networks have been used for illustration , but many problems of scientific and technical interest exhibit similar complexity .examples include astrochemical kinetics , where one must model large chemical evolution networks in contracting molecular clouds , or combustion chemistry , where chemical burning networks are strongly coupled to simulations of the dynamics of the air and fuel mixture .physically - realistic networks in such contexts would often be quite large . in combustion of larger hydrocarbon molecules or studies of soot formation , hundreds to thousands of reacting species undergoing as many as 10,000 reactionsmay be encountered , and in supernova explosions hundreds to thousands of nuclear isotopes with tens of thousands of reaction couplings make non - zero contributions . for such cases one finds that current techniques do not allow for a coupling of realistic reaction networks to the full dynamics of the problem and often severely truncated or highly schematic reaction networks have been used in even the most realistic simulations .to be definite , we shall assume that the coupling of reaction networks is done using operator splitting , where the hydrodynamical solver is evolved for a numerical timestep holding network parameters constant , and then the network is evolved over the time corresponding to the hydrodynamical timestep holding the new hydrodynamical variables constant .this places two basic constraints on methods : 1 . 
at the end of each hydrodynamicaltimestep the network must be advanced with new initial conditions .thus , algorithms must be capable of rapid initialization and must not depend in a complex way on conditions from previous time intervals. 2 . with modern processors ,existing algorithms are reasonably adequate for many post - processing calculations .in contrast , for the operator - split , parallel processing environment that is our interest here , solution of the network over a hydrodynamic timestep must be fast enough that it does not require time substantially larger than that for the hydrodynamical solution .let us elaborate further on this second point .if a single processor were used to calculate both the hydrodynamical evolution and the network evolution in one hydrodynamical zone , the network evolution over a hydrodynamical timestep interval must be fast enough to not slow the calculation by too much relative to the hydrodynamical evolution alone .if we take the point of view that we are willing to tolerate longer compute times in the interest of a much more realistic calculation , but not longer by orders of magnitude , we estimate that the network must be capable of evolving over the time interval corresponding to the hydrodynamical timestep in roughly a second or less wall clock time .we take the multidimensional , adaptive - mesh , explicit hydrodynamical flash code [ 9 ] applied to type ia supernova simulations on large parallel systems as representative .the explicit hydrodynamical timestep will be limited overall by the courant time ( roughly , because stability requires a hydrodynamical timestep not larger than the sound - crossing time for the zone ) , and more stringently in zones of rapid burning where temperature and density may be changing rapidly . in current type ia supernova simulations the courant time would typically be s or smaller over most of the grid for the timescale relevant for the main part of the explosion , with rapid nuclear burning and associated temperature changes limiting the hydrodynamical timestep to s for some ranges of times . for qualitative estimates ,let us take as representative that a typical network integration for a single hydrodynamical timestep will be over an interval s during the time of strong burning and s over much of the approach to equilibration after strong burning . in flash, many spatial zones will be assigned to a single mpi rank on a parallel system .therefore , in the absence of node - level parallelism ( for example , with openmp ) , the network must be capable of calculating a number of hydrodynamic time intervals in a second or less if we wish to calculate an independent network for each zone .let us take for estimation purposed that we wish to be able to reliably integrate 1000 independent networks over a time interval of say seconds on a single processor in one second wall clock time , with each network containing several hundred isotopes .this places extremely strong startup and speed constraints on the required network .the explicit algorithms discussed here are capable of perhaps network timesteps per second on a single processor with present technology for a network with isotopes , so our goals require a network algorithm that can integrate a time interval of order seconds in no more than timesteps , implying average stable and accurate timesteps at least as large as times the elapsed integration time for the corresponding hydrodynamical evolution .figure [ fig : dthydro ] illustrates . 
k and initial mass density of .the shaded region between the two curves represents the range of timesteps an explicit network calculation must take to be able to integrate 100 - 1000 zones on a single processor over one operator - split hydrodynamical timestep in less than about one second elapsed time on modern processors .the hydrodynamical timestep lies roughly in the range , where is the elapsed time , over most of the range of integration . to maintain network timesteps within the band for each operator - split hydrodynamical timestep, we see that the algorithm must be capable of taking stable and accurate network timesteps approximately in the range over the entire range of hydrodynamical integration . ][ fig : dthydro ] such large timesteps are often possible with implicit and semi - implicit algorithms , but those methods are inefficient at computing each timestep ; explicit methods can compute a timestep efficiently , but timesteps this large are unthinkable with a normal explicit algorithm because they would be unstable in most realistic situations . in this and the other two papers of this series we shall demonstrate stabilization methods for explicit integration that realize such competitive integration timesteps in a variety of examples .thus we shall reopen the discussion of whether explicit methods , with their faster computation of timesteps and more favorable scaling with network size , are practical for large , stiff networks .the general task is to solve efficiently coupled ordinary differential equations subject to appropriate boundary conditions . in this expression ,the describe the dependent variables ( typically measures of abundance ) , is the independent variable ( the time in our examples ) , the fluxes between species and are denoted by , and is the effective rate for all processes depleting the species .the sum for each variable is over all species coupled to by a non - zero flux , and for later convenience we have decomposed the flux into a component that increases the abundance of and a component that depletes it . for an -species networkthere will be such equations in the population variables , generally coupled to each other because of the dependence of the fluxes on the different .( for notational simplicity we will not always display the index explicitly on the right side in our equations ) . the variables are typically proportional to a number density for the species . to keep the discussion general the variable will be used in most of our equations , but for the specific astrophysical examples that follow we shall replace the generic population variables with the mass fraction , which satisfies where is avogadro s number , is the total mass density , and is the atomic mass number and the number density for the species carbon nitrogen oxygen ( cno ) cycle that powers main - sequence stars more massive than the sun provides a graphic illustration of stiffness in a relatively simple system of large physical significance .the cno cycle is displayed in fig .[ fig : cnocycle ] .[ fig : cnocycle ] we shall illustrate by considering the primary part of the cno cycle illustrated on the right side of the figure . 
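a minimal sketch of the right side of the coupled equations above, assuming schematic mass-action kinetics in which the flux of a reaction is its rate times the product of its reactant abundances; a real thermonuclear network would additionally fold the density, temperature-dependent rates and the avogadro/mass-number factors of the mass-fraction relation into the rates, so the function below illustrates only the bookkeeping.

import numpy as np

def network_rhs(y, reactions):
    # reactions: list of (reactant_indices, product_indices, rate) tuples
    dydt = np.zeros_like(y)
    for reactants, products, rate in reactions:
        flux = rate * np.prod(y[list(reactants)])
        for i in reactants:
            dydt[i] -= flux  # F^-: depletion of each reactant
        for j in products:
            dydt[j] += flux  # F^+: increase of each product
    return dydt

# a toy two-reaction chain 0 -> 1 -> 2 with very different rates (a mildly stiff system)
y0 = np.array([1.0, 0.0, 0.0])
reactions = [((0,), (1,), 100.0), ((1,), (2,), 0.1)]
print(network_rhs(y0, reactions))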
if the thermonuclear network corresponding to the main part of the cno cycle is integrated under typical cno cycle temperature and density conditions by explicit forward euler methods using standard rates for the reactions and constant timesteps , the integration is stable for timesteps less than or equal to 285.7 seconds , but becomes catastrophically unstable for a timestep of 285.8 seconds or more .this instability threshold is _ precisely two over the fastest rate _ for the transitions in the network , which corresponds to the -decay of o to produce n. we now show that this instability for forward differencing of the cno cycles arises because rapidly - decreasing small populations can become negative in an explicit integration if the timestep is too large .these negative populations export unphysical negative population that can destabilize the system because they can lead to exponentially growing solutions in small components that ultimately couple to the larger components .let us elaborate through the use of a simple model illustrated in figs.[fig : negprob][fig : o15logabsolutediverge ] , which will generalize a discussion that may be found in ref . .is presumed to be sufficiently short that the large , slow components remain essentially unchanged over a time equal to many times .the explicit integration for the small , fast component diverges if , such that lies in the shaded region .this divergence is exponential for longer times and quickly propagates to the larger slow components , destabilizing the entire network .the illustration of the slow components is schematic ; in typical stiff systems their timescales may be many orders of magnitude longer than those for fast components and often they are larger in value by many orders of magnitude . ][ fig : negprob ] in fig .[ fig : negprob ] we assume a coupled system in which there are many components varying on a relatively long timescale ( illustrated schematically by the curves at the top ) and a single component that is small and exponentially decreasing on a much faster timescale , with a typical behavior so that .let us consider timesteps that remain small compared with the larger timescales in the system ( so that for a single step we are in an approximately adiabatic situation and the populations varying on long timescales may be considered frozen ) , but comparable to or larger than the timescale set by . to advance the solution for the fast component from to by the explicit euler method , we take a timestep and extrapolate the solution using the derivative evaluated at .the triangle of height in fig .[ fig : negprob ] summarizes , where represents the numerical solution and is the exact solution . from this constructionwe see that 1 . for , the forward euler approximant for step will yield a value of that lies between 0 and , so that .2 . for , the sign of will be negative but again .3 . 
for the sign of will be negative and .but for the forward euler approximation to we have , so that by iterating for successive steps of fixed size ( with still considerably less than the longer timescales in the system , so that the adiabatic approximation remains valid ) , we obtain this converges toward the correct value of zero at larger times only if the product lies between zero and two , implying that the maximum value of that yields a convergent solution is bounded by .thus , in fig.[fig : negprob ] the maximum stable value of is less than and any point extrapolated by forward euler integration that lies in the shaded zone ( that is , ) will be unstable under the iteration ( [ negative1.2 ] ) , diverging to infinity rather than converging to the correct value of zero . because the small , fast components are coupled to the other components of the network , this divergence will quickly destabilize the entire network .let us illustrate with a concrete example .figure [ fig : compositeconvergenceabridged ] with for increasing explicit timesteps .generally for the solution converges to the correct value of zero but for the solution diverges to under successive iterations .note that the vertical scale has been increased by a factor of 10 in the rightmost figure . ][ fig : compositeconvergenceabridged ] corresponds to forward euler solution of with and timesteps ranging between and 0.03 . in this example, we see that 1 . for , the solution converges to zero . 2 . for , the solution oscillates between positive and negative values of the same absolute value and neither converges nor diverges .3 . for , the solution diverges to under successive iterations , with the divergence exhibiting exponential behavior for larger times .figure [ fig : o15logabsolutediverge ] o , which normally is of order or smaller under the conditions assumed here , can become negative .this unphysical condition is unstable and triggers an exponential runaway that quickly crashes the entire network . on the left side ,the _ absolute value _ of the envelope of the oscillating solution is plotted on a log scale , with the sign indicated adjacent to different parts of the curves .the right side shows a blowup of the region where the mass fraction begins to oscillate into negative values , now plotted on a linear scale to show clearly the envelope of the diverging oscillating solution .this figure is a generalization of fig .[ fig : negprob ] to a more complex network . ][ fig : o15logabsolutediverge ] applies the preceding model of the origin of the stiffness instability to the cno cycle . 
as noted earlier , ( for the parameters assumed ) ordinary explicit euler integration is highly unstable for a timestep larger than 285.7 seconds .this critical timestep is exactly two divided by the fastest rate parameter in the system , which is that for the -decay of to .expressing the network in matrix form and diagonalizing indicates that the origin of this instability lies in eigenvalues that exceed unity in absolute value if .generalizing the discussion associated with eq.([negative1.2 ] ) , for a coupled set of equations a finite - difference iteration for steps entails a matrix raised to the power applied to the original vector .this is guaranteed to converge only if no eigenvalue of the matrix exceeds unity in magnitude .less abstractly , fig .[ fig : o15logabsolutediverge ] indicates that the origin of this instability for standard explicit integration is the tendency of the population to become negative for large explicit timesteps .this is a more complex system than the previous simple decaying - exponential example because the population is depleted by the -decay but also replenished by the proton capture reaction on ( see fig.[fig : cnocycle ] and the inset diagram on the left side of fig.[fig : o15logabsolutediverge ] ) .nevertheless , we see that the origin of the stiffness instability is very similar to that illustrated in the preceding simple example : a small box population becomes unphysically negative because of taking too large a timestep , so in the next timestep the offending box exports an unphysical flux of negative population , triggering a divergence that rapidly compromises the entire network .motivated by the properties of stiffness illustrated in the previous examples , we first introduce a rather crude approximation for stabilizing explicit integration in stiff networks .although we shall discuss much better algorithms after that , it is instructive that even this simple approximation , by removing a primary source of stiffness , leads to an explicit algorithm that is usable in large , realistic , extremely - stiff networks .the results of the previous section indicate that one form of stiffness instability is generated by box populations that become very slightly negative and that these destabilize the network when they are exported as fluxes to other boxes ( effectively because this anomalous sign for the box population turns what should be exponential decay into exponential growth ) .thus , we invoke a simple flux - limiting prescription that if the population of a box becomes negative we do not change the population itself ( which would quickly violate conservation of probability ) but we suppress all export of that negative population to other boxes .thus , in the course of a calculation no flux is permitted out of boxes that have negative populations until their populations again become positive because of flux into them .formally , we require for all computed outgoing fluxes that . in shorthand, we refer to this as _ suppression of negative flux , _ and refer to the resulting algorithm as the _ flux - limited forward difference ( flfd ) algorithm . _ in its simplest implementation , which we shall illustrate here , we use the ( explicit ) forward euler method supplemented by the flux - limiting prescription , but the same approach can be applied to higher - order forward differencing . 
because we do not alter populations but only restrict their flow by this algorithm , it conserves probability .we may expect that some populations in the network will now be in error because of the flux - suppression criterion .however , such effects tend to involve the smallest populations ( because they are the ones most easily made negative by the numerical error ) , so we may expect that this approximation could be a good one for the larger populations , with the error concentrated in the smallest populations . for large networks coupled to hydrodynamical evolutionit is the changes in the larger populations that dominate energy production and concentration changes , so errors in the smaller populations are largely irrelevant .therefore , an algorithm that removes stiffness by preferentially concentrating errors in the smallest populations may be a very usable one . in the next section we test this idea on a realistic thermonuclear network .we illustrate the flux - limited forward difference algorithm by application to astrophysical thermonuclear networks under nova conditions .some representative isotopic abundances under nova ( hot cno cycle ) conditions calculated using the flfd algorithm are shown in fig . [fig : compare ] .k and constant density were assumed .the explicit network contained 145 isotopes , with 924 non - zero couplings .( b ) rates and timescales characteristic of a flfd nova simulation .conditions as for part ( a ) but with a larger reaction library : 896 isotopes with 8260 couplings were included ( though only several hundred isotopes were populated significantly ) .extremal rates plotted are restricted to those involving non - zero fluxes .( c ) comparison of maximum stable timestep [ from part ( b ) ] possible for a standard explicit integration with much larger stable timesteps and for some representative explicit flfd integrations . ][ fig : compare ] an initial isotopic abundance distribution enriched in heavy elements has been assumed , and reaction rates from the reaclib library have been used .figure [ fig : compare](a ) displays some representative populations , with the results of a standard implicit calculation shown as dashed lines and flfd calculations shown as symbols . an adaptive timestep was used in the flfd integration , with the timestep adjusted to keep the populations transferred between boxes in a timestep for some key populations within a prescribed range .this timestep is _ much _ larger than would be stable in a standard explicit integration , as we shall discuss further in [ ss : stiffness ] .note the very good agreement between implicit and explicit flux - limited methods over six orders of magnitude in the mass fractions in fig .[ fig : compare](a ) . in a realistic coupling of such a network to hydrodynamics in a nova simulation , isotopes with mass fractions smaller than would likely be not very important to the hydrodynamical evolution .figure [ fig : compare](b ) displays the fastest and slowest rates entering a representative flfd nova simulation as a function of time .the difference of some 18 orders of magnitude between the fastest and slowest rates at any timestep is an indication that this is an extremely stiff system . for standard explicit algorithms ,the largest timestep permitted by stiffness stability criteria generally is of order the inverse of the fastest rate in the network ( see the discussion in [ ss : simplemod ] and in ch .16 of ref. ) . 
for the calculations illustrated in fig .[ fig : compare](b ) , the inverse of the fastest rate gives the lower curve in fig .[ fig : compare](c ) . thus a normal explicit algorithm would be restricted by stability requirements to timesteps lying approximately in the shaded region below this curve ( seconds or less ) .in contrast , fig . [fig : compare](c ) displays two curves for stable flfd integration timesteps lying far above this region .the curve marked is for a timestep small enough to give accuracy comparable to fig .[ fig : compare](a ) .this timestep is seen to be about times larger than would be stable for a normal explicit integration .the curve marked is for a much larger flfd algorithm timestep that compromises accuracy for the weaker transitions but remains stable and calculates the stronger transitions correctly .the timestep 100 seconds is about times larger than would be stable for a standard explicit algorithm . since already is comparable to the characteristic timescale of a nova explosion, this example exhibits a stable explicit integration timestep for a realistic , extremely stiff system that is effectively arbitrarily large with respect to the usual upper limit for explicit integration .the picture that emerges from the present results is that the flfd method is susceptible to stiffness instability , just as for any other explicit method .however , the effect of the instability can be confined to controlled errors in small populations for a range of timesteps far beyond the normal onset of stiffness instability simply by imposing a flux limiter .this flux limiter does not prohibit negative populations but prevents their export between boxes in the network and thus prevents them from growing in uncontrolled fashion .the results of the preceding section indicate that even a relatively crude limit on propagation of negative population can remove large amounts of stiffness and leads to an algorithm useful in realistic applications , even for the extremely stiff networks common in astrophysics .but we now will demonstrate that one can do much better than that , by exploiting the algebraic structure of the differential equations to make a better approximation than just constraining the sign of propagating populations .the first step is that we must look more deeply at the stiffness instability . as we now discuss , for the sort of large and very stiff networks that we are addressing here there are several fundamentally different sources of stiffness instability that are often not clearly distinguished in the literature .the examples discussed to this point ( and in many textbooks ) emphasize the type of instability associated with small quantities that should strictly be non - negative becoming negative because of an overly ambitious numerical integration step .the discussion of the flux - limited forward difference algorithm in [ sh : flfdalgorithm ] illustrates that this type of instability can be removed by approximations that do not permit unphysical negative quantities to influence the rest of the network . however , there are other stiffness instabilities that may be initiated even when no population variables become negative in an integration step . 
in this kind of instabilitywe end up having to take the difference of large numbers to obtain a result very near zero .the numerical errors that ensue in a standard explicit approach can then accumulate rapidly and destabilize the network , even before any abundances become negative .this may still be viewed as a stiffness instability because it results from a numerical integration trying to deal with very different timescales , but the origin of these timescales is different from that discussed above . in this casethe disparate timescales are the very rapid reactions driving the system to equilibrium contrasted with the very slow timescale associated with equilibrium itself ( which tends to infinity ) .as we now consider , this distinction is essential to what follows because these stiffness instabilities have essentially different solutions .furthermore , we shall find that the second kind of instability can be divided into two subclasses requiring different stabilizing approximations , and that the approximations that we shall introduce in these cases will also take care naturally of the first class of stiffness instabilities because they will as a matter of course prevent the occurrence of negative probabilities in the network .we shall use `` equilibrium '' in a broad sense to mean a condition where the populations in a network are being strongly influenced by competition between terms of opposite sign on the right sides of the differential equations governing their evolution . how do we measure the degree of equilibration in a large , stiff network ? in terms of the coupled set of differential equations describing the network ,we may distinguish two qualitatively different conditions : 1 . a macroscopic equilibration that acts at the level of individual differential equations .2 . a microscopic equilibration that acts at the level of individual terms within a given differential equation . let us consider each of these cases in turn .the differential equations that we must solve take the general form of eq .( [ eq1.1 ] ) , , where the total flux has been decomposed into a component increasing the population and a component depleting the population in a given timestep .one class of approximations that we will investigate depends upon assuming that ( asymptotic approximations ) or constant ( steady - state approximations ) .we shall refer to these conditions as a macroscopic equilibration , since they involve the entire right side of a differential equation in eq .( [ eq1.1 ] ) tending to zero or a finite constant. we shall introduce approximations exploiting this whereby whole differential equations are removed from the numerical integration for a network timestep in favor of a algebraic approximate solutions for that timestep .such approximations do nt reduce the number of equations to integrate , but they reduce the number of equations integrated _ numerically by forward difference_. they reduce stiffness for any remaining equations that are integrated numerically in the timestep because removing the equations satisfying these conditions tends to reduce the disparity in timescales for the remaining equations . 
in eq .( [ eq1.1 ] ) , and for a given species generally each consist of a number of terms depending on the other populations in the network , at the more microscopic level , groups of individual terms on the right side of eq .( [ equildecomposition ] ) may come approximately into equilibrium ( the sum of their fluxes tends to zero ) , even if macroscopic equilibration conditions are not satisfied .the simplest possibility for this _ microscopic equilibration _ is that forward reverse reaction pairs such as , which will contribute flux terms with opposing signs on the right sides of differential equations in which they participate , come approximately into equilibrium .then we may consider an algebraic approximation that removes groups of such terms from the numerical integration , replacing their sum of fluxes identically with zero .this will not generally reduce the number of equations to be integrated numerically in a network timestep , but it can reduce dramatically the stiffness of those equations by removing terms with fast rates from the equations , thereby reducing the disparity between the fastest and slowest timescales in the system .such considerations will be the basis of the partial equilibrium methods that will be discussed in depth in the third paper in this series .there we shall also demonstrate two important general conclusions : ( 1 ) approximations based on microscopic equilibration are much more efficient at removing stiffness than those based on macroscopic equilibration , because they target more precisely the sources of stiffness in the network .( 2 ) the most powerful approach will be to use macroscopic and microscopic approximations simultaneously in the same set of equations , because they can complement each other in removing stiffness from the equations to be integrated numerically .the preceding section suggests that when microscopic equilibration becomes important in a network , the methods for dealing explicitly with stiffness are different from those used to deal with macroscopic equilibration .therefore , it is important to establish a quantitative measure of how much microscopic equilibration is present in the network .we shall introduce the simplest possibility : that the amount of microscopic equilibration in the network is measured by the fraction of reaction pairs such as that are judged to be in equilibrium ( with each reaction pair considered in isolation from the rest of the network for purposes of this determination ) .the full machinery to carry this out will be described in the third paper of this series , but our essential approach will be to extend the formalism introduced by mott to derive conditions on the populations for equilibrium in a reaction pair , and to determine whether the pair is in equilibrium by comparing the actual populations computed from the preceding network timestep with the theoretical equilibrium populations .the general result that we shall obtain is that for a forward reverse reaction pair of the form , the equilibrium abundance of each species is given by where , the parameters , , and are known functions of the current rate parameters and the populations at the beginning of the timestep , and the approach to equilibrium for the reaction pair is governed by a single timescale .we may then estimate whether a given reaction is near equilibrium at time by requiring for each species involved in the reaction , where is the actual abundance , is the equilibrium abundance determined by eq .( [ 2body1.6 ] ) , and is a user - 
specified tolerance that we shall choose typically to be of order . alternatively , we may use the equilibration timescale compared with the numerical timestep being taken as a measure of microscopic equilibration .we shall term a network _ strongly ( microscopically ) equilibrated _ if a significant fraction of its reaction pairs satisfy the condition ( [ 2body1.8b ] ) or its equilibration timescale is much less than the current integration timestep , and _ weakly ( microscopically ) equilibrated _ if at most a few percent of its reaction pairs satisfy eq . ( [ 2body1.8b ] ) or have values of considerably larger than the integration timestep . the remainder of this paper will deal with methods based on asymptotic approximations to stabilize explicit integration for networks that are at most weakly equilibrated , and a second paper will deal with methods based on quasi - steady - state approximations to stabilize weakly - equilibrated systems .the corresponding stabilization of networks near microscopic equilibrium will be discussed in the third paper of this series .the flux - limited forward difference approximation described in [ sh : novaexample ] illustrates the basic principle that explicit integration in non - equilibrium situations can be stabilized by forbidding the propagation of negative flux .however , the flfd algorithm represents only a zero - order solution to this problem ( a yes / no decision on whether flux components are allowed to propagate ) that can be improved substantially by exploiting the structure of the coupled equations to replace the numerical solution with an algebraic approximation that is strictly non - negative .although the approximations that we now discuss have been implemented in some form in earlier literature , we shall find that our implementation appears to be much more successful than previous applications to large , extremely stiff networks , and we shall reach different conclusions about these methods than those reached in earlier publications . the differential equations that we must solve take the form given by eq .( [ eq1.1 ] ) .generally , and for a given species each consist of a number of terms depending on the other populations in the network . the depletion flux for the population of species will be proportional to , where the are rate parameters ( in units of time ) for each of the processes that can deplete , which may depend on the populations and on hydrodynamical variables such as temperature and density .the characteristic timescales will differ by many orders of magnitude in the systems of interest , implying that the equations are stiff . from eq .( [ eq1.2 ] ) we may define the effective total depletion rate for at a given time , and a corresponding timescale as permitting eq .( [ eq1.1 ] ) to be written as thus , in a finite - difference approximation at timestep we have we now define the _ asymptotic limit _ for the species to be , implying from eq .( [ eq1.1 ] ) that .
in this limit( [ eq1.5 ] ) gives a first approximation and local error , respectively , for as for small we may then get a correction term by writing the derivative term in eq .( [ eq1.5 ] ) as where denotes order of .if we retain only the first term and approximate by , we obtain therefore , the estimate for is improved to where we now employ compact index notation , and so on , and have dropped the species index to avoid notational clutter .because we are approximating the derivative term , we expect that eq .( [ asysimple ] ) is valid only if the second term is small , implying that our approximation becomes more valid if is large .the preceding discussion implements an asymptotic approximation in a very simple way .other more sophisticated asymptotic approximations may be derived .for example , if we retain the full expression for the derivative in eq .( [ derivative ] ) and substitute in eq .( [ eq1.5 ] ) , we obtain setting and solving for gives where a term of order has been discarded .another approach is to use a predictor corrector scheme .we may solve eq .( [ eq1.1 ] ) by finite difference , replacing quantities on the right side of eq .( [ eq1.1 ] ) with averaged quantities : where for notational convenience we have traded the rate parameters for the corresponding timescales . solving this equation for gives which is not easy to use because the and are implicit functions of the that we seek .we obtain a usable explicit algorithm by solving an approximate form of eq .( [ soln1 ] ) to get a first guess for ( the predictor step ) , using that approximate result to estimate values of and , and then using these values obtained from the predictor step to solve eq .( [ soln1 ] ) ( the corrector step ) .specifically , * the predictor estimate is obtained by setting and in eq .( [ soln1 ] ) . *the corrector value is obtained by substituting for , and for in eq .( [ soln1 ] ) .this yields a predictor and corrector where `` 0 '' denotes initial quantities and `` p '' denotes quantities computed using the results of the predictor step .we shall test the asymptotic approximations ( [ asysimple ] ) , ( [ asysophia ] ) , and ( [ asyob ] ) in examples below .the mathematical and numerical properties of asymptotic approximations have been explored in refs . and we only summarize them here . 1 .the preceding asymptotic formulas are explicit , since only quantities already known are required to advance a timestep .2 . asymptotic methods do not need to compute jacobians or invert matrices .asymptotic methods are a - stable on linear problems .explicit asymptotic methods should scale linearly with network size .asymptotic methods require initial values from only one timestep ( self - starting ) . a more extensive discussion may be found in oran and boris and in mott .we now use the preceding formalism to define an explicit asymptotic flux - limiting integration algorithm .since the asymptotic approximation specified above is expected to be valid if is large , we define a critical value of and at each timestep cycle through all network populations and compute the product for each species using eq .( [ eq1.3 ] ) and the proposed timestep .then , for each population species 1 .if , we update the population numerically using the standard flux - limiting explicit algorithm discussed in [ sh : flfdalgorithm ] .2 . otherwise ,for , we update the population algebraically using one of the asymptotic approximations given in eqs.([asysimple ] ) , ( [ asysophia ] ) , or ( [ asyob ] ) . 
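The per-species updates invoked in steps 1 and 2 above can be sketched as follows. Since the explicit forms of eqs. ([asysimple]) and ([asyob]) are not reproduced in the text, the code shows two standard members of this family written in the notation used here (F+ for the flux in, k = 1/tau for the effective depletion rate): a simple asymptotic update obtained by treating the depletion term semi-implicitly, and a predictor-corrector that averages initial and predicted quantities. Treat them as plausible stand-ins rather than verbatim transcriptions of the paper's equations.

```python
def asymptotic_update(y, f_plus, k, dt):
    """Simple asymptotic update for dy/dt = F+ - k*y (stand-in for eq. [asysimple]).

    Treating the depletion term at the new time while keeping F+ explicit gives
    y_new = (y + dt*F+) / (1 + dt*k), which reduces to the asymptotic limit F+/k
    when k*dt >> 1 and to forward Euler when k*dt << 1.
    """
    return (y + dt * f_plus) / (1.0 + dt * k)


def predictor_corrector_update(y0, fluxes, dt):
    """Predictor-corrector update in the spirit of eq. [asyob].

    fluxes(y) must return (F+, tau) evaluated at abundances y, with tau = 1/k.
    Writing the finite difference with y averaged over the step,
        (y1 - y0)/dt = F+ - (y0 + y1)/(2*tau),
    and solving for y1 gives the helper below.  The predictor uses initial
    values of F+ and tau; the corrector re-solves with F+ and 1/tau averaged
    over initial and predicted values.
    """
    def solve(fp, tau):
        a = dt / (2.0 * tau)
        return (y0 * (1.0 - a) + dt * fp) / (1.0 + a)

    fp0, tau0 = fluxes(y0)
    y_pred = solve(fp0, tau0)                      # predictor

    fp1, tau1 = fluxes(y_pred)
    fp_bar = 0.5 * (fp0 + fp1)                     # averaged flux in
    tau_bar = 2.0 / (1.0 / tau0 + 1.0 / tau1)      # average of the rates 1/tau
    return solve(fp_bar, tau_bar)                  # corrector
```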
from considerations such as those in [ sh : large - stiff ] we expect an explicit integration to be stable if and to potentially be unstable if , a possible choice is .for the networks discussed in this paper , we have found this value of to work well and have adopted it .for that specific choice of , explicit numerical integration should be stable for those cases where it is applied , so we expect that at each timestep all abundances will remain positive and the flux - limiting prescription for the explicit numerical integration may be dropped .notice that at each timestep some species may be updated by explicit forward difference and some by the asymptotic approximation , and that the division of species between these two categories could change at each timestep since the product is time - dependent .implementing the preceding algorithm requires an adaptive timestepper .we take the point of view that the present task is to establish whether explicit methods can even compete with implicit methods for stiff networks .since previous studies have generally found that explicit methods fail by many orders of magnitude to attain the speeds of implicit methods in highly - stiff networks , our timestepper need not be highly optimized at this point to answer that question .the first consideration is a standard one that limits the population change at each timestep .however , the preceding algorithm does not guarantee that total population is conserved , so in setting an adaptive timestep one should make a check that ensures conservation of population at the desired level .generally , if population conservation does not satisfy the required tolerance at a given timestep , making the timestep small enough is guaranteed to improve conservation of population because it will reduce and therefore will tend to decrease the number of isotopes being treated asymptotically . to ensure conservation of particle number at a desired level in the overall calculation , we may limit the deviation in any one timestep to a small amount .thus , we adopt a simple timestepper with two stages : 1 .after the rates and fluxes are computed at the beginning of a new timestep , compute a trial timestep based on limiting the change in populations that would result from that timestep to some specified tolerance . use the minimum of this trial timestep and the timestep that was taken in the previous integration timestep to update the populations by the explicit asymptotic algorithm .2 . check for conservation of population .if the conservation law is satisfied within the desired tolerance for this timestep , proceed . if it is not , or is satisfied too well , decrease or increase the timestep as appropriate by a small factor and repeat the calculation of populations with the new timestep but original fluxes .we then accept this timestep , without further check .one could also iterate to ensure that the conservation condition is satisfied at each step , but this did not significantly improve the results in our tests .far - removed from equilibrium the limitation in population changes determines the timestep with this algorithm , but in the approach to equilibrium the timestep becomes dominated by the probability conservation criterion . 
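A minimal sketch of this two-stage timestepper is given below. The structure (a trial step limited by population change and capped by the previous step, followed by a single conservation-based adjustment that reuses the original fluxes) follows the description above; the tolerance values, the growth and shrink factors, and the critical value kappa separating explicit from asymptotic updates are illustrative assumptions.

```python
import numpy as np

def advance_step(y, compute_fluxes, dt_prev,
                 change_tol=0.1, conserve_tol=1e-4, kappa=1.0):
    """One network timestep with the two-stage adaptive control described above.

    y              : current abundances (1-d numpy array)
    compute_fluxes : function returning (F_plus, k) arrays for abundances y
    dt_prev        : timestep taken on the previous integration step
    change_tol     : max fractional population change per step (assumed value)
    conserve_tol   : allowed drift of the summed population per step (assumed)
    kappa          : critical k*dt separating explicit from asymptotic updates
    """
    f_plus, k = compute_fluxes(y)
    dydt = f_plus - k * y

    # Stage 1: trial timestep limited by the allowed population change,
    # capped by the previous timestep (fluxes are computed only once).
    active = (y > 0) & (dydt != 0)
    dt_trial = (change_tol * np.min(y[active] / np.abs(dydt[active]))
                if active.any() else dt_prev)
    dt = min(dt_prev, dt_trial)

    def update(dt):
        kdt = k * dt
        asy = kdt > kappa                     # species treated asymptotically
        y_new = y + dt * dydt                 # standard explicit forward difference
        y_new[asy] = (y[asy] + dt * f_plus[asy]) / (1.0 + kdt[asy])
        return y_new

    y_new = update(dt)

    # Stage 2: single conservation check; adjust the step once with the
    # original fluxes, then accept without further iteration.
    drift = abs(y_new.sum() - y.sum()) / y.sum()
    if drift > conserve_tol:
        dt *= 0.5
        y_new = update(dt)
    elif drift < 0.01 * conserve_tol:
        dt *= 1.2
        y_new = update(dt)

    return y_new, dt
```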
though it is likely not very optimized , we have found this simple timestepper to be stable and accurate for the varied astrophysical thermonuclear networks that we have tested , and thus adequate for our task here .this should be contrasted with previous attempts to apply asymptotic methods to thermonuclear networks , which failed to produce accurate results and were abandoned as unsuitable for such stiff networks .we shall be comparing explicit and implicit methods using codes that are at very different stages of development and optimization .we assume that for codes at similar levels of optimization the primary difference between explicit and implicit methods would be in the extra time spent in implicit - method matrix operations . hence , if the fraction of time spent on linear algebra is for an implicit code , we assume that an explicit code at a similar level of optimization could compute a timestep a factor of faster . in table [ tb : explicitspeedup ] we display these speedup factors , based on data obtained by feger using the implicit , backward - euler code xnet with both dense and sparse solvers :

network         isotopes    speedup
pp                  6
alpha              16        3
nova              134        7
150-isotope       150        7.5
365-isotope       365

then we may compare roughly the speed of explicit versus implicit codes ( possibly at different levels of optimization ) by multiplying by the ratio of implicit to explicit integration steps required for a given problem .this procedure has obvious uncertainties , and likely underestimates the speed of an optimized explicit versus optimized implicit code , but will give a useful lower limit on how fast the explicit calculation can be .first we shall establish that the asymptotic algorithm is capable of correct integration of coupled sets of extremely stiff equations .we have tested this for a variety of calculations in two ways : 1 .comparisons with results from standard implicit codes , and 2 . for some smaller networks where the corresponding integration time is not prohibitive , comparison with explicit forward euler calculations made with timesteps short enough to be stable .our general finding is that the asymptotic algorithm outlined above gives the same results as standard implicit and explicit codes , even for the stiffest networks found in astrophysics applications , provided that the numerical timesteps are limited sufficiently to ensure conservation of overall probability in the network at the desired level of precision .figure [ fig : compareasyexplicitt9_7rho1e8 ] illustrates an asymptotic approximation calculation that gives results essentially identical to results from exact numerical integration . ( figure caption : comparison of the asymptotic approximation ( [ asysimple ] ) with a forward euler integration that used timesteps short enough to be stable for an alpha network at constant temperature and density of . initial abundances corresponded to equal mass fractions of and , and rates from the reaclib library were used . the left figure compares mass fractions ; the right figure compares differential and integrated energy production . solid lines are explicit forward - euler integration with a timestep constrained to be less than the inverse of the fastest rate in the system so that it is stable ; dashed lines are the corresponding asymptotic approximation using eq . ( [ asysimple ] ) . the value of oscillates between positive and negative values so the log of the absolute value of is plotted , with the sign of indicated in each region . )
in this example the sum of the mass fractions was constrained to deviate from unity by not more than 1% over the entire range of the calculation by requiring that if it deviates by more than from unity in any one timestep , the timestep is reduced in size .we shall generally use this global 1% criterion for examples presented in this paper .higher precision may be obtained by tightening this constraint , but this is typically already more conservative than justified since input parameter uncertainties in realistic large thermonuclear networks and uncertainties in the coupled hydrodynamics may each be considerably larger than 1% .having established that explicit asymptotic approximations can give correct results even for extremely stiff networks , we now turn to the question of their efficiency ( and further tests of accuracy ) by examining calculations of various astrophysical thermonuclear networks using this approximation . in this sectionwe shall use the simplest asymptotic approximation , corresponding to eq.([asysimple ] ) . in [ sh : asysophisticated ] we shall test the alternative asymptotic approximations of eqs .( [ asysophia ] ) and ( [ asyob ] ) .[ ppasy ] the solar pp - chains provide a striking example of stiffness in a simple network . in fig .[ fig : ppchains ] integration of the pp - chains under constant temperature and density characteristic of the core of the present sun : and a density of , assuming solar initial abundances .reaction rates were taken from the reaclib library .( a ) mass fractions for the asymptotic method of eq .( [ asysimple ] ) ( dotted curves ) and for the standard implicit code xnet ( solid curves ) .( b ) integration timesteps for the asymptotic method ( dotted magenta ) and the implicit method ( solid green ) .the expected maximum stable fully - explicit timestep is indicated by the dashed blue curve . ]we illustrate integration of the pp - chains at a constant temperature and density characteristic of the core in the present sun , using the asymptotic method and the implicit backward - euler code xnet .we see that the explicit asymptotic integration gives results for the mass fractions in rather good agreement with the implicit code over 20 orders of magnitude , and is generally taking timesteps that are comparable to those for the implicit code over the entire range of integration .( the asymptotic method required 333 total integration steps versus 176 steps for the implicit code . ) as shown in [ sh : large - stiff ] , the maximum stable timestep for a standard explicit integration method may be estimated as the inverse of the fastest rate contributing to the network .this is illustrated by the dashed blue curve in fig .[ fig : ppchains](b ) . at late timesthe explicit integration timesteps are times larger than the maximum stable timestep for a normal explicit integration .the calculation illustrated in fig .[ fig : ppchains ] takes less than a second on a 3 ghz processor with the explicit asymptotic method ( as does the implicit solver ) .in contrast , from fig .[ fig : ppchains](b ) we estimate that a standard explicit method taking the largest stable fully - explicit timestep would require a time about 1000 times longer than the age of the universe ( of processor time ) to compute the pp - chains to hydrogen depletion .[ asynovaconditions ] the preceding example entailed an exceedingly stiff but rather small network .let us now turn to an example that is highly stiff and involves hundreds of isotopes . 
in fig .[ fig : nova125d_xplushydroprofile](a ) ( a ) mass fractions for a network under nova conditions , corresponding to the hydrodynamical profile shown in ( b ) .the calculation used the explicit asymptotic method corresponding to eq .( [ asysimple ] ) and a network containing 134 isotopes coupled by 1531 reactions , with rates taken from the reaclib library and initial abundances enriched in heavy elements . ]we illustrate a calculation using the explicit asymptotic algorithm and a hydrodynamical profile shown in fig .[ fig : nova125d_xplushydroprofile](b ) that is characteristic of hot - cno burning in a nova outburst .the explicit asymptotic timesteps are displayed in fig .[ fig : nova125d_dtplusfraction](a ) .( a ) timesteps for integration of fig .[ fig : nova125d_xplushydroprofile ] .the solid red curve is from the asymptotic calculation .the dotted green curve is from an implicit integration using the backward - euler code xnet .the dashed blue curve estimates the largest stable fully explicit timestep as the inverse of the fastest rate in the system .( b ) fraction of isotopes that become asymptotic and fraction of reactions that reach partial equilibrium in the asymptotic - method calculation . ] in these simulations we see that the asymptotic network takes stable and accurate timesteps corresponding to over most of the time integration , except in the region of sharp temperature rise and strong burning , where . over most of the range of integration after burning commences , the asymptotic solver timesteps ( solid red curve in fig .[ fig : nova125d_dtplusfraction](a ) ) are a million or more times larger than the maximum stable timestep for a purely explicit method ( dashed blue curve in fig .[ fig : nova125d_dtplusfraction](a ) ) ; at late times this disparity increases and by the end of the calculation the asymptotic timesteps are approximately times larger than would be possible for a normal explicit integration . the generally large explicit asymptotic timesteps over the entire integration range illustrated in fig .[ fig : nova125d_dtplusfraction](a ) are greater than or equal to those for a typical implicit code , as may be seen by comparing with the implicit ( backward euler ) calculation timestepping curve shown in dotted green . in this calculationthe implicit method required 1335 integration steps while the explicit asymptotic calculation required only 935 steps , and furthermore we expect that the explicit timesteps can be computed more quickly than the implicit timesteps . for a network with 134 isotopes ,an optimized explicit code should calculate a timestep perhaps 7 times faster than typical implicit codes ( table [ tb : explicitspeedup ] ) .these large explicit timesteps are possible because during the simulation many isotopes become asymptotic but few reactions reach partial equilibrium , as illustrated in fig .[ fig : nova125d_dtplusfraction](b ) .similar results for nova simulations were obtained with the asymptotic method in refs . 
using a different nova hydrodynamical profile and a different reaction library .we conclude that the explicit asymptotic method may intrinsically be an order of magnitude faster than a state - of - the art implicit code for simulations under nova conditions .[ asytidalconditions ] the mass fractions as a function of time for a thermonuclear supernova event induced by tidal interaction in a white - dwarf are illustrated in fig .[ fig : tidalalpha_xplushydroprofile](a ) ( a ) mass fractions for an alpha network under tidal supernova conditions with an initial abundance of pure .the network contained 16 isotopes coupled by 46 reactions , with rates from reaclib .dotted curves calculated in asymptotic approximation using eq .( [ asysimple ] ) .solid curves calculated using the fully implicit code xnet .( b ) the corresponding hydrodynamical profile . ] for an alpha network and the hydrodynamical profile is illustrated in fig .[ fig : tidalalpha_xplushydroprofile](b ) .dotted curves correspond to the explicit asymptotic calculation and solid curves correspond to a fully - implicit , backward - euler calculation using xnet .the mass fractions for the two calculations are almost indistinguishable .the timestepping is compared for the asymptotic calculation and xnet in fig .[ fig : tidalalpha_dtplusfraction](a ) . ( a )asymptotic integration timesteps ( dotted red ) , integration steps for the implicit code xnet ( solid green ) and maximum stable purely explicit step ( dashed blue ) for the calculation in fig . [ fig : tidalalpha_xplushydroprofile ] .( b ) fraction of isotopes that become asymptotic and fraction of reactions equilibrated in the network . ]we see that the timestepping for the asymptotic calculation is somewhat better than for the implicit code ( 480 total integration steps for the asymptotic calculation versus 2136 total integration steps for the implicit calculation ) . estimating that a fully - optimized explicit solver can calculate a timestep about 3 times faster than an implicit code like xnet in a 16-isotope network ( table [ tb : explicitspeedup ] ), we surmise that the asymptotic method is capable of doing the integration for fig .[ fig : tidalalpha_dtplusfraction ] perhaps 10 times faster than a current implicit code .the relatively good timestepping for the asymptotic method in this case is because essentially no reactions in the network come into equilibrium , as illustrated in fig .[ fig : tidalalpha_dtplusfraction](b ) .the flat mass fraction curves at late times in fig .[ fig : tidalalpha_xplushydroprofile](a ) are not a result of equilibrium but rather of reaction freezeout caused by the temperature and density dropping quickly at late times as the system expands ( fig .[ fig : tidalalpha_xplushydroprofile](b ) ) . a calculation for the hydrodynamical profile illustrated in fig .[ fig : tidalalpha_xplushydroprofile](b ) but for a 150-isotope network is illustrated in fig .[ fig : tidal150xdt ] . ( a )mass fractions for a 150-element network with 1563 reaction couplings under tidal supernova conditions , corresponding to the hydrodynamical profile shown in fig .[ fig : tidalalpha_xplushydroprofile](b ) .initially the abundance was pure and rates were taken from reaclib .the mass fractions were calculated in asymptotic approximation using eq .( [ asysimple ] ) .( b ) the corresponding integration timesteps for the asymptotic method ( dashed red ) and the implicit code xnet ( solid green ) .the maximum stable explicit timestep is estimated by the dashed blue curve . 
]we note that in this example the asymptotic approximation permits timesteps that are typically 89 orders of magnitude larger than the maximum stable explicit timestep ( indicated by the dashed blue line ) .these timesteps are again competitive with those of a standard implicit code ( shown as the solid green curve in fig . [ fig : tidal150xdt](b ) ) . in this casethe implicit code took 2425 timesteps to complete the integration while the asymptotic method took 5593 integration steps , but this factor of about two fewer implicit timesteps would be more than offset by the significantly faster computation of each integration step expected for an optimized explicit asymptotic code integrating a network this large . for a 150-isotope networkwe expect that an optimized explicit code would be more than 7 times faster computing a timestep than an implicit code ( table [ tb : explicitspeedup ] ) , so a fully - optimized asymptotic method is probably capable of doing the integration in fig . [fig : tidal150xdt ] several times faster than a current implicit code . a calculation for the hydrodynamical profile illustrated in fig .[ fig : tidalalpha_xplushydroprofile](b ) for a 365-isotope network is illustrated in fig .[ fig : tidal365xdt ] . ( a ) mass fractions calculated in asymptotic approximation using eq .( [ asysimple ] ) for a 365-element network with 4325 reaction couplings under tidal supernova conditions , corresponding to the hydrodynamical profile shown in fig .[ fig : tidalalpha_xplushydroprofile](b ) . initially the abundance was pure and rates were taken from reaclib .( b ) the corresponding integration timesteps for the asymptotic method ( dashed red curve ) and the implicit code xnet ( solid green curve ) .the maximum stable explicit timestep is indicated by the dashed blue curve . ]we note that in this example the asymptotic approximation permits timesteps that are as much as times larger than the maximum stable explicit timestep ( dotted blue curve ) .this timestep is again competitive with that of an implicit calculation ( solid green curve in fig .[ fig : tidal365xdt](b ) ) .the implicit calculation required 2707 integration steps , compared with 5778 steps for the asymptotic calculation .but as noted above , this factor of two advantage of the implicit calculation should be more than offset by the much more efficient computation of each timestep in an optimized asymptotic code .for this 365-isotope network we may assume from table [ tb : explicitspeedup ] that the explicit code can calculate each timestep times faster than the implicit code , so an optimized asymptotic code should be capable of performing the integration in fig .[ fig : tidal365xdt ] perhaps 10 times faster than a state - of - the - art implicit code .similar results for networks under tidal supernova conditions have been found in refs .although a different set of reaction network rates was used in these references , the explicit asymptotic method was again found to be highly competitive with standard implicit methods for the tidal supernova problem . in fig .[ fig : scalingnetworksizenovasimulation ] linear scaling of wall clock time per integration step with number of isotopes in the network for the explicit asymptotic approximation .( a ) the nova simulation of fig .[ fig : nova125d_xplushydroprofile ] .( b ) the tidal supernova simulation of figs .[ fig : tidalalpha_xplushydroprofile][fig : tidal365xdt ] .the dashed lines are drawn only to guide the eye . 
]we illustrate scaling of integration time with network size for the nova simulation of fig .[ fig : nova125d_xplushydroprofile ] and the tidal supernova simulation of figs .[ fig : tidalalpha_xplushydroprofile][fig : tidal365xdt ] .the behavior is seen to be approximately linear , as expected for an explicit algorithm since no matrix inversions are required .all results presented to this point have used the simplest asymptotic formula defined in eq .( [ asysimple ] ) .this section compares results from eq.([asysimple ] ) to those obtained with the more sophisticated formulas ( [ asysophia ] ) , and ( [ asyob ] ) , for the case of an alpha network with constant and . in fig . [ fig : xasycompositet9_5rho1e8 ] comparison of mass fractions ( a ) and timesteps ( b ) using different asymptotic approximations for an alpha network with constant and .the network contained 16 isotopes coupled by 46 reactions , with rates taken from reaclib .initial abundances corresponded to equal mass fractions of and .simple refers to eq .( [ asysimple ] ) , he to eq .( [ asysophia ] ) , and o b to eq .( [ asyob ] ) . ]we display a composite of the computed mass fractions and the timestepping for the asymptotic approximations corresponding to these three formulas .we find that the more sophisticated asymptotic approximations in eqs.([asysophia ] ) and ( [ asyob ] ) give results for abundances that are very similar to those from eq .( [ asysimple ] ) , but can in some cases give more favorable timestepping by factors of several .these results are representative of tests on a variety of networks and we conclude that for typical thermonuclear networks eqs .( [ asysimple ] ) , ( [ asysophia ] ) , and ( [ asyob ] ) yield similar results , except for possible differences by factors of up to 23 in computational speed .in previous sections evidence has been presented that , well - removed from equilibrium , asymptotic methods can provide stable and accurate integration of the stiffest large networks with timesteps that are comparable to those employed in standard implicit and semi - implicit stiff solvers . in practice , for astrophysical thermonuclear networksthis means that timesteps are typically from to of the current time over most of the integration range , except for brief time periods where very strong fluxes are being produced and timesteps may need to be shorter to maintain accuracy . since explicit methods can generally compute each timestep substantially faster than for implicit methods , this suggests that such methods offer a viable alternative to implicit solvers under those conditions. however , the preceding statements are no longer true when substantial numbers of reaction pairs in the network begin to satisfy microscopic equilibrium conditions ( according to the criteria given in [ ss : measureequil ] ) .then the typical behavior for asymptotic approximations is for the timestep to become constant or only slowly increasing with integration time .( indeed , we have already seen an indication of this behavior at late times in fig .[ fig : xasycompositet9_5rho1e8](b ) ) .figure [ fig : dthydro_withband_withnetwork_withequilfraccomposite ] . 
][ fig : dthydro_withband_withnetwork_withequilfraccomposite ] illustrates for two different networks coupled to a single - zone hydrodynamical simulation .we see that the 19-isotope network is able only marginally to keep up with the hydrodynamical timesteps , while the 150-isotope network lags orders of magnitude behind for hydrodynamical integration times later than about s. the reason for this loss of timestepping efficiency for the asymptotic method is the approach to equilibrium at late times ; this is documented in the inset plots for fig .[ fig : dthydro_withband_withnetwork_withequilfraccomposite ] , which show the fraction of reactions in the respective networks that satisfy equilibrium conditions .the reason that the 19-isotope network is able to keep up much better than the 150-isotope network is also clear from the inset plots : the 19-isotope network slowly approaches 3040% equilibration in this region , but the 150-isotope network , with much faster reactions because it includes fast proton and neutron reactions not found in the 19-isotope network , quickly reaches 70% equilibration . since in most applications for extremely stiff astrophysical networksthe physical phenomena require integration over many decades of time , this lag of the asymptotic timestepping as equilibrium is approached is disastrous for such approximations and they quickly lose out to implicit methods , which can continue to take large integration steps even nearing equilibrium ( though they are inefficient at computing each timestep ) .another example of the failure of the asymptotic approximation to generate competitive timesteps is illustrated in fig .[ fig : viktorextended2alphacomposite ]. indicates approximately the maximum stable timestep for a purely explicit method ( corresponding to the inverse of the fastest rate in the network ) .( c ) the hydrodynamic profile .( d ) fraction of isotopes that become asymptotic and the fraction of network reactions that become equilibrated in the course of the explosion . ][ fig : viktorextended2alphacomposite ] the asymptotic mass fractions calculated in fig .[ fig : viktorextended2alphacomposite](a ) are quite accurate in comparison with the results from standard implicit codes , but the timestepping illustrated in fig .[ fig : viktorextended2alphacomposite](b ) is not competitive with implicit methods at late times . for , the asymptotic integration timesteps are only of order seconds ,whereas the implicit code is taking timesteps larger than seconds at this point .the reason for this failure of the asymptotic approximation can be seen clearly in fig . [fig : viktorextended2alphacomposite](b ) and ( d ) . in the region of rapid temperature rise illustrated in fig .[ fig : viktorextended2alphacomposite](c ) , the rates in the network increase dramatically and this causes the maximum stable explicit timestep to become much smaller , as illustrated in fig .[ fig : viktorextended2alphacomposite](b ) by the dashed blue curve .the steadily increasing explicit timestep ( solid green curve ) intersects this maximum stable explicit timestep curve near .but from fig .[ fig : viktorextended2alphacomposite](d ) we see that essentially no isotopes in the network satisfy the asymptotic condition at this point . 
thus the integration is forced to use a standard explicit method and the timestep must ( to maintain stability ) follow the decreasing dashed blue curve until around , when significant numbers of isotopes finally begin to satisfy the asymptotic condition and the explicit asymptotic algorithm is able to begin taking timesteps larger than the explicit limit .however , at this point the asymptotic timesteps are already orders of magnitude smaller than those an implicit method would use , and the asymptotic method is able to increase the timestep by only about two orders of magnitude before the system reaches equilibrium .thus , for the entire time range from until the explicit asymptotic method computes the network accurately but its timesteps lag many orders of magnitude behind those from standard implicit methods . as we shall explain in considerable detail in the third paper of this series , the reason for the loss of efficiency for asymptotic methods as equilibrium is approached is that the asymptotic approximation removes a large amount of stiffness associated with macroscopic equilibration , but near ( microscopic ) equilibrium a fundamentally new source of stiffness begins to play a role and it is not generally removed by the asymptotic approximation .indeed , we see clearly from fig . [ fig : viktorextended2alphacomposite](d ) that the fraction of reactions in the network that satisfy equilibrium conditions increases rapidly beginning at and reaches unity by . in this third paper we shall describe a new implementation of _ partial equilibrium methods _ that can be used in conjunction with asymptotic methods to turn equilibrium from a liability into an asset and increase the explicit timestepping by orders of magnitude in the approach to equilibrium . in that paper we will give examples suggesting that these methods are capable of giving timestepping competitive with that of implicit methods across the entire range of interesting physical integration times for a variety of extremely stiff reaction networks .explicit numerical integration can compute a timestep faster than implicit methods , and the time to compute a network explicitly scales linearly and therefore more favorably with network size than for implicit codes .nevertheless , previous discussions of numerical integration for very stiff systems have concluded rather uniformly that explicit methods are not competitive with implicit methods for stiff networks because they are unable to take large enough stable timesteps . to quote _ numerical recipes _ , `` for stiff problems we _ must _ use an implicit method if we want to avoid having tiny stepsizes . '' improvements in explicit methods based on using asymptotic and steady - state limiting solutions to remove stiffness from the network have had some success for systems of moderate stiffness such as various chemical kinetics problems .however , the previous literature has concluded that for extremely stiff networks such as those encountered commonly in astrophysical thermonuclear networks these methods are not competitive : they fail to give correct results , and their timesteps would be far too short to be useful even if the results were correct .this paper has presented evidence strongly challenging all of these conclusions . in summary , we have distinguished three fundamentally different sources of stiffness instability in such networks : 1 .
situations where small populations can become negative if the explicit timestep is too large , with the propagation of this anomalous negative population leading to exponentially growing terms that destabilize the network .situations where the right sides of the differential equations expressed as approach a constant derived from the difference of two large numbers ( the total flux in and total flux out ) , and numerical errors in taking this difference destabilize the network if the timestep is too large .3 . situations where on the right sides of the differential equations expressed in the form of eq .( [ equildecomposition ] ) the net flux in specific forward - reverse reaction pairs tends to zero as the system approaches equilibrium , leading to large errors if the timestep is too large because the net flux is derived from the difference of two large numbers and the timescale equilibrating the populations is short compared with the desired numerical timestep . using the extremely stiff systems characteristic of astrophysical thermonuclear networks as a stringent test, we have shown that asymptotic methods are very successful at removing the first two types of stiffness , and give correct results , even for the stiffest of thermonuclear networks , provided that adequate attention is paid to conservation of probability in the network .furthermore , we have shown various examples of stable and accurate timestepping with these methods in extremely stiff systems that are competitive with that of standard implicit codes , demonstrating in some simple but physically - important networks timesteps that are as much as 20 orders of magnitude larger than the maximum timestep that would be stable in a standard explicit method .asymptotic methods are adept at removing the first two types of stiffness listed above , permitting explicit numerical timesteps that are competitive with implicit methods even in the stiffest networks . however , we have also shown that such methods give correct results but fail to exhibit competitive timestepping when the system approaches microscopic equilibrium and the third type of stiffness instability begins to dominate . in a following paper , we shall provide evidence for competitive timestepping , even in the approach to equilibrium , if the explicit asymptotic method is supplemented by partial equilibrium approximations designed specifically to deal with the third type of stiffness instability .taken together , this paper and the following ones on quasi - steady - state methods and partial equilibrium methods present compelling evidence that algebraically - stabilized explicit integration methods are capable of timesteps competitive with implicit integration methods for a variety of highly - stiff reaction networks . since explicit methods can execute a timestep faster than an implicit method in a large network , our results suggest that algebraically - stabilized explicit algorithms may be capable of performing as well as , or even substantially outperforming , implicit integration in a variety of moderate to extremely stiff applications . 
because of the highly - favorable linear scaling for explicit methods , this fundamentally new view of the efficacy of explicit integration for stiff equations may be particularly important for applications in any field where it is imperative that more realistic and therefore much larger networks be used in complex physical simulations .we thank tony mezzacappa for useful discussions , austin harris for help with some of the calculations , eric lingerfelt for programming assistance , and christian cardall for a careful reading of the manuscript .research was sponsored by the office of nuclear physics , u.s .department of energy .fryxell b , olson k , ricker p , timmes f x , zingale m , lamb d q , macneice p , rosner r , truran j and tufo h 2000 flash : an adaptive mesh hydrodynamics code for modeling astrophysical thermonuclear flashes _ astrophys .j. suppl . _* 131 * 273 - 334
|
we show that , even for extremely stiff systems , explicit integration may compete in both accuracy and speed with implicit methods if algebraic methods are used to stabilize the numerical integration . the required stabilizing algebra depends on whether the system is well - removed from equilibrium or near equilibrium . this paper introduces a quantitative distinction between these two regimes and addresses the former case in depth , presenting explicit asymptotic methods appropriate when the system is extremely stiff but only weakly equilibrated . a second paper examines quasi - steady - state methods as an alternative to asymptotic methods in systems well away from equilibrium and a third paper extends these methods to equilibrium conditions in extremely stiff systems using partial equilibrium methods . all three papers present systematic evidence for timesteps competitive with implicit methods . because an explicit method can execute a timestep faster than an implicit method , algebraically - stabilized explicit algorithms might permit integration of larger networks than have been feasible before in various disciplines . _ keywords _ : ordinary differential equations , reaction networks , stiffness , reactive flows , nucleosynthesis , combustion
|
to illustrate the dynamics with a real - world example , consider two colleagues in science : a phd student and her supervisor .after years of collaboration , the student s scientific skills and interests have become more and more similar to her supervisor s .consequently , when graduation day approaches and the student looks for a postdoc position , her choice is inevitably biased by the influences from her supervisor .so where does she go ? from experience , we know with high probability that she goes to one of her supervisor s scientific friends friends who themselves have influenced and have been influenced by the supervisor and with whom it will be easy to establish a connection .that is , when the student navigates her social network for better access to information she is interested in , she uses information that has traveled across the network beyond her nearest neighbors . in this way , whether it is the quest for up - to - date information in science , business , or fashion , the organization , individual preferences , and flow of information make a social system integrated .+ to capture this dynamic , our model approach is to use agents with one goal : to be updated about topics they find interesting . for simplicity ,we limit the objects of interest to the agents themselves and exclude extrinsic topics .agents achieve this goal by communicating with connected friends and establishing new contacts in a changing social network .to improve their position in the network when making new friends , the agents need a perception of the overall system . by mimicking conversation in everyday life ,the agents gather information from distant parts of the network and build a simplistic map of the network beyond nearest neighbors .as the agents build their perception of the system through repeated communication with their friends , they gradually align their interests with agents in their proximity and thereby also align their future social choices .+ * implementation .* we incorporate the above elements of human interactions in a simple model with agents that quantifies communication and social navigation through 3 parameters : the communication to social navigation ratio , the interest size , and the flexibility .central to the model is to build and use a perception of the system .we therefore give each agent an individual memory .the memory consists of three one - dimensional arrays , the recollection memory contains names of the friends that provided information about agents . to compare the quality of the information with friends , the quality memory stores the age of each of the pieces of information .finally , the interest memory contains names of agents in a proportion that reflects the interest in these agents .recollection and quality memories and constitute agent s local map of the social structure , and is the interest memory with priorities of other agents ( see fig . [ fig1 ] ) . the basic model , accessible as an interactive java applet , is defined in terms of agents connected by a fixed number of links .the network model is executed in time steps , each consisting of one of the two events 1 ._ communication , and _ + 2 ._ social navigation _ , where the selection of communication topic and social - navigation direction are associated with interests as described in fig .[ fig1 ] . 
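A minimal sketch of the three memories just described is given below. The class layout and names are illustrative; in particular, giving the static global interest one slot per agent and the dynamic local interest a number of slots proportional to the interest size is an assumption consistent with, but not spelled out in, the text.

```python
import random

class Agent:
    """Minimal sketch of an agent's three memories (layout and names assumed).

    recollection[j] : which friend most recently provided information about agent j
    quality[j]      : age of that information (always 0 for the agent itself)
    interest        : list of agent names; the first n_agents slots are the static
                      global interest (one per agent), the remaining slots are the
                      dynamic local interest that communication overwrites
    """
    def __init__(self, name, n_agents, interest_size):
        self.name = name
        self.recollection = [None] * n_agents
        self.quality = [float("inf")] * n_agents
        self.quality[name] = 0                        # always up to date about oneself
        self.interest = list(range(n_agents))         # static global part
        self.interest += [random.randrange(n_agents)  # local part, initially random
                          for _ in range(interest_size * n_agents)]

    def pick_topic(self):
        """Proportionate selection: a random element of the interest memory."""
        return random.choice(self.interest)
```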
to select a topic of communication or direction of social navigation, an agent simply picks a random element in her interest memory and reads off the name of the agent that she has stored there .because the agents also update their interest memories when they communicate , the generated feedback between the organization and the agents interests makes the structure of the interest memory of crucial importance to the outcome of the dynamics .for example , the degree to which the selection is biased toward recent communication , or local interests , controls the strength of this feedback .global interests generate a homogeneous organization ; local interests generate a heterogeneous organization . by letting the first elements of the interest memory form the global interest and the remaining elementsform the local interest , the parameter provides full control of the strength of the feedback .the elements of the static global interests are fixed to each of the agents names , whereas the elements of local interest are updated by communication .the interest size therefore effectively works as a local to global interest bias . for , any topicis selected with equal chance , whereas larger increases the bias of proportionate local interest selection over random global interest selection .the modeling of proportional allocation of interests is not only the simplest possible mechanism ; it is also in accord with h. spencer s observation of proportionality between interest and previous experience . also related to this use of proportionate selectionis the work by h. simon to explain zipf s law for word usage , and the work presented in refs . to model emergence of money and fashions .+ * simulation . *we initiate each simulation by filling the local interest memory with random names .later , each turn agent communicates with or about another agent , the name of randomly replaces a fraction of s dynamic interest memory .that is , for values of $ ] .thereby old priorities will fade as they are replaced by new topics of interest .we denote by the number of elements of agent s interest memory that are allocated to agent . when selecting a communication topic or the direction of social navigation , agent , by choosing a random element in her interest memory , selects agent proportional to .we increment the age by one after every communication events .because every agent always has information with age 0 about itself , , the age of the information about an agent becomes older as , through communication , it percolates away from the agent in the network . assuming that agents are not lying , the age of the information is therefore a good proxy for how far it has traveled across the network . 
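A single communication event consistent with the description above might be coded as follows: the speaker picks a topic from her interest memory, the two agents keep whichever piece of information about the topic is younger (recording who supplied it in the recollection memory), and the topic overwrites a fraction of each participant's local interest memory. Which agent sets the topic and how the flexibility maps onto the overwritten fraction are assumptions.

```python
import random

def communicate(i, j, agents, flexibility):
    """One communication event between linked agents i and j (sketch only)."""
    a, b = agents[i], agents[j]
    topic = a.pick_topic()                 # the speaker sets the topic (assumption)

    # The fresher (younger) information about the topic wins, and the receiver
    # records who provided it in her recollection memory.
    if a.quality[topic] < b.quality[topic]:
        b.quality[topic] = a.quality[topic]
        b.recollection[topic] = i
    elif b.quality[topic] < a.quality[topic]:
        a.quality[topic] = b.quality[topic]
        a.recollection[topic] = j

    # Reinforcement: the topic overwrites a fraction of each participant's
    # dynamic (local) interest memory, biasing future topic and navigation choices.
    n = len(agents)
    for agent in (a, b):
        local_slots = list(range(n, len(agent.interest)))
        if not local_slots:
            continue
        n_replace = max(1, int(flexibility * len(local_slots)))
        for slot in random.sample(local_slots, n_replace):
            agent.interest[slot] = topic
```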
consequently, when two agents communicate about a third agent , and evaluate the quality of the information based on its age , the agent with the newest information tends to be closer to the third agent .this guarantees that the recollection memory works as an efficient local map of the social structure .social navigation , which corresponds to a rewiring of the network , is a slow process compared to communication .if this were not the case , random people would share reliable information with anybody and the interactions could more simply be described by a mean - field model .we therefore simulated the model with on average communications per link for each rewiring event in the system .because links are formed to friends of friends , the model captures the concept of triadic closure .moreover , because friends refer to the particular agents that have provided the most recent information about the selected agent , new links are formed on the basis of the memory rather than on the basis of the present network .for example , because the recollection memory can be out - of - date links do not always complete triangles as in fig .[ fig1](b ) and because all agents are represented in every agent s static global interest memory separated clusters can reconnect .in particular , two friends with large mutual interest in each other that by chance lose their common link tend to reestablish a direct link at some later occasion .the model of communication and social navigation presented above generates interest groups in modular networks without assuming that people are different from the beginning . the mechanism that drives the process is a feedback between interest formation and the emergence of social structures catalyzed by the flow of information .+ * model networks . * to illustrate the formation of groups , in figs .[ fig2](a - c ) we show three networks generated by interest sizes , , and respectively .that is , in the network in fig .[ fig2](a ) , there is only random global interest selection , whereas the more modular networks in figs .[ fig2](b ) and [ fig2](c ) are generated with dominating local proportionate interest selection .because an agent s interest memory is filled with other agents names proportional to their occurrence in recent local conversations , social navigation will be directed toward these agents .subsequent reinforcements generate interest groups manifested in the modular networks .to quantify how modular the networks are , we partition the network into groups so as to minimize a description of the network .given this information - theoretic partitioning of the network into modules of sizes , we define the typical module size as the average module size that a randomly selected agent is part of , to only consider true modules , we do not count modules of size 1 with agents without links .[ fig2](d ) shows the result of increasing local interest memory . after a small increase in the typical module size for small interest memory ,because fewer agents are disconnected , decreases steadily as agents increasingly focus their attention on other agents in their proximity .when close - by agents receive more attention , they will also be frequent targets of social navigation .[ fig2](e ) illustrates , this strongly affects the abundance of triads , here measured in units of the random expectation of triangles . 
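To complement the communication sketch above, the rewiring move described at the beginning of this section might look roughly as follows: the navigating agent picks a target from her interest memory, turns to the friend her recollection memory names as the freshest source about that target, and links to whoever supplied that friend with the news -- a friend of a friend according to the memories rather than the current network. Whether the referral works exactly this way, and how the replaced link is chosen, are not pinned down by the text, so both are assumptions here.

```python
import random

def social_navigation(i, agents, links):
    """One rewiring event for agent i; links is a set of frozenset({a, b}) pairs."""
    me = agents[i]
    target = me.pick_topic()                        # direction of navigation
    friend = me.recollection[target]                # who last told me about the target
    if friend is None or friend == i:
        return
    referred = agents[friend].recollection[target]  # the friend's own source of news
    new_contact = referred if referred not in (None, i) else friend

    new_link = frozenset({i, new_contact})
    if new_link in links:
        return                                      # already connected

    # Keep the total number of links fixed by dropping one of i's existing links
    # at random (this replacement rule is an assumption, not taken from the text).
    own_links = [l for l in links if i in l]
    if own_links:
        links.remove(random.choice(own_links))
    links.add(new_link)
```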
when agents shift their attention to their neighborhood , the centralized network breaks down .figure [ fig2](f ) , showing the typical size of the largest hub , , captures this transformation .overall , for increasing but small interest size , the largest hubs receive more attention , which allows the system to remain in one module .when exceeds 5 , decreases strongly , the degree distribution narrows further and the number of triangular cliques increases substantially .the topological measures quantify a transition from a scale - free network at to a modular network at .moreover , a striking feature is that as increases , there are fewer nodes without links .presumably , these `` singletons '' more easily integrate into a social context in which they have a history .the transition from a centralized to a modular structure , driven by the potential to form individual interests , is of course also manifested in the interest memory itself . to quantify this transition, we counted the typical number of individuals an agent has in her interest memory , , and the overall number of agents that receives attention from other agents , .the local social horizon , is calculated in a similar fashion as the typical module size .the denominator , with averages over , corresponds to agent s typical interest allocation in an agent .the typical number of individuals an agent has in her interest memory is simply the number of such allocations there is room for in an agent s interest memory , averaged over all agents .because only a limited amount of information is exchanged with agents outside the local social horizon , it can also be thought of as an information horizon .the global social horizon , is calculated by pooling the agents interest memories together into for the total number of elements allocated to agent . figure [ fig2](g ) shows the local horizon of the individual agent together with the global horizon of all individuals . as increases , collapseswhile remains on the order of ; the development toward social cliques is democratic , with anyone getting a fair share of attention while still allowing people to focus locally on members of their particular `` club . '' to illustrate the robustness of the model , in figs .[ fig2](d - g ) we show the results for two interest - adaptation rates , corresponding to two widely different speeds ( stubborn and flexible ) at which old priorities are replaced .we observe that even a factor 100 change in frequency of priority replacement only has a small effect on the network topologies .ultimately , at sufficiently high flexibility , such that agents have completely different interests every time they update their social connections , the modular structure breaks down .* real - world networks . * in fig .[ fig2 ] , we used a small network with relatively few links to illustrate the effect of interest - memory size on the topology of the network . for larger networks with more links, the group size will similarly be determined by the interest - memory size rather than the system size . 
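The quantitative measures used above can be written compactly as participation-ratio-style averages; since the explicit formulas are elided in the text, the forms below are assumptions chosen to match the verbal definitions (the average module size seen by a randomly chosen agent, and the effective number of agents receiving attention).

```python
import numpy as np

def typical_module_size(module_sizes):
    """Average module size experienced by a randomly selected (linked) agent.

    Modules of size 1 are excluded, following the text; the size-weighted
    average used here is an assumed form matching the verbal definition.
    """
    m = np.array([s for s in module_sizes if s > 1], dtype=float)
    return float((m ** 2).sum() / m.sum()) if m.size else 0.0

def social_horizon(counts):
    """Effective number of agents receiving attention (participation ratio).

    counts[j] is the number of interest-memory slots allocated to agent j,
    either by one agent (local horizon) or pooled over all agents (global
    horizon).  Again an assumed form consistent with the description.
    """
    c = np.asarray(counts, dtype=float)
    return float(c.sum() ** 2 / (c ** 2).sum())

# Example: a memory dominated by a few local contacts has a small horizon.
print(social_horizon([10, 5, 3, 1, 1]))   # about 2.9 effective contacts
```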
in general , at any reasonably high level of communication to social navigation ratio and low level of flexibility , the result is independent of variation in and , and the outcome of the dynamics is therefore predominantly determined by the interest size .that is , for a given set of nodes and links , our model will map each size of interest memory to social networks with a certain degree of modularity , quantified by , , and .accordingly , by fitting the interest size to match the typical module size for real - world networks , the consistency of our model assumptions can be tested directly . to execute this test , we compared the number of triangles and the maximum degree of the simulated networks with the values of the real - world network .the dynamics were initiated by communication without rewirings to let the agents adapt their memories to the network . in fig .[ fig3 ] we show the extrapolated dynamics and in table 1 we report the results for w. w. zachary s karate club network and the dolphin social network reported by d. lusseau _ et al_. . for comparison , in table 1 we have also included a very modular network , the largest component of the coauthorship network in network science compiled by m. newman , and a non - modular network , the prison network collected by j. gagnon and analyzed by j. macrae . the modular structure in the karate club network can be reproduced by , whereas the more integrated social ties of the dolphin social network are reproduced by ( see table 1 and fig .[ fig3 ] ) . because the average degree is higher in the real - world networks than in the test network in fig .[ fig2 ] , the modularity for a given is reduced from the expectations of fig .overall , table 1 and fig .[ fig3 ] show good fits to the real networks , with reproduced triangle enhancement in both real and modeled networks .the main deviation is from the high of the real karate club network , which presumably reflects a particularly high communication frequency of the administrator and the principal trainer of the club , the two hubs in the network .
table 1 ( real and simulated network statistics ; see text ) :
karate club network & 17 & 45 & 17
simulated & 34 & 78 & 30
dolphin social network & 34 & 95 & 12
simulated & 62 & 159 & 12
coauthorship network & 57 & 921 & 34
simulated & 379 & 914 & 100
prison network & 67 & 58 & 11
simulated & 67 & 142 & 2
to capture the very modular structure in the coauthorship network , local interests dominate over global interests by a factor for in the simulation .presumably geographical constraints generate the remarkably limited social horizons . by contrast , in the non - modular prison network , local and global interests were simulated with equal weights .figure [ fig3 ] illustrates two key aspects of the model : the predictive power of the dynamics and the strong coupling between the network and the agents interests .first , the networks in the top panels illustrate ( here assuming steady - state modularity ) an ensemble of future network developments for the karate club network and the dolphin social network .consequently , the model can be used to analyze the effects of social engineering and managed information flow in real - world systems .one example would be to explore the effect on an organization of changing the communication rate , by introducing interest biases , or by broadcasting certain ideas across the system .second , the middle panels of fig .[ fig3 ] show how the total interest in the black agent in the networks correlates with the number of contacts the agent has .in general , the more
links an agent has , the more attention it receives .further , the bottom panels show how the network distance between the black and the shaded agents covaries with the information divergence between their interest memories .the information divergence , also known as the kullback - leibler divergence , corresponds to the number of bits needed to determine the shaded node s interest memory , given information about the black node s interest memory .accordingly , the strong correlation between network distance and interest divergence in the bottom panels illustrates the popular saying , `` tell me who your friends are , and i will tell you who you are . ''we have used communication and social navigation to model the feedback between people s interests and the social structure .this makes it possible to investigate the interplay between fragmentation and coherence in social systems .the abstract model of human interactions quantifies communication and social navigation through 3 parameters , the communication to social navigation ratio , the interest size , and the flexibility .we find that the interest size is the predominant parameter and that agents with an increased possibility to form individual interests ( high ) drive the evolving system to a modular network with a tighter information horizon .accordingly , the model emphasizes the reinforcement of interest allocation as the key mechanism for the development of groups .our idealized model - world starts out with agents with equal properties . in spite of this homogeneity ,the dynamics generate groups manifested in networks with modular structure and agents with widely different priorities .repetition of recent communication and reinforced contacts with people one talks about lead to local agreement and global divergence .central to the model is to build and use the interest memory .here we have explored a particularly simple linear model for both the construction and the use of priorities , and shown that this is sufficient to generate heterogeneous interests .however , the model framework can be extended to more detailed networking games , including , for example , trust , cheating agents , or update of priorities based on experiences of the reliability of the obtained information .undoubtedly , real humans will have different intrinsic properties . here, without positing that people have different intrinsic properties , we have illustrated how the constraints on the information flow through a system and the potential for individuals to form heterogeneous interests affect the future development of a system . to achieve this , we extrapolated dynamics from the topology of four social networks and found a good agreement between modeled and real - world data .this substantiates our claim that one important step when trying to understand social dynamics is to understand the feedback between interest formation and the emergence of social structures catalyzed by the flow of information across the system . 1. _ network centrality _ + being central new information 2 . _ positive assortment _ + agent sinterest neighbor s interest 3 ._ group formation _ + move toward interest localization of interest without individuals with personal interests , only the first feedback is active , but it is in itself enough to give the network a broad degree - distribution .the two subsequent reinforcements generate interest groups in modular networks . 
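the information divergence used in the bottom panels can be written down directly ; the sketch below computes the kullback - leibler divergence , in bits , between two interest memories read as empirical distributions over agent names . the smoothing of zero counts is an assumption made only to keep the quantity finite .

```python
# kullback-leibler divergence (in bits) between two interest memories; the result is
# the extra bits needed to encode memory_p with a code optimized for memory_q.
from collections import Counter
from math import log2

def interest_divergence(memory_p, memory_q, n_agents, eps=1e-9):
    p = Counter(memory_p)
    q = Counter(memory_q)
    s_p, s_q = len(memory_p), len(memory_q)
    div = 0.0
    for name in range(n_agents):
        pi = p.get(name, 0) / s_p
        qi = max(q.get(name, 0) / s_q, eps)   # smoothing assumption for zero counts
        if pi > 0:
            div += pi * log2(pi / qi)
    return div

print(interest_divergence([1, 1, 2, 5], [1, 2, 2, 2], n_agents=6))
```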
together , these positive feedback mechanisms make it favorable to manipulate the spreading of interests .positive - feedback mechanisms are also inherent in the models of homophily and influence , in which agents forming groups develop a `` language '' that makes interactions more likely within groups and less likely between groups .but whereas those models see the heterogeneity of the population as driving the cultural differentiation , the model presented here instead emphasizes communication barriers in the system as the driving force behind group formation .thereby this model makes it possible to manipulate the spreading of interests and study the emerging social structures . from an altruistic perspective , increased global random broadcasting ( lower ) counteracts fragmentation and facilitates better communication across the network . from a competitive perspective , individuals exploiting global broadcasting to project their own interest onto others will gain enormously in prestige and emerge as central hubs .this work was supported by the danish national research foundation through the center for models of life .mr was also supported by national institute of general medical sciences models of infectious disease agent study program cooperative agreement 5u01gm07649 .p. lazarsfeld and r. merton , friendship as a social process : a substantive and methodological analysis , in _ freedom and control in modern society _ , edited by m. berger , t. abel , and c. h. page , pp . 18 - 66 , van nostrand , new york , 1954 .h. heclo , issue networks and the executive establishment : government growth in an age of improvement , in _ the new american political system _ , edited by a. king , pp . 87 - 124 , american enterprise institute , washington , dc , 1978 .
|
to investigate the role of information flow in group formation , we introduce a model of communication and social navigation . we let agents gather information in an idealized network society , and demonstrate that heterogeneous groups can evolve without presuming that individuals have different interests . in our scenario , individuals access to global information is constrained by local communication with the nearest neighbors on a dynamic network . the result is reinforced interests among like - minded agents in modular networks ; the flow of information works as a glue that keeps individuals together . the model explains group formation in terms of limited information access and highlights global broadcasting of information as a way to counterbalance this fragmentation . to illustrate how the information constraints imposed by the communication structure affects future development of real - world systems , we extrapolate dynamics from the topology of four social networks . social groups with different music tastes , political convictions , and religious beliefs emerge and disappear on all scales . but how do they form ? do they form because heterogeneous people search and navigate their social network to find like - minded people , or because interests are reinforced by interactions between people in social networks with modular topologies ? for example , assuming heterogeneous people who seek like - minded neighbors , t. schelling proposed a simple model to understand how segregation emerges in urban areas . later b. arthur suggested that the emergence of industrial centers is a result of positive feedback between agencies that prefer to be close to similar agencies . however , if groups form because people are inherently different and search for people who are like them , then the question becomes where the different interests come from . if , instead , it is because interests are reinforced in modular social networks , then we must first understand why social networks are modular . here we combine the two views and investigate whether group formation can occur without positing that people have different intrinsic properties : can the heterogeneity in organization and the heterogeneity in individual interests that drives the organization arise _ de novo _ ? axelrod has demonstrated in a lattice model of homophily and influence that global divergence can emerge from local convergence ; groups form , endure , and diverge because people more likely influence like - minded people and thereby gradually build interaction barriers to people that are different . recently centola _ et al_. showed that adding passive network dynamics to axelrod s model makes it less sensitive to cultural drift . we also take the dynamic network perspective , but consider a different viewpoint and a different framework . instead of passively adding and removing links based on the similarity between agents , we study how individuals actively drive a flow of information beyond nearest neighbors and make changes in the network in their quest for information . with this approach , assuming only that people are influenced by recent communication , we demonstrate that the flow of information works as a glue that maintains an integrated society , and that limited access to global information and reinforcement of local interests can generate social groups . 
to achieve this , and to better understand the effect of constrained communication on group formation , we introduce a simple agent - based network model of communication and social navigation . we use social navigation to represent people s attempts to come nearer to the information sources in the network that they find interesting . the model is inspired by everyday human conversation and captures the feedback between interest formation and emergence of social structures . taking this approach , we acknowledge that the goal of individuals to understand and agree with their closest associates can be achieved either by adjusting their interests or by adjusting their contacts . because people can only interact with a few friends , we use networks to represent the social structure in which the dynamics are embedded . if people were not limited to interactions with only a few friends , and everybody could share information with everybody else , the interactions in a society could instead be described by a mean - field model in which everybody has access to all information . by contrast , a network representation can capture the constrained flow of information through social systems , and offers an efficient way to study adaptive changes in the social structure .
|
deterministic filtering methods have been developed in the literature for linear and nonlinear systems as an alternative to stochastic techniques .they are especially applicable to situations where the noise characteristics are not stochastic and/or whose statistics are not known apriori . in such cases , the noise is typically modeled as an unknown process satisfying some bound in an or sense , where this may be interpreted as a generalization of an energy bound ; e.g. , see . in particular , the set membership state estimation approach in provides a deterministic interpretation of the kalman filter in terms of a set - valued state estimate , where the solution to the estimation problem is obtained by constructing the set of all possible states consistent with the given measurements . this set membership approach has been extended to estimation for nonlinear systems as presented in .most estimation schemes proposed for nonlinear systems both in the stochastic and the deterministic settings , use some kind of approximation schemes for the state dynamics which often consist of linearizing the state and measurement equations about a suitable operating point . indeed , this is not an issue for systems with small nonlinearities but the effect of nonlinearity induced errors needs to be considered for systems with large nonlinearities as presented in .another approach to nonlinear filtering that does not consider linearization of the underlying nonlinear dynamics is presented in using the max - plus machinery . there, the nonlinear filtering problem is recast into an optimal control problem leading to a hamilton - jacobi - bellman ( hjb ) equation .we note that the approach taken in bears a resemblance to that here in that max - plus machinery was employed for value function propagation . in both cases , the value , as a function of the state variable is semiconvex , where the space of semiconvex functions is a max - plus linear space ( or moduloid ) , for which a countable set of quadratic functions forms a basis ( or spanning set ) .however , in the case of , the basis elements used in the max - plus value function expansion were fixed . here, we adopt a modification of the more recently developed curse - of - dimensionality - free approach , where that approach to infinite time - horizon control is adapted to the time - dependent filter value function propagation . that approach has been demonstrated to be highly effective from a computational standpoint in several applications .in particular , the quadratic functions used in the truncated max - plus expansion are boot - strapped by the algorithm .we note that , in order to maintain computational tractability with this approach , one employs a max - plus optimal projection at each time - step .this is optimally performed by pruning the set of quadratics in the representation . 
in this paper , we present a set - valued state estimation approach to nonlinear filtering for systems with nonlinear dynamics and observations using min - plus methods to obtain the corresponding deterministic filter .the constraint on the system noise is described by a sum quadratic constraint ( sqc ) .a set - valued state estimation scheme is utilized to reduce the filtering problem to a corresponding optimal control problem in terms of an hjb equation .the optimization problem consists of computing the minimum quadratic supply needed to drive the system to a given terminal state subject to the sqc .the computations are achieved by applying a min - plus scheme to the optimization process where the solution operator is linear in the min - plus algebra . indeed, this scheme does not employ the linearization of the system dynamics and provides a less conservative solution in terms of the filter recursion equations .the rest of the paper is organized as follows : section [ sec : problem - formulation ] describes the formulation of a nonlinear system with the noise bounded by an sqc .section [ sec : svse ] describes the set - valued state estimation scheme for nonlinear filtering , and recasts the nonlinear filtering problem into a corresponding optimal control problem .the solution to the optimal control problem using the min - plus linearity property and the corresponding filter recursion equations which arise therefrom are discussed in section [ sec : filter - equations ] .an illustrative example is presented in section [ sec : ex ] , and the paper is concluded with remarks on future research in section [ sec : conclusions ] .consider a continuous - time system described by where is the state , is the known control input , and are the process and measurement disturbance inputs respectively , and is the measured output . and are given nonlinear functions and is a given matrix function of time .the noise associated with system - can be described in terms of an iqc as in , here , vectors are organized as columns , } ] is well - defined for any real symmetric matrix . also , is the initial state value and is the nominal initial state .a finite difference is allowed by a non - zero value of the constant . if , then .also , and represent admissible uncertainties and is a given matrix , is a given state vector , is a given constant and , are given positive - definite , symmetric matrix functions of time . in order to derive equations for a discrete - time set - valued state estimator , the continuous - time system in - needs to be discretized in reverse time .the reverse - time system formulation is used to formulate and solve the filtering problem which is recast as a subsidiary optimal control problem using the hjb equation . for further details ,see .such a discretization can be achieved by using standard techniques such as the euler or higher - order runge - kutta methods . 
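as an illustration of the reverse - time discretization step , the following fragment applies a single euler step backwards in time to a generic control - affine model ; the drift and input functions used here are placeholders , not the example system studied later in the paper .

```python
# one reverse-time euler step for dynamics of the form dx/dt = a(x) + b(x) w.
# the concrete a and b below are illustrative placeholders.
import numpy as np

def a(x):                    # drift term (placeholder nonlinearity)
    return np.array([x[0], 0.1 * x[0] + x[1]])

def b(x):                    # input matrix (placeholder)
    return np.array([[1.0], [0.0]])

def reverse_euler_step(x_next, w_next, dt):
    """given the state at time t_{k+1}, step backwards to time t_k."""
    return x_next - dt * (a(x_next) + b(x_next) @ w_next)

x_kp1 = np.array([0.2, -0.1])
print(reverse_euler_step(x_kp1, np.array([0.05]), dt=0.1))
```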
in particular , applying the euler scheme to in reverse time yields , where is the sampling time .thus , leads to the reverse - time discrete system of the form where and represent discrete - time nonlinear functions and is a given time - varying matrix .the control variable is a known quantity and will be omitted for brevity as this paper deals with only the filtering problem .finally , the iqc in is discretized to obtain an equivalent sqc of the form to be a fixed measured output for the system - with disturbances bounded by the sqc .the set valued state estimation problem consists of constructing the set ] , that \ ] ] if and only if there exists a disturbance sequence such that , where the cost functional is obtained from the sqc and is of the form with . here , the vector is the solution to the system - with input disturbance and terminal condition .hence , = \left\ { \tilde{x } \in \mathbb{r}^{n } \,:\ , \inf_{w_{(\cdot ) } } j_t(\tilde{x } , w_{(\cdot)})\;\le \ ; d \right\ } \hskip -1pt .\ ] ] the nonlinear optimal control problem for the system in - is defined by the optimization problem here , it is assumed that the infimum in exists . if not , the fulfillment of the inequality does not guarantee the reachability of the terminal state under the sqc , in which case the inequality in can only be defined as an inclusion , \subset \left\ { \tilde{x } \in \mathbb{r}^{n } \,:\ , \inf_{w_{(\cdot ) } } j_t(\tilde{x } , w_{(\cdot)})\;\le \ ; d \right\}.\ ] ] now in order to obtain the optimal state estimates we must solve for the value function .this is done by applying the dynamic programming approach from optimal control theory . in a discretized form, the value function satisfies the dynamic programming equation where denotes the value function at time given a state at time . using the notation and for the min - plus addition ( min ) and multiplication ( plus ) operators respectively, we may rewrite the above as in the following section we describe an approach to solving the above .in this section we solve the dynamic programming equation as follows .we express the value function in a particular min - plus basis ( specifically the min of quadratic forms ) .then we exploit the linearity of the dynamic programming operator in this space to obtain a recursive equation for the parameters used in this expansion .this recursion is possible owing to the fact that after propagation by the dynamic programming operator , this min - of - quadratic - forms structure is preserved .the submatrices of the quadratic form , in fact , correspond to the solution of the riccati equation for optimal filter design .we will omit the time subscripts for the state and nonlinear functions for brevity . fromand we have at which can be written in the quadratic form where ,\ ] ] , and . at , the dynamic programming recursion equation can be written in the form where we let and be time - independent for simplicity . 
substituting for from - in and using the backward time dynamics we obtain , where , for the sake of enhancing clarity , we use the notation ,\\ g_{y , x}&:= y - c(x).\end{aligned}\ ] ] the minimizing found from the following expression : .\end{aligned}\ ] ] solving for and rewriting the matrix in terms of its constituent matrices from yields ^{-1}\times [ b^t { { l^0_i}^t } + b^t n^0_i a(x ) ] \label{eq : w - opt}\end{aligned}\ ] ] which is of the form .substituting from in we obtain where ] .collecting terms in we find + { { ( w^i_c)}^t { \qweight } { ( w^i_c ) } } { \nonumber\\ } & \quad \quad\quad \quad\quad\quad\quad+ \bar{\phi}^0_i \bigg ] \bigg\ } + \frac{1}{2}{\|y - c(x)\|}_r^2 , \label{\eqnname v1expandedform}\end{aligned}\ ] ] where .consider the following quadratic approximations note the dependence of the above terms on the output ( however more specifically only depends on the sign of ) .adding and yields where is a matrix , some of whose terms depend on . here for which the following holds : such that we further have the following representations for the terms in : thus using - in for we obtain where \nonumber \\ & \qquad \qquad\qquad + { { ( w^i_c)}^t { \qweight } { ( w^i_c ) } } , \nonumber \\q^c & : = \left(\begin{array}{ccc}0 & 0 & 0 \\0 & 0 & 0 \\0 & 0 & \phi^1_i\end{array}\right).\end{aligned}\ ] ] now , to further simplify the expression , we note that for all ( where ) such that hence can be written in the form thus we have a recursive relationship in the _ coefficients of the quadratic forms _ between two consecutive time steps . by propagating these terms across multiple time steps we may evaluate the cost function at any desired ( without storing the value for at each time step ) . from the results above , after performing the minimization with respect to the set of quadratics at the current time step , the value function has the form to obtain a state estimate we minimize the above with respect to , i.e. , a set - valued estimate is obtained as a sub - level set of .note that the minimizing occurs at one of the troughs of one of the quadratics ( say ) which has the structure .\end{aligned}\ ] ] as indicated in the value function may also be associated with the real symmetric precision matrix and the state estimate as follows by comparing coefficients in and it can be seen that ^{-1 } \,{l^*}^t.\end{aligned}\ ] ] here , the matrix corresponds to the estimation error covariance matrix in the traditional kalman filter in the stochastic setting .note that in order to choose the minimizing quadratic we obtain the minimizing point for each quadratic as follows . given a form \left(\begin{array}{cc}q_{11 } & q_{12 } \\q_{21 } & q_{22}\end{array}\right ) \left[\begin{array}{c}x \\1\end{array}\right],\end{aligned}\ ] ]the minimizing for this quadratic is given by ^{-1 } [ q_{12 } + q_{21}^{t } ] .\label{eq : xstar}\end{aligned}\ ] ] the latter is true if and only if the states are free to take on any values . 
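the bookkeeping implied by the recursion above is conveniently summarized in code : the value function is stored as a finite list of quadratic - form matrices , the state estimate is read off from the quadratic with the lowest minimum via the expression for the minimizer derived above , and the list is pruned to keep the representation tractable . the propagation map itself is left abstract in this sketch , and the pruning rule shown is an illustrative choice .

```python
# min-of-quadratics bookkeeping for the deterministic filter described above.
import numpy as np

def quad_min(q):
    """minimizer and minimum of 0.5*[x;1]^T q [x;1] over unconstrained x
    (assumes the quadratic is strictly convex in x)."""
    n = q.shape[0] - 1
    q11, q12, q21 = q[:n, :n], q[:n, n:], q[n:, :n]
    x_star = -np.linalg.solve(q11 + q11.T, q12 + q21.T).ravel()
    z = np.concatenate([x_star, [1.0]])
    return x_star, 0.5 * z @ q @ z

def estimate(quadratics):
    """state estimate = argmin over x of min_i 0.5*[x;1]^T q_i [x;1]."""
    pairs = [quad_min(q) for q in quadratics]
    return min(pairs, key=lambda p: p[1])[0]

def prune(quadratics, keep):
    """keep only the quadratics with the smallest minima (illustrative pruning rule)."""
    return sorted(quadratics, key=lambda q: quad_min(q)[1])[:keep]

def filter_step(quadratics, y, propagate, keep=50):
    """one filter step: propagate each stored quadratic through the dynamic
    programming recursion (one or more new quadratics per old one, since the
    output nonlinearity is itself a min of quadratics), then prune."""
    new = [q_new for q in quadratics for q_new in propagate(q, y)]
    return prune(new, keep)
```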
in the case where the states are constrained , the minimization in must be performed in the permissible set of states .in order to demonstrate the concepts introduced in this article , we analyze a two dimensional system with linear dynamics and a nonlinear output function , defined as follows & = \left[\begin{array}{cc}0 & 0 \\1 & 0\end{array}\right ] \left[\begin{array}{c}x_1 ( t ) \\x_2 ( t ) \end{array}\right ] + \left[\begin{array}{c}1 \\0\end{array}\right ] w(t),\\ y(t ) & = \sqrt{2 } \sin(x_2 ( t ) ) + v(t),\end{aligned}\ ] ] where and are the process disturbance and measurement noise respectively .after discretization the system dynamics is & = \left[\begin{array}{cc}1 & 0 \\0.1 & 1\end{array}\right ] \left[\begin{array}{c}x_1 ( k ) \\x_2 ( k ) \end{array}\right ] \\ & \quad + \left[\begin{array}{c}0.1 \\0\end{array}\right ] \delta b_k , \label{\eqnname discdynamicsexample}\end{aligned}\ ] ] where is the increment corresponding to over the sampling time . in order to apply the deterministic filtering approachwe approximate the output function ( where the sign of the function used depends on the sign of the output as described previously in and ^ 2 $ ] as a minimum of convex functions as indicated in fig. [ figcapprox ] , [ figccsqapprox ] . + by applying the min - plus filter design approach we obtain the estimation results indicated in fig .[ figstateest1 ] , [ figoutputest1 ] .intuitively , the first state is more difficult to estimate as can be inferred from , there is a weak dependence of the second state on the first ( in addition to a one sample delay ) and the noise increment in the output has a reasonably high variance .the technique described herein provides an approach to the design of filters for systems with nonlinear dynamics and nonlinear output .its main contribution is in the utilization of the min - plus basis expansion of the value function coupled with the exploitation of the linearity of the dynamic programming operator over such a ( semi)-field . a few of the avenues along which a study of the ramifications and salient features of these methods may be pursued are : the error analysis of the dependence of the accuracy on the approximation of the output and system dynamics by convex functions , the extension to systems with uncertainty , and the development of optimal approximation techniques for approximation of any desired function via a sequence of convex functions .in addition , these methods provide a computationally tractable approach for nonlinear filtering , and the applications of this to time critical problems would also provide a fruitful direction of practical relevance while driving further insights into these classes of approaches .complexity reduction , cornices and pruning . in _ proc . of the international conference on tropical and idempotent mathematics , gl litvinov and sn sergeev ( eds . ) , contemporary math _ ,volume 495 , pages 293303 , 2009 .
|
this article approaches deterministic filtering via an application of the min - plus linearity of the corresponding dynamic programming operator . this filter design method yields a set - valued state estimator for discrete - time nonlinear systems ( nonlinear dynamics and output functions ) . the energy bounds in the process and the measurement disturbances are modeled using a sum quadratic constraint . the filtering problem is recast into an optimal control problem in the form of a hamilton - jacobi - bellman ( hjb ) equation , the solution to which is obtained by employing the min - plus linearity property of the dynamic programming operator . this approach enables the solution to the hjb equation and the design of the filter without recourse to linearization of the system dynamics/ output equation .
|
when addressing standard issues of control theory such as motion planning and stabilization , one may adopt an approach based on optimal control , e.g. , hamilton - jacobi type methods and shooting algorithms .one is then immediately facing intrinsic difficulties due to the possible presence of singular trajectories .it is therefore important to characterize these trajectories , by studying in particular their existence , optimality status , and the related computational aspects . in this paper , we provide answers to the aforementioned questions for control - affine systems , under generic assumptions , and then investigate consequences in optimal control and its applications .let be a smooth ( i.e. ) manifold of dimension .consider the control - affine system where , is a positive integer , is a -tuple of smooth vector fields on , and the control takes values in an open subset of . for and , a control ,\omega) ] . on the set of admissible controls , define the _ end - point mapping _by it is classical that is an open subset of ,\omega) ] .singular trajectories appear as singularities in the set of trajectories of joining two given points , and hence , they play a crucial role in variational problems associated to and in optimal control , as described next .let and be two points of , and . consider the following optimal control problem : among all the trajectories of steering to , determine a trajectory minimizing the _ cost _ where , , and takes values in the set of symmetric positive definite matrices . according to the pontryagin maximum principle ( see ) , for every optimal trajectory , there exists a nonzero pair , where is a nonpositive real number and is an absolutely continuous function on ] , where is the _hamiltonian _ of the system .an _ extremal _ is a -tuple solution of the system of equations ( [ hamil ] ) .the extremal is said to be _ normal _ if and _ abnormal _ if .the relevance of singular trajectories in optimal control lies in the fact that they are exactly the projections of abnormal extremals .note that a singular trajectory may be the projection of several abnormal extremals , and also of a normal extremal .a singular trajectory is said to be _ strictly abnormal _ if it is not the projection of a normal extremal .notice that a singular trajectory is of corank one if and only if it admits a unique ( up to scalar normalization ) abnormal extremal lift ; it is strictly abnormal and of corank one if and only if it admits a unique extremal lift which is abnormal . for a normal extremal , it is standard to adopt the normalization , and to derive the control as the feedback function of for every ] . for abnormal extremals ,the situation is much more involved , since equations ( [ hamil ] ) do not provide directly an expression for abnormal controls .abnormal extremals may be nonsmooth , and it is not always possible to determine an explicit expression for singular controls .indeed , it follows from ( [ hamil ] ) that ,\ i=1,\ldots , m,\ ] ] along every abnormal extremal . at that point , in order to compute the singular control , one usually differentiates iteratively ( [ hiegalzero ] ) with respect to , until the control appears explicitly ( in an affine way ) . to recover the control , an invertibility property is then required , which may not hold in general . 
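the objects entering this differentiation procedure are easy to compute symbolically for a given set of vector fields . the sketch below evaluates the hamiltonian lifts , their brackets with the drift , and the entries of the ( skew - symmetric ) goh matrix for an arbitrary illustrative three - dimensional example ; the vector fields are not taken from the text .

```python
# symbolic lie brackets and goh-matrix entries for a toy control-affine system
# with drift f0 and controlled fields f1, f2 (illustrative choices only).
import sympy as sp

x1, x2, x3, p1, p2, p3 = sp.symbols('x1 x2 x3 p1 p2 p3')
x = sp.Matrix([x1, x2, x3])
p = sp.Matrix([p1, p2, p3])

f0 = sp.Matrix([x2, x3, 0])          # drift (illustrative)
f1 = sp.Matrix([1, 0, x2])           # controlled field 1 (illustrative)
f2 = sp.Matrix([0, 1, x1 * x2])      # controlled field 2 (illustrative)

def lie_bracket(f, g):
    """[f, g](x) = dg/dx * f(x) - df/dx * g(x)."""
    return g.jacobian(x) * f - f.jacobian(x) * g

def h(f):
    """hamiltonian lift <p, f(x)> of a vector field."""
    return (p.T * f)[0, 0]

# goh matrix entries <p, [f_i, f_j](x)>, i, j = 1..m (skew-symmetric)
fields = [f1, f2]
goh = sp.Matrix(2, 2, lambda i, j: sp.simplify(h(lie_bracket(fields[i], fields[j]))))
print(goh)

# the terms <p, [f_i, f0](x)> that enter the differentiated relations h_i = 0
print([sp.simplify(h(lie_bracket(f, f0))) for f in fields])
```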
in this paper , we prove that , in a generic context , such an invertibility property is obtained with a minimal number of differentiations ( cf .theorem [ thma ] ) .this is the concept of _ minimal order _ , defined in definition [ defordre ] . here, genericity means that the -tuple belongs to an open and dense subset of the set of vector fields equipped with the whitney topology .the corank one property is also proved to hold generically .we obtain similar results in the driftless case ( cf .theorem [ srthma ] ) . in a preliminary step for deriving the above results, we establish a theorem of independent interest , asserting that any trajectory of a generic control - affine system satisfies almost everywhere on the set where the vector fields are linearly dependent ( cf .theorem [ thmrang ] ) . when considering optimal control problems , singular minimizing trajectories may exist , and play a major role , since they are not dependent on the specific minimization problem .the issue of such minimizing trajectories was already well known in the classical theory of calculus of variations ( see for instance ) and proved to be a major focus , during the forties , when the whole domain eventually developed into optimal control theory ( cf ) .the optimality status of singular trajectories was chiefly investigated in in relation to control - affine systems with , in regarding driftless systems with and in for general nonlinear control systems . in this paper, we prove that , for generic systems with ( and in the driftless case ) , and for a fixed cost , there does not exist minimizing singular trajectories ( cf .corollaries [ cor1112 ] and [ srcor1112 ] ) .we also prove that , given a fixed system , singular trajectories are strictly abnormal , generically with respect to the cost ( [ coutintro ] ) ( cf .propositions [ propstrictb11 ] and [ srpropstrictb ] ) .we then show how the abovementioned results can be used to derive regularity results for the value function and in the theory of hamilton - jacobi equations , which in turn have applications for stabilization and motion planning .this paper is organized as follows .section [ sec2 ] is devoted to the statement of the main results , firstly in the control - affine case , and secondly in the driftless case .the consequences are detailed in section [ sec3 ] , and proofs are provided in section [ sec4 ] .let be a smooth , -dimensional manifold . 
throughout the paper, denotes the set of smooth vector fields on , endowed with the whitney topology .let be a positive real number .consider the control - affine system where is an -tuple of smooth vector fields on , and the set of admissible controls is an open subset of ,\omega) ] \ \vert\ \mathrm{rank}\{f_0(x(t)),\ldots , f_m(x(t))\ } < m+1 \ } .\ ] ] note that , on the open subset of where , there is a one - to - one correspondence between trajectories and controls .in contrast , on , there is no uniqueness of the control associated to ; in particular , may be associated to both singular and nonsingular controls .this fact emphasizes the following result , which describes , in a generic context , trajectories on the subset of where .[ thmrang ] let be a nonnegative integer .there exists an open and dense subset of so that , if the -tuple belongs to , then every trajectory of the associated control - affine system verifies in addition , for every integer , the set can be chosen so that its complement has codimension greater than .[ remidep ] at the light of the previous result , one can choose the admissible control on such that , for every , consists of any -tuple so that in particular , on any subinterval of , the trajectory is constant , and the control can be chosen constant as well .[ remtrivial ] a trajectory is said to be _ trivial _ if it reduces to a point ; otherwise it is said to be _it is clear that , if ] .recall that a singular trajectory is the projection of an abnormal extremal . for ] , differentiating ( [ s - affine ] ) , one gets , almost everywhere on ] is the skew - symmetric matrix given by since is skew - symmetric , is even , and equation ( [ sss - affine ] ) rewrites , almost everywhere on ] , define the matrix as augmented with the row , and the -dimensional vector as augmented with the coefficient .then , from equations ( [ au = b ] ) and ( [ sss - affine2 ] ) , there holds , almost everywhere on ] , if is even , resp . , if is odd . on the opposite , for arbitrary ,a singular trajectory is said to be a _goh trajectory _if it admits an abnormal extremal lift along which the goh matrix is identically equal to .[ thma ] let be a positive integer .there exists an open and dense subset of so that , if the -tuple belongs to , then every nontrivial singular trajectory of the associated control - affine system is of minimal order and of corank one .in addition , for every integer , the set can be chosen so that its complement has codimension greater than .[ corgoh2 ] with the notations of theorem [ thma ] and if , there exists an open and dense subset of so that every control - affine system defined with an -tuple of does not admit nontrivial goh singular trajectories .we keep here the notations of the previous sections .consider the control - affine system and the quadratic cost given by where and .here , denotes the set of smooth mappings on , taking values in the set of real positive definite matrices . for and ,define the optimal control problem we next state two sets of genericity results , depending whether the cost or the control system is fixed . 
[ propstricta ] fix and .there exists an open and dense subset of such that every nontrivial singular trajectory of a control - affine system defined by an -tuple of is strictly abnormal for the optimal control problem ( [ opt ] ) .corollary [ corgoh2 ] together with proposition [ propstricta ] yields the next corollary .[ cor1112 ] fix and .let be an integer .there exists an open and dense subset of so that the optimal control problem ( [ opt ] ) defined with an -tuple of does not admit nontrivial minimizing singular trajectories . in both previous results ,the set can be chosen so that its complement has arbitrary codimension . we endow with the whitney topology .an tuple of is said to verify the _ lie algebra rank condition _ if the lie algebra generated by is of dimension at every point of .[ propstrictb ] fix so that the lie algebra rank condition is satisfied and the zero control is not singular .let .then , there exists an open and dense subset of such that every nontrivial singular trajectory of the control - affine system associated to the -tuple is strictly abnormal for the optimal control problem ( [ opt ] ) defined with and .assuming that the zero control is not singular is a necessary hypothesis .indeed , the fact that a control is singular is a property of the sole -tuple and is independent of the cost . on the other hand ,every trajectory associated to the zero control is always the projection of the normal extremal of any optimal control problem ( [ opt ] ) . as a consequence ,if the zero control is singular , such a trajectory can not be strictly abnormal . in order to handle the case of a singular zero control ,it is therefore necessary to consider more general quadratic costs such as where , and .[ propstrictb11 ] fix satisfying the lie algebra rank condition and .then , there exists an open and dense subset of such that every nontrivial singular trajectory of the control - affine system associated to the -tuple is strictly abnormal for the optimal control problem ( [ casyst])-([coutug11 ] ) defined with and .let be a positive real number .consider the driftless control - affine system where is an -tuple of smooth vector fields on , and the set of admissible controls is an open subset of ,\omega) ] \ \vert\ \mathrm{rank}\{f_1(x(t)),\ldots , f_m(x(t))\ } < m \ } .\ ] ] [ srthmrang ] let be a positive integer .there exists an open and dense subset of so that , if the -tuple belongs to , then every trajectory of the associated driftless control - affine system verifies in addition , for every integer , the set can be chosen so that its complement has codimension greater than .let be a singular trajectory , projection of an abnormal extremal .similarly to the previous section , we define , for ] , differentiating ( [ srs - affine ] ) , one gets , almost everywhere on ] is the skew - symmetric matrix given by since is skew - symmetric , is even , and equation ( [ srsss - affine ] ) rewrites , almost everywhere on ] , define the matrix as augmented with the row .then , from equations ( [ srau = b ] ) and ( [ srsss - affine2 ] ) , there holds , almost everywhere on ] , if is odd , resp ., if is even . 
on the opposite , for arbitrary ,a singular trajectory is said to be a _goh trajectory _if it admits an abnormal extremal lift along which the goh matrix is identically equal to .[ srthma ] let be an integer such that .there exists an open and dense subset of so that , if the -tuple belongs to , then every nontrivial singular trajectory of the associated driftless control - affine system is of minimal order and of corank one .in addition , for every integer , the set can be chosen so that its complement has codimension greater than .[ srcorgoh2 ] with the notations of theorem [ srthma ] and if , there exists an open and dense subset of so that every driftless control - affine system defined with an -tuple of does not admit nontrivial goh singular trajectories .consider the optimal control problem associated to the driftless control - affine system with the quadratic cost given by where and . for and ,define the optimal control problem we next state genericity results with respect to the control system , with a fixed cost .[ srpropstricta ] fix and .there exists an open and dense subset of such that every nontrivial singular trajectory of a driftless control - affine system defined by an -tuple of is strictly abnormal for the optimal control problem ( [ sropt ] ) .corollary [ srcorgoh2 ] together with proposition [ srpropstricta ] yields the next corollary .[ srcor1112 ] fix and .let be an integer .there exists an open and densee subset of so that the optimal control problem ( [ sropt ] ) defined with an -tuple of does not admit nontrivial minimizing singular trajectories . in both previous results , the set be chosen so that its complement has arbitrary codimension .we also have have a genericity result with respect to the cost , with a fixed control system .[ srpropstrictb ] fix so that the lie algebra rank condition is satisfied .let .then , there exists an open and dense subset of such that every nontrivial singular trajectory of the driftless control - affine system associated to the -tuple is strictly abnormal for the optimal control problem ( [ sropt ] ) defined with and . in the driftless case ,the control is always singular , but corresponds to a trivial trajectory .therefore , in opposition to the control - affine case , it is not necessary to add the linear term in the cost .consider the optimal control problem ( [ opt ] ) , associated to the control - affine system ( [ casyst ] ) and the cost ( [ coutug ] ) .the value function is defined by for every ( with , as usual , ) .we assume in the sequel that all data are analytic .the regularity of is closely related to the existence of nontrivial minimizing singular trajectories starting from .it is proved in that , in the absence of minimizing singular trajectories , the value function is continuous and subanalytic ( see e.g. for a definition of a subanalytic function ) . for driftless control - affine systems and ,the value function coincides with the square of a sub - riemannian distance ( see for an introduction to sub - riemannian geometry ) .in particular , in this case , the value function is always continuous , but the trivial trajectory is always minimizing and singular . moreover ,if there is no nontrivial minimizing singular trajectories , then the value function is subanalytic outside ( see ) .this situation holds for generic distributions of rank greater than or equal to three ( see ) .the results of section [ subsec2.3 ] have the following consequence on the regularity of . 
with the notations of corollary [ cor1112 ] , and if in addition the functions , , and the vector fields of the -tuple in are analytic, then the associated value function is continuous and subanalytic on its domain of definition .if there exists a nontrivial minimizing trajectory , the value function may fail to be subanalytic or even continuous .for example , consider the control - affine system in given by and the cost .the trajectory , associated to the control , is a nontrivial minimizing singular trajectory , and the value function has the asymptotic expansion , near the point , ( see for details ) .hence , it is not continuous , nor subanalytic , at the point . in the driftless control - affine case , by using results of section [ subsubsec2.4.2 ] , we derive the following similar consequence . with the notations of corollary [ srcor1112 ] , and if in addition the functions , , and the vector fields of the -tuple in are analytic , then the associated value function is subanalytic outside .assume that the assumptions of the previous subsection hold .it is standard ( see ) that the value function is a viscosity solution of the hamilton - jacobi equation where .conversely , the viscosity solution of ( [ hjbeq ] ) with analytic dirichlet - type conditions is subanalytic , as soon as the corresponding optimal control does not admit minimizing singular trajectories ( see ) . using the results of the previous sections, this situation holds generically if ( and , similarly for driftless control - affine systems , if ) . as a consequence , the analytic singular set of the viscosity solution , i.e. , the subset of where is not analytic , is a ( subanalytic ) stratified manifold of codimension greater than or equal to one ( see for more details on the subject ) .since is also the locus where characteristic curves intersect , the abovementioned property turns out to be instrumental for the global convergence of numerical schemes for equation ( [ hjbeq ] ) ( see ) .indeed , the analytic singular set must be as `` nice '' as possible in order to integrate energy functions on the set of characteristic curves . for a driftless control - affine system verifying the lie algebra rank condition ,there exist general stabilizing strategies stemming from dynamic programming . as usual ,the stabilizing feedback is computed using the gradient of the value function for a suitable optimal control problem .of course this is only possible outside the singular set , and one must device another construction for the feedback on .let us mention two such strategies , the first one providing an hybrid feedback ( see ) , and the second one a smooth repulsive stabilizing ( srs ) feedback ( see ) .both strategies crucially rely on the fact that is a stratified manifold of codimension greater than or equal to one . as seen before, the latter fact holds generically in the analytic category for . on the other hand ,the absence of singular minimizing trajectories is the basic requirement for the convergence of usual algorithms in optimal control ( such as direct or indirect methods , see e.g. ) .we have proved that this situation holds generically for control - affine systems if , and for driftless control - affine systems if . as a final application ,consider a driftless control - affine system verifying the lie algebra rank condition . according to proposition [ srpropstrictb ] , it is possible to choose a ( generic ) cost function such that all singular trajectories are strictly abnormal .combining that fact with ( * ? ? 
?* theorem 1.1 ) , we deduce that there exists a dense subset of such that every point of is reached by a unique minimizing trajectory , which is moreover nonsingular . as a consequence ,a shooting method with target in will converge .that fact may be used for solving ( at least approximately ) motion planning problems .every trajectory of the control - affine system is also a trajectory of the driftless control system with .therefore , theorem [ thmrang ] follows from theorem [ srthmrang ] , whose proof is provided next .let be a trajectory of the driftless control system , with .consider the set defined by ( [ idep ] ) .we argue by contraposition , and assume that contains a subset of positive measure such that for . since lebesgue points of are of full lebesgue measure , we assume that is continuous on .* there exists such that * are linearly independent , and thus , there exist real numbers , , , such that therefore , where ; * .* , * is a linear combination of iterated lie brackets of , of length smaller than , and of iterated lie brackets of , of length smaller than or equal to .[ lem - recursif ] consider a set of positive measure and so that on , i.e. , for every , there exist real numbers , , such that for , define by then , there exists a set of positive measure such that (x(t))\in \mathrm{span}\{f_1(x(t)),\ldots , f_k(x(t))\ } \quad \hbox { on } j'.\ ] ] using remark [ remrangouvert ] , there exist , , so that , for every , the vectors , , , , , span . thus , there exist smooth functions , , defined on , such that for in an open neighborhood of . for , define , for ( this notation is consistent with ( [ h00 ] ) ) .the are absolutely continuous on and differentiable everywhere on . for , there holds on and therefore , it follows that on a subset of full measure ( cf .* lemma p. 177 ) ) .for , using that , and , it holds (x(t ) ) & = & dg_t\circ f_t(x(t ) ) \\ & = & \sum_{i=1}^k ( db_i(x(t)).\dot{x}(t ) ) f_i(x(t ) ) + \sum_{i = k+1}^n ( db_i(x(t)).\dot{x}(t ) ) e_i(x(t ) ) \\ & = & \sum_{i=1}^k \dot{\beta}_i(t ) f_i(x(t ) ) + \sum_{i = k+1}^n \dot{\beta}_i(t ) e_i(x(t)).\end{aligned}\ ] ] on , the second sum of the right - hand side of the last equation vanishes , and the lemma follows . set ] . applying lemma [ lem - recursif ] , there exists a subset of positive measure such that on . 
combining proposition [ lemroutine ] with routine transversality arguments( see for instance and ) , it follows that the -tuple belongs to a closed subset of of codimension greater than or equal to .theorem [ srthmrang ] follows .[ keylemma1 ] there exists an open and dense subset of so that , if the -tuple belongs to , then every singular trajectory of the associated control - affine system is of minimal order .in addition , for every integer , the set can be chosen so that its complement has codimension greater than .[ keylemma2 ] there exists an open and dense subset of so that , if the -tuple belongs to , then every nontrivial singular trajectory of the associated control - affine system is of corank one .in addition , for every integer , the set can be chosen so that its complement has codimension greater than .it is therefore enough to show the existence of an open and dense subset of such that , if , then every singular trajectory of verifies item ( ii ) of definition [ defordre ] .then , by choosing , the conclusion of lemma [ keylemma1 ] follows .consider a singular trajectory of , admitting an abnormal extremal .assume that there exists \setminus i_{dep}(x(\cdot)) ] , the vector fields are linearly independent .the remaining part of the argument consists of reformulating the problem in order to follow the chain of arguments in the proof of ( * ? ? ?* theorem 2.4 ) concerning the case of everywhere linearly independent vector fields . for that purpose , we distinguish the cases even and odd .assume first that is even .as in ( [ gbar ] ) , define , for , from ( [ au = b ] ) , we have , for , since the ranks of both and are even , they must be equal , for , and hence , the rank of is smaller than on .this is exactly the starting point of the proof of ( * ? ? ?* lemma 3.8 ) .the machinery of then applies and we deduce that the -tuple belongs to a subset of arbitrary codimension in whose complement contains an open and dense subset of .it amounts to show that , implies .this follows from the fact that if , then , and thus is orthogonal to the range of since is skew - symmetric . using lemma [ lemrank ] , the rank of is less than on is exactly the starting point of the proof of ( * ? ? ?* lemma 3.9 ) .the machinery of then applies and we deduce that the -tuple belongs to a subset of arbitrary codimension in whose complement contains an open and dense subset of .we argue by contraposition .consider a nontrivial singular trajectory of , with .assume that admits two abnormal extremal lifts }(\cdot)) ] such that , for some ] and }(t_0) ] and}(\cdot) ] . since is nontrivial , it follows from remark [ remtrivial ] that there exists a nonempty subinterval of \setminus i_{dep}(x(\cdot)) ] of positive measure such that is not of rank if is odd , respectively , is not of rank if is even .following exactly the proofs of lemmas 3.8 and 3.9 in , the -tuple belongs to a subset of arbitrary codimension in whose complement contains an open and dense subset .we only treat the control - affine case , the argument for the driftless control - affine case being identical .we argue by contraposition .consider a nontrivial singular trajectory of , with .assume that admits on the one part a normal extremal lift }(\cdot)) ] .let us introduce some notations . for , let be a multi - index of .the length of is and is the vector field defined by , \dots],f_{l_k}].\ ] ] a multi - index with consecutive occurrences of the index is denoted as . 
for every multi - index of and ] , }(t ) = \sum_{l=1}^m u_l(t ) h_{ll}^{[n]}(t)\label{eq:00},\\ \frac{d}{dt } h_{l}^{[a]}(t ) = \sum_{l=1}^m u_l(t ) h_{ll}^{[a]}(t)\label{eq:11}.\end{aligned}\ ] ] recall that , according to the pontryagin maximum principle , there holds }(t ) \\ \vdots \\ h_m^{[n]}(t)\end{pmatrix},\ ] ] and , for every ] .since the trajectory is nontrivial , there exists an open interval ] with respect to , one gets , by using ( [ eq:00 ] ) , ( [ eq:11 ] ) , and ( [ inconnus ] ) , that }(t ) = ( u_i(t))^s h^{[a]}_{ji^s}(t ) + r_s(t),\ ] ] where is polynomial in }(t) ] , , , with different from and .fix . since and , we are in a position to apply routine transversality arguments .it follows that the -tuple belongs to a closed subset of of arbitrary codimension .proposition [ propstricta ] follows .[ [ proofs - of - propositionspropstrictb - propstrictb11-andsrpropstrictb ] ] proofs of propositions [ propstrictb ] , [ propstrictb11 ] , and [ srpropstrictb ] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ we first prove proposition [ propstrictb11 ] and argue by contraposition .consider a nontrivial singular trajectory of .assume that admits on the one part a normal extremal lift }(\cdot)) ] . from the pontryagin maximum principle , there holds , for , }_p(t)-\alpha_p(x(t)),\ ] ] where the and the are respectively the coefficients of and of . note that the s are smooth functions of the time. in that case , for , and is never vanishing on .moreover , for , on , i.e. }_p(t) ] on , one gets }(t ) & = h_{l0}^{[a]}(t ) + \sum_{l=1}^m u_l(t ) h^{[a]}_{ll}(t ) \\ & = h_{l0}^{[a]}(t ) + \sum_{1{\leqslant}l{\leqslant}p{\leqslant}m } c_{lp}(t ) q^{lp}(x(t ) ) , \end{split}\ ] ] where }_{ll}(t ) ] if . if , then }_{l0}(t)\neq 0 ] on , }(t )= c_{lj}(t ) l^n_{f_t } q^{jl}(x(t ) ) + r_n(t),\ ] ] where is a linear combination of , with , in and if , and of , with , in , and . applying routine transversality arguments, it follows that belongs to a closed subset of of arbitrary codimension .proposition [ propstrictb11 ] is proved .l. rifford , _ the stabilization problem : agas and srs feedbacks _ , in `` optimal control , stabilization , and nonsmooth analysis '' , lectures notes in control and information sciences * 301 * , springer - verlag , heidelberg , 2004 , 173184 .
|
when applying methods of optimal control to motion planning or stabilization problems , some theoretical or numerical difficulties may arise , due to the presence of specific trajectories , namely , singular minimizing trajectories of the underlying optimal control problem . in this article , we provide characterizations for singular trajectories of control - affine systems . we prove that , under generic assumptions , such trajectories share nice properties , related to computational aspects ; more precisely , we show that , for a generic system with respect to the whitney topology , all nontrivial singular trajectories are of minimal order and of corank one . these results , established both for driftless and for control - affine systems , extend results of . as a consequence , for generic systems having more than two vector fields , and for a fixed cost , there do not exist minimizing singular trajectories . we also prove that , given a control system satisfying the lie algebra rank condition , singular trajectories are strictly abnormal , generically with respect to the cost . we then show how these results can be used to derive regularity results for the value function and in the theory of hamilton - jacobi equations , which in turn have applications for stabilization and motion planning , from both the theoretical and the implementation points of view .
|
fueled primarily by the promise of optical imaging beyond the diffraction limit , metamaterials , artificial composites with the unit cell size well below the optical wavelength , now for more than a decade remain one of the primary foci of modern research .however , while metamaterial devices , from the originally promised superlens based on the negative index media , to its younger challenger the hyperlens that uses hyperbolic metamaterials , have been demonstrated in experiment in broad range of frequencies and even extended to other wave phenomena such as e.g. ultrasound imaging , they have yet to find their way to broad acceptance in real - world applications .the inherent curvature of the metamaterial device that is necessary for optical magnification ( either in the general formfactor of the device or in its unit cell topology ) , leading to the requirement on precise positioning of the object at subwavelength scale , remains a major practical obstacle to a wider acceptance of metamaterials - based imaging systems .more important , however , is the result of substantial loss in metamaterial structures , that is generally a direct and inevitable consequence of using metallic ( or other negative permittivity ) components that remain necessary for either the negative - index metamaterials or hyperbolic media . faced with an order of magnitude intensity loss as the cost of producing a magnified optical image of a subwavelength pattern, one must either accept a prohibitive cost in the signal - to - noise ratio , or substantially increase the intensity of the field that illuminates the target .the latter option however does not fit well with the constraints of biomedical imaging , where high optical fields may severely damage biological tissue .while directly related to the fundamental restrictions on optical metamaterials via the kramers - kronig relations , this problem however allows a natural and straightforward solution : the metamaterial component of a super - resolution imaging setup must be located `` in front '' , rather that `` after '' , the object so that it s probed by the already attenuated field , thus avoiding damage .this is precisely the case in the structured illumination approach to optical imaging , where a grid pattern , usually generated through the interference of incident light , is superimposed on the specimen. the imaging resolution is then determined by the periodicity of the illuminating field , and for illumination from free space this limit reduces to one quarter of the wavelength , .however , as we demonstrate in the present paper , hyperbolic metamaterials that are free from the conventional diffraction limit and can support optical patterns with the periodicity that is orders of magnitude below the free - space wavelength , can dramatically improve the imaging resolution in structured illumination .the resulting _ hyper - structured illumination _ approach to super - resolution imaging is the main objective of this work .in a conventional transparent medium with the refractive index , the frequency generally limits the magnitude of the wavenumber to the value of . however , the situation is dramatically different in the world of hyperbolic metamaterials , where the opposite signs of the dielectric permittivity components in two orthogonal directions ( see the inset in fig .1(a ) ) lead to a hyperbolic dispersion of -polarized propagating waves with the wave numbers unlimited by the frequency . 
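as a rough illustration of this dispersion, the sketch below estimates the two principal permittivities of a silver-glass multilayer in the effective-medium limit, together with the large-wavevector cone angle measured from the stacking axis. the drude parameters for silver, the 50% metal fill fraction and the lossless type-II limit used for the angle are illustrative assumptions, not values taken from this work.

```python
import numpy as np

# effective-medium sketch for a metal/dielectric multilayer stacked along z.
# the silver drude parameters and the 50% fill fraction are rough,
# illustrative assumptions.
def eps_silver(lam_nm, eps_inf=5.0, hw_p=9.0, hw_gamma=0.02):
    hw = 1239.84 / lam_nm                      # photon energy in eV
    return eps_inf - hw_p**2 / (hw * (hw + 1j * hw_gamma))

def layered_emt(eps_m, eps_d, f=0.5):
    eps_t = f * eps_m + (1.0 - f) * eps_d              # in-plane component
    eps_z = 1.0 / (f / eps_m + (1.0 - f) / eps_d)      # out-of-plane component
    return eps_t, eps_z

for lam in (465.0, 550.0, 650.0):                      # wavelengths in nm
    et, ez = layered_emt(eps_silver(lam), 2.25)        # glass: eps ~ 2.25
    if et.real < 0.0 < ez.real:                        # type-II hyperbolic band
        theta = np.degrees(np.arctan(np.sqrt(-et.real / ez.real)))
        print(f"{lam:5.0f} nm   eps_t = {et.real:6.2f}   eps_z = {ez.real:5.2f}"
              f"   cone angle ~ {theta:4.1f} deg")
    else:
        print(f"{lam:5.0f} nm   not hyperbolic within this approximation")
```

with these illustrative parameters the cone opens by roughly twenty degrees between 465 nm and 650 nm, which is the kind of strong frequency dependence exploited in what follows.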
for , the hyperbolic iso - frequency surface of ( [ eq : hyperbola ] ) asymptoticallyapproaches a cone , with as the direction of the group velocity corresponding to a given wavevector coincides with the normal to the iso - frequency surface , at large wavenumbers in a uniaxial hyperbolic medium the angle between the group velocity and the material symmetry axis ( see the inset to fig .1(a ) ) approaches the value of . as a result , for a point source placed at the edge of such medium , the emission intensity diagram forms a conical pattern ( see fig . 1 ( b)-(d ) , calculated for a practical realization of hyperbolic media using silver - glass layered metamaterial ) , with the `` thickness '' of the expanding conical `` sheet '' of electromagnetic radiation well below the free - space wavelength .this phenomenon is now well - established , and has recently been successfully used for subwavelength optical lithography . as strong material dispersion is inherent to hyperbolic materials , a change in the frequency will result in a substantial variation of the emission cone angle , as illustrated in fig .1(a ) for three different wavelengths . while generally considered detrimental to metamaterial applications ( since this would immediately push the system away from whatever resonant condition is was designed for ) , it is precisely this property that is essential to the concept of the hyper - structured illumination .while a single source ( or an opening ) at the bottom of the hyperbolic `` substrate '' only illuminates a small part of the object plane at its `` top '' , a change in the electromagnetic frequency would allow to sweep the entire object plane ( see fig . 1 ( b ) - ( d ) ) .as a result , a complete subwavelength image can be obtained in a single hyper - spectral measurement .this behavior is further illustrated in fig .2 , which shows the object formed by four silicon nanowires on top of the silver - glass metamaterial ( silver and glass layers , each nm thick see fig .2 ( a ) ) , illuminated at the wavelengths of nm ( panel ( b ) ) and nm ( panel ( c ) ) . note the subwavelength localization of the electromagnentic field near the object , and a dramatic difference in the intensity distribution for different wavelengths that allows to clearly `` resolve '' distinct elements of the target despite their close spacing .a finite size of the unit cell of the metamaterfial , , introduces an upper limit on the ( bloch ) wavenumber , equal to , as well as quantitative corrections to the dispersion and to the geometry of the iso - frequency surface , leading to its deviations from the hyperbolic shape .however , the overall topology of the iso - frequency surface , together with its dependence on the wavelength , are generally preserved 1 ( a ) . as a result, one will still observe the general profile of the conical illumination pattern , as well as its evolution with the wavelength that are consistent with the predictions of the effective medium theory , as clearly seen in fig . 
1 ( b)-(d )however , as the size of the unit cell limits the maximum wavenumber that can be supported by the propagating waves in the hyperbolic metamaterial , it defines the ultimate resolution limit of the hyper - structured illumination : with the scale of the unit cell in high - quality hyperbolic metamaterials that have already been demonstrated , on the order of a few nanometers , eqn .( [ eq : hsi_limit ] ) allows for optical imaging of deeply subwavelength objects , to the point of resolving large individual molecules .while a hyperbolic metamaterial supports propagating electromagnetic waves with the characteristic length scale up to its unit cell size , these waves will evanescently decay outsize this medium . as a result, the imaging resolution of the proposed hyper - structured illumination setup will rapidly deteriorate with ( see fig .2(a ) ) , approaching the diffraction - limited value of for . fora 3d object structured at a deeply subwavelength scale , the proposed imaging system based on hyper - structures illumination , will resolve the motti projection of its tree - dimensional permittivity distribution : note that the integral transformation in eqn .( [ eq : motti ] ) is independent of the illumination wavelength , and does not introduce an additional length scale : the effective averaging in ( [ eq : motti ] ) is defined by the distance to the object plane . as a result , eqn .( [ eq : motti ] ) reduces the resulting resolution to the corresponding depth of fields .this is illustrated in fig .3 , which shows the motti projection ( panel ( b ) ) for several solid spheres near the object plane .when illuminated from the slits at the `` bottom '' of the hyperbolic substrate ( see fig .[ fig:2](a ) ) , an object with the dielectric permittivity placed near the image plane , will induce scattered light with the far - field amplitudes for the and polarizations \right ) \right .\nonumber \\ & + & \left .r_s\left({\bf k'}\right ) \left ( { \bf p}_{\bf k ' } \cdot \left[\hat{{\bf n } } \times { \bf k'}\right ] \right ) \right\ } , \label{eq : mtr_s}\\ e_p\left({\bf k } ; \omega\right ) & = & \frac{4 \pi^2 \omega}{c k_n k_\tau } \left\ { \left ( \left[{\bf p}_{\bf k } \times { \bf k}\right ]\cdot \left[\hat{{\bf n } } \times { \bf k}\right ] \right ) \right .\nonumber \\ & - & \left .r_p\left({\bf k'}\right ) \left ( \left[{\bf p}_{\bf k ' } \times { \bf k'}\right ] \cdot \left[\hat{{\bf n } } \times { \bf k'}\right ] \right ) \right\ } , \label{eq : mtr_p}\end{aligned}\ ] ] where and correspond to the tangential and normal to the surface of the hyperbolic substrate components of the far - field wavevector , is the normal to the surface unitary vector , and are the reflection coefficients at the air - substrate interface for the - and -polarizations , , and is the spatial fourier transform of the polarization of the object , , when it is illuminated by the incident field . for an object placed close to the image plane and with the vertical dimension well below the free - space wavelength ( ) , the polarization of the object can be related to the spatial fourier transform of its motti projection ( [ eq : motti ] ) : given the hyper - spectral measurement of the far field , the system of coupled linear equations ( [ eq : mtr_s]),([eq : mtr_p ] ) can be used to calculate . 
in particular ,when the hyper - structured illumination pattern at the image plane originates from a periodic array of the slits at the bottom of the substrate , separated by the distances of and in the and directions , then each spatial fourier component of the object profile is only coupled to a discrete set of other spatial components , that are different by an integer multiple of the reciprocal lattice vectors and : where and \cdot { \bf e}_{n_x , n_y}\left(\omega\right ) \right ) , \nonumber \\ % \label{eq : gs } \\g^{(p)}_{n_x , n_y}\left ( { \bf k } ; \omega \right ) & = & \frac{c}{\omega } \\left[\hat{{\bf n } } \times { \bf k}\right ] \cdot % \nonumber \\ % & \cdot & \left [ { \bf e}_{n_x , n_y}\left(\omega\right ) \times \left({\bf k } - r_p \{ \bf k ' } \right ) \right ] , \nonumber\end{aligned}\ ] ] with given the desired resolution , this implies a summation by terms in either direction . even with nm and diffraction - limited slit - to - slit spacing , we find thus reducing the numerical complexity of solving eqn .( [ eq : invmtr ] ) to that of inverting a matrix . on a modern cpu, such computational task can be completed within a 0.1 ms time frame .thus , despite the need for data post - processing that is inherent to structured illumination methods , the proposed approach can be successfully used for real - time imaging of the dynamical processes at sub - ms time scales .the linear inversion procedure of the previous section is robust to the presence of the noise and material absorption which is illustrated in fig . 4 , where panels ( a ) and ( b ) present the gray - scale images of the original object and its image obtained in a noisy environment . for a quantitative comparison , panel ( c )shows the dimensions of the original object ( left scale ) and the recovered motti projection ( right scale ) .note that the nm separation , which is less than of the shortest illumination wavelength , is clearly resolved despite a substantial amount of noise in the system .the hyper - structures illumination approach introduced in the present paper , combines deeply subwavelength resolution with the relative simplicity of _ planar _ geometry that would facilitate surface functionalization for cell targeting and imaging , essential for biomedical applications .compared to the existing super - resolving extensions of structured illumination , such as the plasmonic structured illumination , the proposed approach is not limited to particular resonant wavelength ( such as e.g. the surface plasmon wavelength ) , and is not sensitive to the material absorption . in terms of the actual fabrication of the required metamaterial substrate ,it s well within the limits of already demonstrated capabilities , and experimental demonstration of the proposed imaging method will be straightforward .
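to give a feel for the post-processing cost quoted above, the snippet below simply times a dense solve standing in for the inversion of the coupled harmonic equations. the matrix dimension is an assumption chosen for illustration; the actual size implied by the slit spacing and the target resolution is not recoverable from this copy of the text.

```python
import time
import numpy as np

# time a dense linear solve standing in for the inversion of the coupled
# harmonic equations; the dimension n is an illustrative assumption.
rng = np.random.default_rng(0)
n = 300
a = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(a, b)
elapsed = time.perf_counter() - t0
print(f"dense {n} x {n} solve: {elapsed * 1e3:.2f} ms,"
      f" residual {np.linalg.norm(a @ x - b):.2e}")
```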
|
we present a new approach to super - resolution optical imaging , based on structured illumination in hyperbolic media . the proposed system allows for planar geometry , has unlimited field of view , and is robust with respect to optical noise and material losses .
|
power laws appear in a large variety of settings throughout nature and often signify that there is a simple process at the origin of what appears to be a very complex phenomena .examples of the variety of settings in which power laws appear are the gutenberg - richter law for the size of earthquakes , the allometric scaling laws that appear throughout biology , and paretos s law of income distributions .power laws have also been witnessed within the brain in electroencephalographic ( eeg ) and magnetoencephalographic ( meg ) recordings while studying a wide variety of brain function .the signals recorded outside the skull by these techniques represent the global activity of a large amount of cortical and subcortical tissue and give rise to a range of exponents .much more local measurements of cerebral activity may be recorded by a single microelectrode . while the emphasis of these measurements is usually focused on the spiking activity of single cells within the vicinity of the recording electrode , local field potentials ( lfps ) , which comprise the much slower , background of electrical activity ,may also be extracted from the signal .we found that electrophysiological recordings , taken from pharmacologically intractable epilepsy patients with microelectrodes implanted in a variety of cerebral areas , display a surprisingly universal power law in the frequency spectrum of lfp activity ( fig .[ fig : humelec ] ) .a power spectrum of this sort is said to display the statistics of brownian noise since it has the same scaling exponent as a 1d random walk .however , it is far from clear what the underlying mechanism is that gives rise to these statistics . to address this question, we developed a general method for modeling the lfp from what we refer to as neuronal shot noise , that allows one to study the microscopic origin ( i.e. 
, single neuron activity ) of the power law dependence in the power spectrum .we propose two quite different processes that could both give rise to the observed dependence .the first involves the uncorrelated firing of single neurons that display very slow dendro - synaptic decay in the extracellular field which they generate .the second possibility involves the correlated firing of a single neuron which displays either no activity ( down state ) or very rapid spiking ( up state ) .we end with a discussion of the up - down states ( uds ) suggested by our model and how they compare to experimentally observed uds within the cortex .we recorded local field potentials from the cerebral cortex of 10 subjects with pharmacologically intractable epilepsy ( 4 males ; years old ) , implanted with chronic electrodes to localize the seizure focus for possible surgical resection .electrode locations were based exclusively on clinical criteria and were verified by mri or by computed tomography co - registered to preoperative mri .each electrode probe had nine micro - wires ( platinum / iridium , diameter ) protruding from its tip , eight high - impedance recording channels ( typically ) and one low - impedance reference with stripped insulation .the differential signal from the micro - wires was amplified using a 64-channel neuralynx system , filtered between 1 and 9000 hz , and sampled at 28 khz .we recorded from a total of 684 micro - wires ( 106 in the frontal lobe , 546 in the temporal lobe , 16 in the parietal lobe , 16 in the occipital lobe ) .recordings lasted for 10 min while subjects were awake and at rest with eyes open .all studies conformed to the guidelines of the medical institutional review board at ucla . for analysis , the data was down - sampled to 7 khz using an anti - aliasing filter .the power spectral density was estimated by applying welch s method to consecutive 5-sec segments and subsequently averaging over the entire 10 min ( fig .the scaling parameter was determined as the slope of a least - square linear fit of the double - logarithmic power spectrum . to diminish the influence of amplifier roll - off ,the linear fit was restricted to a frequency range of 1 to 400 hz ( fig .1a , inset ) .figure 1b shows the scaling parameters averaged across different micro - wires for four different regions of the cerebral cortex along with their standard deviation .note that in all four regions the scaling parameter is close to a value of , indicating a universal scaling behavior of local electrical brain activity . to the best of our knowledge ,this is the first time such a universal feature of the lfp has been reported in humans .the microscopic generation of the local field potential ( lfp ) may be formulated in a similar way to that of shot noise , originally described by schottky to account for the noise across an electrical resistor .this may be seen by writing the extracellular potential generated by neurons , at a given spatial location within the brain , as follows : which is exactly how one quantitatively models shot noise . herethe function accounts for the temporal profile of the extracellular field generated by neuron while represents the activity of that neuron and may be explicitly written as , where is the dirac delta function . 
from this definition , we see that the function may be thought of as analogous to the spike train with firing activity occurring at times for neurons .note , this model does not require that the neurons generate action potentials , it only assumes a stereotyped profile for the electrical field generated by each neuron which repeats at times ( see fig . [ shot ] ) . representing neuronal shot noise .a function , representing the extracellular field associated with the neuron , occurs at times governed by the statistics of . ]we will assume that the relevant neural activity has reached a steady - state such that the autocorrelations are independent of . by the wiener - kinchin theorem ,the autocorrelation function is simply related to the power spectrum by fourier transformation . from eq .[ shotnoise ] , we may write the power spectrum as where , , and are the fourier transforms of their respective temporal functions .to solve for the power spectrum as written in eq .[ pow ] would require us to know the location of each neuron involved in generating the lfp , the extracellular field produced by each neuron , and the decay of that field through the neuronal medium . while we have carried out such biophysical calculations in the past for single neurons , we are here only concerned with understanding the source of the behavior of the power spectrum , not in reconstructing the lfp .the scale invariant nature of greatly simplifies our problem since we need not concern ourselves with any constants that arise from the details mentioned above . to clarify this point , let us assume that the power spectrum , where is a constant .we can solve for the coefficient of the power by plotting the log of both sides of this equation , .the power dependence is given by the slope and is unaffected by the constant offset . since the power law dependence of is independent of overall constants , we may neglect the previously mentioned biophysical details and focus on finding solutions to eq .[ pow ] that have a functional dependence . in general, this problem is still quite difficult ; however , there are two limits which allow a simple , analytical solution .we will now discuss these two cases ._ case i : slow dendro - synaptic decay ._ the simplest case to consider is that the spiking statistics are independent between neurons , and that the spiking of each neuron is an independent poisson process . in this case where is the average firing rate of the neuron and is the kronecker delta function .we can now ask when the power spectrum satisfies the solution requires , whose fourier transform is a heaviside step function . this answer is a bit unrealistic since it implies that the field generated by the cell does not decay .a more realistic solution would be to assume a form such as which has fourier transform . in the limit of slow decay , , a neuron with an extracellular field of this form , firing with poisson statistics , would give rise to brownian noise in the lfp . in this case , the behavior originates from the steep onset of the extracellular field .the rise time of an action potential may occur within a fraction of a millisecond , which could account for a sharp onset , while the decay of the dendro - synaptic extracellular field may last for as long as a second .the functional form of the decay does not affect these results , so long as the cell takes much longer to return to baseline than it took to spike ._ case ii : up - down states . 
_the second case that we consider is the limit of a sharply peaked extracellular field . in this case, we may treat as a constant , and we will assume that the activity of different neurons are either uncorrelated ( ) or synchronous ( ) .the spike timing of a single neuron , however , may show a temporal correlational structure .these assumptions lead to a power spectrum since all the frequency behavior is contained within the statistics of , and we are assuming that all cells are active with the same statistics , we need to look for a sequence of spikes that have individual spike timing correlations of the form since linear time correlations are consistent with frequency correlations .this is the same linear in scaling as that of a 1d random walk and is at the origin of the term brownian noise .since is analogous to the spike train of each neuron , we need to formulate a binary sequence that shares the correlational structure of a random walk .a simple way to generate a binary sequence representing white noise is to pick a random number at each timestep and then apply a threshold such that all values above the threshold are set to one , and all below to zero .brownian noise may be created by integrating a white noise signal . however , it is not obvious how to apply a similar thresholding procedure to convert the resulting analog signal into a digital one . behavior .the slope of the dashed line is -2 . ] an alternative procedure that will generate brownian noise is given by setting up a telegraph process . in this casea binary sequence is generated by constructing a two - state system , ( and ) where , at each timestep , the probability of making the transition ( ) is given by ( ) .the autocorrelation function for such a process may be explicitly written as in the limit of equal transition rates , , and low probability of making a transition , , eq .[ telegraph ] reduces to which has the desired linear in statistics of a random walk .figure [ brownnoise ] displays a binary sequence generated by a telegraph process and the dependence of its power spectrum .the telegraph process gives rise to periods of sustained , rapid activity followed by intervals of inactivity .this results in collective oscillations that display a much lower frequency than the rapid firing witnessed during depolarization .the result is a pattern of behavior reminiscent of up - down states common in cortex .studies of the lfp and single neuron spiking activity , combined with current source - density analysis , suggest that lfps are primarily the result of dendritic activity distributed over a large region of the cortex .lfps are therefore believed to provide a measure of the local processing and input to a given region of the brain .we developed a very simple model to explain our key experimental finding , a decay in the local field potential recorded from individual microelectrodes implanted into human cortex . in particular , we showed two examples of how biologically realistic , single neuron activity , parametrized by the temporal shape of the resulting extracellular fields and the statistics of cellular activity , can give rise to power law behavior within the lfp . 
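the statement above is straightforward to check numerically. the sketch below generates a symmetric telegraph signal with a small switching probability per step and estimates its spectral exponent with welch averaging and a log-log fit, mirroring the procedure applied to the recordings; all parameter values are illustrative.

```python
import numpy as np
from scipy.signal import welch

# symmetric two-state telegraph process: a small switching probability per
# step gives a lorentzian spectrum that falls off as 1/f^2 well above the
# corner frequency. parameters are illustrative.
rng = np.random.default_rng(1)
fs = 7000.0                     # sampling rate in hz (as for the down-sampled data)
n = 2**22                       # roughly ten minutes of signal
p_switch = 1.0e-3               # switching probability per time step

flips = rng.random(n) < p_switch
x = np.cumsum(flips) % 2        # 0/1 telegraph signal

f, pxx = welch(x - x.mean(), fs=fs, nperseg=int(5 * fs))   # 5-s segments

band = (f >= 10.0) & (f <= 400.0)                          # above the corner frequency
slope = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)[0]
print(f"fitted spectral exponent: {slope:.2f} (a value close to -2 is expected)")
```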
in case i , we showed how a population of cells , each displaying a sharp onset of activity and a much slower decay of the extracellular field , could give rise to a brownian power spectrum .the time course of dendritic activity is often much longer than that of an action potential .this is in line with the above statement concerning the origin of the lfp .however , the sharpness of the temporal onset of activity is what gives rise to the power - law behavior .one mechanism that might account for this result would be the rapid initiation of an action potential , followed by slow dendro - synaptic decay . of course , this model not only assumes that the spiking statistics of each neuron is poisson , but that there are no correlations among neurons .it is not uncommon to find the firing rate of single neurons uncorrelated with the averaged behavior of the local population , however , this is not always the case . in case ii , we explored the opposite extreme from case i , that of rapidly firing , single neurons with linear temporal correlations .this behavior is similar in nature to so - called up - down states seen in cortical neurons . during periods of sleep , quiet awake behavior , or under a variety of anesthetics ,spontaneous activity of neocortical neurons display slow 0.1 to 2 hz oscillations called up - down states ( uds ) .these states appear to be characteristic of slow - wave sleep and are thought to be involved in the consolidation of long - term memories and in learning .the uds of cortical pyramidal neurons are highly synchronized and may be clearly seen in lfp recordings of the cortex .the up states are characterized by a sustained depolarization that leads to rapid , 20 - 70 hz spiking activity while the down states show periods of hyper - polarized inactivity .it should be pointed out that our recordings were performed in the awake resting state in the human cortex , whereas uds and ultra - slow oscillations have been described only in states of low vigilance such as slow - wave sleep and anesthesia in animal studies .it is therefore unlikely that the power law scaling behavior observed in our recordings would be caused exclusively by the mechanisms illustrated in case ii .nevertheless , it is encouraging that this extreme analytical case of a power law scaling gives rise to phenomena that are actually observed in mammalian brains .the true origin of the behavior probably lies somewhere in - between the two limiting cases we have considered here .unfortunately , an analytic evaluation of eq .[ shotnoise ] when there is an explicit time dependence in both the extracellular field ( ) and the firing statistics ( ) is , in general , difficult .however , for a known set of and , a numerical evaluation of eq .[ shotnoise ] is straightforward .this formalism should , therefore , serve as a starting point in modeling power - law dependencies in the power spectrum of the lfp and in connecting this property to the underlying single neuron activity .funding was provided by a fellowship from the sloan - swartz foundation to j.n .milstein , by a marie curie fellowship from the european commission to f. mormann , and by ninds , darpa , nsf and the mathers foundation .b. gutenberg and c. richter , _ seismicity of the earth and associated phenomena _, princeton university press , princeton , n.j . , ( 1954 ) ; b. gutenberg and c. richter , bull .32 * , 163 ( 1942 ) .k. linkenkaer - hansen , v. v. nikouline , j. m. palva , and r. j. ilmoniemi , j. neurosci .* 21 * 1370 ( 2001 ) ; e. 
pareda , a. gamundi , r. rial , and j. gonzalez , neuro .* 250 * , 91 ( 1998 ) ; w. j. freeman , m. d. holmes , b. c. burke , and s. vanhatalo , clin . neurophys . * 114 * , 1053 ( 2003 ) . j. n. milstein and c. koch , neural comput .* 20 * , 2070 ( 2008 ) ; g. r. holt and c. koch , j. comput . neurosci .* 6 * , 169 ( 1999 ) ; c. gold , d. a. henze , c. koch , and g. buzsaki , j. neurophysiol . * 95 * , 3113 ( 2006 ) .for instance , one may limit the random walk to positive numbers and set a threshold .a binary sequence is generated by adding a at each timestep that the walker remains below the threshold .if the threshold is reached , a is added to the binary sequence , the walker is reset to the origin , and the process continues .unfortunately , the resetting procedure clears the memory of the random walker , and we again generate a flat , white noise spectrum .y. isomura , a. sirota , s. ozen , s. montgomery , k. mizuseki , d. a henze , and g. buzsaki , neuron * 52 * , 871 ( 2006 ) ; c. c. h. peterson , t. t. g. hahn , m. mehta , a. grinvald , and b. sakmann , proc .u s a. * 100 * , 13638 ( 2003 ) ; m. steriade , i. timofeev , and f. grenier , j. neurophysiol .* 85 * , 1969 ( 2001 ) .
|
we demonstrate that human electrophysiological recordings of the local field potential ( lfp ) from intracranial electrodes , acquired from a variety of cerebral regions , show a ubiquitous 1/f^2 ( brownian ) scaling of the power spectrum . we develop a quantitative model that treats the generation of these fields in an analogous way to that of electronic shot noise , and use this model to specifically address the cause of this brownian noise . the model yields two analytically tractable solutions , both displaying brownian noise : 1 ) uncorrelated cells with a sharp onset of activity whose extracellular fields decay slowly , and 2 ) rapidly firing , temporally correlated cells that generate up - down states .
|
with the goal of better understanding the physics of glasses and of glass formation , there has been a continuing search for empirical correlations among various aspects of the phenomenology of glassformers .the most distinctive feature of glass formation being the rapid increase with decreasing temperature of the viscosity and relaxation times , correlations have essentially been looked for between the characteristics of the latter and other thermodynamic or dynamic quantities .angell coined the term `` fragility '' to describe the non - arrhenius temperature dependence of the viscosity or ( alpha ) relaxation time and the associated change of slope on an arrhenius plot .he noticed the correlation between fragility and amplitude of the heat - capacity jump at the glass transition .earlier , the adam - gibbs approach was a way to rationalize the correlation between the viscosity increase and the configurational or excess entropy decrease as one lowers the temperature .since then , a large number of empirical correlations between `` fragility '' and other properties of the liquid or of the glass have been found : for instance , larger fragility ( _ i.e. _ , stronger deviation from arrhenius behavior ) has been associated with ( i ) a stronger deviation of the relaxation functions from an exponential dependence on time ( a more important `` stretching '' ) , ( ii ) a lower relative intensity of the boson peak , ( iii ) a larger mean square displacement at , ( iv ) a smaller ratio of elastic to inelastic signal in the x - ray brillouin - spectra , ( v ) a larger poisson ratio and ( vi ) a stronger temperature dependence of the elastic shear modulus , , in the viscous liquid . for useful as they may be to put constraints on proposed models and theories of the glass transition , such correlations can also be misleading by suggesting causality relations where there are no such things .it seems therefore important to assess the robustness of empirically established correlations . in this respect, we would like to emphasize a number of points that are most often overlooked : \1 ) fragility involves a variation with temperature that _ a priori _ depends on the thermodynamic path chosen , namely constant pressure ( isobaric ) versus constant density ( isochoric ) conditions . on the other hand , many quantities that have been correlated to fragility only depend on the thermodynamic state at which they are considered : this is not the case for the variation of the excess entropy or of the shear modulus , nor for the jump in heat capacity measured in differential scanning calorimetry , which are all path dependent , but the other properties are measured either at tg , the glass - transition temperature , or in the glass , where they also relate to properties of the liquid as it falls out of equilibrium at tg ( there may be a residual path dependence due to the nonequilibrium nature of the glass , but it is quite different from that occuring in the liquid ) . _ which fragility then , isobaric or isochoric , should best be used in searching for correlations ? 
_\2 ) the quantities entering in the proposed correlations are virtually always considered at tg .this is the case for the commonly used measure of fragility , the `` steepness index '' , which is defined as the slope of the temperature dependence of the alpha - relaxation time on an arrhenius plot with t scaled by tg .tg is of course only operationally defined as the point at which the alpha - relaxation time ( or the viscosity ) reaches a given value , say 100 seconds for dielectric relaxation .the correlated properties are thus considered at a given relaxation time or viscosity ._ what is the fate of the proposed correlations when one studies a different value of the relaxation time ? _\3 ) almost invariably , comparisons involve properties measured at atmospheric pressure , for which the largest amount of data is available . since , as discussed in the preceding point , the properties are also considered at a given relaxation time , an obvious generalization consists in studying the validity of the reported correlations under `` isochronic '' ( _ i.e. _ , constant relaxation time ) conditions , by varying the control parameters such that the relaxation time stays constant . _how robust are then the correlations when one varies , say , the pressure along an isochrone ? _ in light of the above , our contention is that any putative correlation between fragility and another property should be tested , as far as possible , by varying the reference relaxation time , by varying the thermodynamic state along a given isochrone , and by changing the thermodynamic path along which variations , such as that defining the fragility , are measured a better solution would certainly be to correlate `` intrinsic '' properties of glassformers that do not depend on the chosen state point or relaxation time .a step toward defining such an `` intrinsic '' fragility has been made when it was realized that the temperature and the density dependences of the alpha - relaxation time and viscosity of a given liquid could be reduced to the dependence on a single scaling variable , , with an effective activation energy characteristic of the high - temperature liquid .evidence is merely empirical and is supported by the work of several groups for a variety of glassforming liquids and polymers .the direct consequence of this finding is that the fragility of a liquid defined along an isochoric path is independent of density : the isochoric fragility is thus an intrinsic property , contrary to the isobaric fragility .although one could devise ways to characterize the isochoric fragility in a truly intrinsic manner , independently of the relaxation time , the common measure through the steepness index ( see above ) still depends on the chosen isochrone . 
in looking for meaningful correlations to this isochoric steepness index , it is clear however that one should discard quantities that vary with pressure ( or equivalently with temperature ) under isochronic conditions .as we further elaborate in this article , the stretching parameter characterizing the shape of the relaxation function ( or spectrum ) is _ a priori _ a valid candidate , as there is some experimental evidence that it does not vary with pressure along isochrones .the aim of the present work is to use the knowledge about pressure and temperature dependences of the liquid dynamics to test the robustness of proposed correlations between fragility and other properties .this is a continuation of the work presented in reference , where the focus was mainly on correlations between fragility of the liquid and properties of the associated glass . in this paperwe specifically consider the correlation between fragility and stretching .the reported correlation between the two is indeed one of the bases of the common belief that both fragility and stretching are signatures of the cooperativity of the liquid dynamics .we present new dielectric spectroscopy data on the pressure dependence of the alpha relaxation of two molecular glassforming liquids , dibutyl phthalate ( dbp ) and m - toluidine .we express the alpha - relaxation time as a function of the scaling variable and evaluate the density dependence of as well as the isochoric fragility .we also study the spectral shape and its pressure dependence along isochronic lines .we spend some time discussing the methodological aspects of the evaluation of the fragility and of the stretching from experimental data , as well as that of the conversion from to data .this provides an estimate of the error bars that one should consider when studying correlations .finally , by combining our data with literature data we discuss the robustness of the correlation between fragility and stretching along the lines sketched above .the paper is structured as follows .section [ sec : back ] introduces some concepts and earlier developments that are central for the discussion . in section [ sec : exp ] we present the experimental technique .section [ sec : relax ] is devoted to the pressure , temperature and density dependence of the relaxation time . in section [ sec : spec ] we analyze the spectral shape and its pressure and temperature dependence .finally , in section [ sec : disc ] we combine the current results with literature data to assess the relation between fragility and stretching , stressing the need to disentangle temperature and density effects .two appendices discuss some methodological points .the fragility is a measure of how much the temperature dependence of the alpha - relaxation time ( or alternatively the shear viscosity ) deviates from an arrhenius form as the liquid approaches the glass transition .the most commonly used criterion is the so called steepness index , where the derivative is evaluated at and is expressed in seconds .conventionally , the liquid is referred to as strong if is small , that is , and fragile if is large , meaning roughly above 60 . 
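the construction of the steepness index, and the fact that it becomes independent of density along isochores once the scaling of eq. [ eq : scaling ] holds, can be checked with a short numerical sketch; the scaling form and every parameter value below are invented for illustration and are not fits to any of the liquids discussed in this work.

```python
import numpy as np

# steepness index m = d log10(tau) / d(tg/t) at t = tg (tau = 100 s),
# evaluated along isochores generated from an assumed scaling form
# log10(tau) = g(rho**x / t). all numbers are illustrative.
def log10_tau(rho, temp, x=2.5):
    big_x = rho**x / temp
    return -14.0 + 600.0 * big_x + 9.0e4 * big_x**2

def isochoric_steepness(rho, tau_g=100.0):
    temps = np.linspace(50.0, 500.0, 20001)
    lt = log10_tau(rho, temps)
    tg = np.interp(np.log10(tau_g), lt[::-1], temps[::-1])   # lt decreases with t
    h = 1.0e-3                                                # step in tg/t around 1
    return (log10_tau(rho, tg / (1 + h)) - log10_tau(rho, tg / (1 - h))) / (2 * h)

for rho in (1.00, 1.05, 1.10):
    print(f"rho = {rho:.2f}   m_rho = {isochoric_steepness(rho):.1f}")
# the three values coincide: with tau a function of rho**x / t alone, the
# isochoric steepness index does not depend on which isochore is used.
```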
in the original classification of fragilityit was implicitly assumed that the relaxation time ( or viscosity ) was monitored at constant ( atmospheric ) pressure , as this is how the vast majority of experiments are performed .the conventional fragility is therefore the ( atmospheric pressure ) isobaric fragility , and , as indicated in eq .[ eq : angel ] , the associated steepness index is evaluated at constant pressure . however , the relaxation time can also be measured as a function of temperature along other isobars , and this will generally lead to a change in .moreover , it is possible to define an isochoric fragility and the associated index , , obtained by taking the derivative at constant volume rather than at constant pressure .the two fragilities are straightforwardly related via the chain rule of differentiation , when both are evaluated at the same point .the isochoric fragility , , describes the intrinsic effect of temperature , while the second term on the right hand side incorporates the effect due to the change of density driven by temperature under isobaric conditions .it can be shown that the above relation can be rewritten as where the unconventional is the isochronic expansivity , _i.e. _ , the expansivity along a line of constant alpha - relaxation time ( the line being a specific isochrone ) .the above result is purely formal and contains no assumptions .the implication of the result is that is larger than if and .it is well known that in general .the fact that is negative arises from the empirical result that the liquid volume always decreases when heating while following an isochrone . within the last decadea substantial amount of relaxation - time and viscosity data has been collected at different temperatures and pressures / densities . on the basis of the existing data ,it is reasonably well established that the temperature and density dependences of the alpha - relaxation time can be expressed in a scaling form as . it is seen directly from eq .[ eq : scaling ] that , when evaluated at , has the same value at all densities ( ) if is defined as the temperature where the relaxation time has a given value ( e.g. , s ) . exploiting this fact , it is easy to show that the scaling law implies that the isochoric fragility is independent of density .for instance , the isochoric steepness index , when evaluated at a corresponding to a fixed relaxation time , is given by the fact that the relaxation time is constant when is constant means that the isochronic expansion coefficient is equal to the expansion coefficient at constant .using this and the general result , it follows that which inserted in eq .[ eq : mpmrho ] leads to where , and are evaluated at .when liquids have different isobaric fragilities , it can be thought of as due to two reasons : a difference in the intrinsic isochoric fragility , , or a difference in the relative effect of density , characterized by and the parameter .we analyze the data in this frame .the following considerations hold for isochoric and isobaric conditions alike .the or subscript are therefore omitted in this section .the fragility is usually characterized by a criterion evaluated at , _i. e. _ , the temperature at which the relaxation time reaches =100 s-1000 s. the same criterion , e.g. 
the steepness index , can however equally well be evaluated at a temperature corresponding to another relaxation time , and this is also found more often in literature , mainly to avoid the extrapolation to long times .so defined , the `` fragility '' for a given system can be considered as a quantity which is dependent of the relaxation time at which it is evaluated : where defines the temperature .( is a special case with s-1000s . )an ( extreme ) strong system is a system for which the relaxation time has an arrhenius behavior , where is a temperature and density independent activation energy ( measured in units of temperature ) .inserting this in the expression for the relaxation - time dependent steepness index ( eq . [ eq : mtau ] ) gives which gives the value ( assuming ) and decreases to as the relaxation time is decreased .this means that even for a strong system the steepness index is relaxation - time dependent . in order to get a proper measure of departure from arrhenius behaviorit could therefore be more adequate to use the steepness index normalized by that of a strong system : will take the value at all relaxation times in a system where the relaxation time has an arrhenius behavior .such a normalized measure of fragility has been suggested before .for instance , olsen and coworkers have introduced the index where is a temperature dependent activation energy defined by .the relation between the steepness index and the olsen index is takes the value 0 for strong systems at all relaxation times .typical glass forming liquids display an approximate arrhenius behavior at high temperatures and short relaxation times ; in this limit and it increases as the temperature dependence starts departing from the arrhenius behavior .typical values of i at are ranging from i=3 to i=8 , corresponding to steepness indices of m=47 to m=127 .finally , we note in passing that relaxation - time independent measures of fragility can be formulated through fitting formulae : this is the case for instance of the fragility parameter in the vogel - tammann - fulcher ( vtf ) formula or of the frustration parameter in the frustration - limited domain theory .the dielectric cell is composed of two gold - coated electrodes separated by small teflon spacers . the distance between the spacers is mm and the area is 5.44 giving an empty capacitance of 16 pf .the electrodes are totally immersed in the liquid sample , which is sealed from the outside by a teflon cell .the electric contacts are pinched through the teflon .the compression is performed using liquid pentane , which surrounds the teflon cell from all sides .the teflon cell has one end with a thickness of mm in order to insure that the pressure is well transmitted from the pentane to the liquid sample .the pressure is measured by using a strain gauge .the cooling is performed by a flow of thermostated cooling liquid running inside the autoclave . the temperature and the temperature stabilityare monitored by two pt100 sensors placed 2 cm and 0.3 cm from the sample .the temperature is held stable within degree for a given isotherm . the temperature during the time it takes to record a spectrum is stable within degree .the setup insures a hydrostatic pressure because the sample is compressed from all sides .it is moreover possible to take spectra both under compression and decompression . 
by doing so and returning to the same condition after several different thermodynamic paths, we have verified that there was no hysteresis in the pressure dependence of the dynamics .this serves to confirm that the liquid is kept at thermodynamic equilibrium at all stages .the capacitance was measured using a hp 4284a lcr - meter which covers the frequency range from 100 hz to 1 mhz .the low - frequency range from 100 hz to 1 hz is covered using a sr830 lockin .the samples , dibutyl phthalate ( dbp ) and m - toluidine , were acquired from sigma - aldrich .the m - toluidine was twice distilled before usage .the dbp was used as acquired .liquid m - toluidine was measured on one isotherm at 216.4 k. dbp was measured along 4 different isotherms , 205.5 k , 219.3 k , 236.3 k and 253.9 k , at pressures up to 4 kbar .dbp was moreover measured at different temperatures along two isobars : atmospheric pressure and 230 mpa .the pressure was continuously adjusted in order to compensate for the decrease of pressure which follows from the contraction of the sample due to decreasing temperature .it is of course always possible to reconstruct isobars based on experiments performed under isotherm conditions .however , such a procedure mostly involves interpolation of the data , which is avoided by performing a strictly isobaric measurement . for dbpwe have obtained relaxation - time data at times shorter than s by using the high - frequency part of the spectrum and assuming time - temperature and time - pressure superposition ( ttps ) . although ttps is not followed to a high precision ( see section [ sec : shapedbp ] ) ,the discrepancies lead to no significant error on the determination of the relaxation time .this is verified by comparison to atmospheric - pressure data from the literature ( see figure [ fig : dbpisob ] ) .the dbp data at atmospheric pressure is shown in figure [ fig : dbpisob ] along with literature results .k , when defined as the temperature at which s. we also present the data taken at mpa in this figure .it is clearly seen that increases with pressure .an extrapolation of the data to s gives k for mpa , corresponding to kmpa .this corresponds well to the pressure dependence of ( at s ) reported by sekula _ , based on measurements taken at pressures higher than mpa .the dependence is however stronger than that reported by fujimori _ based on isothermal calorimetry , for which kmpa .this indicates that the calorimetric and the dielectric relaxations may have somewhat different dependences on pressure . in figure[ fig : fragi ] we illustrate the determination of and of the steepness index for the atmospheric - pressure data , using the part of the data of figure [ fig : dbpisob ] whith a relaxation time longer than a millisecond . along with the data we show the vtf fit from sekula _ et al ._ extrapolated to low temperatures , which gives k and . we have also performed a new vtf fit restricted to the data in the s s region .the result of this fit yields k and . 
finally , we have made a simple linear estimate of as a function of in the temperature range shown in the figure .this linear slope fits the data close to better than any of the vtf fits .the corresponding glass transition temperature and steepness index are k and .this illustrates that the determination of is rather robust while this is less so for the steepness index .this latter depends on how it is obtained , and the use of extrapolated vtf fits can lead to an overestimation .( of course , a vtf fit made over a very narrow range , e.g. s , will agree with the linear fit , because it becomes essentially linear over the restricted range . ) the fragility of dbp has earlier been reported to be , based on the data of dixon _ et al .we take as a representative value .the relaxation - time data along four different isotherms are displayed as a function of pressure in figure [ fig : dbpp ] . in order to separate the relative effects of density and temperatureit is convenient to express the relaxation time as a function of density and temperature rather than pressure and temperature . to do this , we need the pressure and temperature dependences of the density .however , for liquid dbp such data is only available at high temperature . in order to extrapolate the equation of state to low temperaturewe have applied the following scheme . when calculated from the data in ref . , the expansion coefficient shows a weak decrease with decreasing temperature .we therefore assume that the temperature dependence of is linear over the whole temperature range and integrate with respect to temperature to obtain the density along the atmospheric - pressure isobar . in the whole temperature range of ref . , the pressure dependence of the density is well described by fits to the tait equation with temperature - dependent adjustable parameters `` c '' and `` b '' ( which are directly related to the compressibility and its first - order pressure derivative ) . we have linearly extrapolated the temperature dependence of these parameters and used the tait equation to calculate the pressure dependence along each isotherm . extrapolating the derivatives rather than the density itselfis expected to lead to smaller errors on the latter .in addition , we have checked that this procedure gives physically reasonable pressure and temperature dependences of the expansivity and of the compressibility .figure [ fig : dbprho ] shows the density dependence of the alpha - relaxation time along the four different isotherms , the atmospheric - pressure isobar and the 230 mpa isobar .we have also included the room - temperature dielectric data of paluch _for dbp the viscosity data and the dielectric relaxation time do not decouple under pressure , and we have therefore also included the room - temperature viscosity data of cook _ et al ._ . in figure[ fig : dbpscaling ] we show the data of figure [ fig : dbprho ] plotted as a function of the scaling variable , choosing for the value that gives the best collapse for the data of this work .this corresponds to testing the scaling in equation [ eq : scaling ] by assuming that is a power law .the data taken at low density collapse quite well with , while this is not true for the data of paluch taken at densities higher than approximately 1.2 g/ .it is possible to make all the data collapse by allowing to have a stronger density dependence at higher densities . 
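the collapse procedure described above can be sketched as follows: for a trial exponent x, log10(tau) is plotted against rho**x / t and the quality of the collapse is scored, here by the residual of a low-order polynomial fit; the synthetic data set and the scoring choice are illustrative assumptions.

```python
import numpy as np

# density-scaling collapse: choose the exponent x so that log10(tau)
# plotted against rho**x / t falls on a single curve. the synthetic data
# below (four "isotherms") and the polynomial scoring are illustrative.
rng = np.random.default_rng(2)
x_true = 2.7

def log10_tau_true(rho, temp):
    big_x = rho**x_true / temp
    return -14.0 + 550.0 * big_x + 1.1e5 * big_x**2

temps = np.repeat([205.0, 220.0, 236.0, 254.0], 8)          # four isotherms
rhos = np.tile(np.linspace(1.04, 1.20, 8), 4)               # eight densities each
lt = log10_tau_true(rhos, temps) + rng.normal(0.0, 0.05, temps.size)

def collapse_residual(x, order=3):
    big_x = rhos**x / temps
    z = big_x / big_x.mean()                  # rescale to keep the fit well conditioned
    coeffs = np.polyfit(z, lt, order)
    return np.sum((np.polyval(coeffs, z) - lt) ** 2)

trial_x = np.linspace(1.0, 5.0, 401)
best = trial_x[np.argmin([collapse_residual(x) for x in trial_x])]
print(f"exponent giving the best collapse: x ~ {best:.2f} (generated with x = {x_true})")
```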
in figure[ fig : dbpscaling2 ] we show the data as a function of , where we have constructed the density dependence of in order to get a good overlap of all the data ( we did not look for the best collapse , but merely evaluated the change of the isochronic expansivity : see section [ sec : back ] ) .the resulting density dependence of is shown in figure [ fig : dbpscaling2 ] along with the power law .note that the quality of the data collapse depends only on the density dependence of not on its absolute value .the constructed has an apparent `` power - law '' exponent that increases from 1.5 to 3.5 with density in the range considered . in any case , the absence of collapse in figure [ fig : dbpscaling ] can not be explained by errors in estimating the pvt data : this is discussed in more detail in appendix [ sec : densap ] . as a last note regarding the -scaling in figure [ fig : dbpscaling2 ] , we want to stress that we can not test the scaling ( eq . [ eq : scaling ] ) in the density range above where there is only one set of data .( this is why we did not attempt to fine tune to find the best collapse , see above . ) indeed , with a unique set of data in a given range of density it is always possible to construct in this range to make the data overlap with data taken in other density ranges .we have determined the ratio between the isochoric fragility and the isobaric fragility at atmospheric pressure by calculating along the isochrone of s and inserting it in eq .[ eq : mpmrho ] .this leads to , when . in figure[ fig : dbpisob2 ] we show the isobaric data taken at atmospheric pressure and at 230mpa scaled by their respective .no significant pressure dependence of the isobaric fragility is observed when going from atmospheric pressure to 230 mpa , which is consistent with the result of reference .the pressure independence of is connected to the relatively low value of ( typical values are 1.1 - 2 ) ; is pressure independent and the ratio can not be lower than one ( see eq .[ eq : mpmrho ] ) , so that can at most decrease by from its atmospheric - pressure value .such a change would almost be within the errorbar of the determination of from the data at ( see the discussion earlier in this section ) . ) of liquid dbp at atmospheric pressure and at 230 mpa ( arrhenius plot ) .data at atmospheric pressure from other groups are also included : unpublished data from nielsen , the vtf fit of shown in the range where it can be considered as an interpolation of the original data , and data taken from figure 2(a ) in reference . ] with relaxation times longer than a millisecond ( symbols ) . also shownare the vtf fit from reference extrapolated to low temperatures ( dashed - dotted line ) , a new vtf fit made by using data in the s region ( dashed line ) , and estimated slope of the data in the long - time region ( full line ) .the s estimated from these three methods are very similar , whereas the fragility varies significantly from to . ] ) as a function of pressure along 4 different isotherms ( log - linear plot ) . ] plotted as a function of . 
]\a ) plotted as a function of , with increasing as increases .( b ) density - dependent activation energy ( dashed line ) used in the scaling variable for collapsing data in ( a ) ( the associated increases from 1.5 to 3.5 in the density range under study ) .we also display the power law giving the best scaling , , at low density ( full line ) ., title="fig : " ] b ) plotted as a function of , with increasing as increases .( b ) density - dependent activation energy ( dashed line ) used in the scaling variable for collapsing data in ( a ) ( the associated increases from 1.5 to 3.5 in the density range under study ) .we also display the power law giving the best scaling , , at low density ( full line ) ., title="fig : " ] , k and (230mpa)=200k . as in figure[ fig : dbpisob ] , data from other groups are also included : unpublished data from nielsen , the vtf fit of shown in the range where it can be considered as an interpolation of the original data , and data taken from figure 2 ( a ) in reference . ]the glass transition temperature at atmospheric pressure is k ( for s ) and the isobaric fragility based on dielectric spectra is reported to be .( there has been some controversy about the dielectric relaxation in m - toluidine , see reference and references therein . ) in the inset of figure [ fig : mtrho ] we show the pressure - dependent alpha - relaxation time at k. extrapolating the data to s leads to mpa , which is in agreement with the slope , kmap , reported for the calorimetric glass transition in .this indicates that the decoupling between the timescales of dipole relaxation and of calorimetric relaxation which appears under pressure in the case of dbp is not present in m - toluidine in this pressure range . as for dbp, we wish to convert the temperature and pressure dependences of the relaxation time to the temperature and density dependences .density data are available along four isotherms in the k k range for pressures up to 300 mpa .tait fits and thermal expansivity in this range were extrapolated by using the scheme described above for dbp in order to determine density both as a function of temperature down to , and as a function of pressure on the k isotherm . in figure[ fig : mtrho ] we show the alpha - relaxation time as a function of density .the data taken at atmospheric pressure and the data taken along the 216.4 k isotherm cover two different ranges in density .it is therefore not possible from this data to verify the validity of the scaling in .we therefore assume that the scaling is possible . moreover , due to the paucity of the data we describe by a simple power law , .we find the exponent by exploiting the fact that the scaling variable is uniquely fixed by the value of the relaxation time ; applying this at , namely setting = ) , leads to and gives a ratio of .( symbols ) .the vtf fit of the atmospheric - pressure data of reference is also shown in the range where the fit can be considered as an interpolation of the data ( dashed line ) .the inset shows the alpha - relaxation time of m - toluidine as a function of pressure along the isotherm t=216.4 k. ]the shape of the relaxation function ( or spectrum ) , most specifically its distinctly nonexponential ( or non - debye ) character in the viscous regime , is taken as one of the important features of glassforming materials . 
characterizing and quantifying this effect is however not fully straightforward and has led to diverging interpretations .first of all , the shape of the relaxation function or spectrum may change with the experimental probe considered .even when restricting comparison to a single probe , here , dielectric relaxation , there is no consensus on how to best characterize the shape .we discuss in appendix [ sec : shape ] various procedures that are commonly used and we test their validity on one representative spectrum . for reasons detailed in that appendix ,we focus in the following on the cole - davidson fitting form .the frequency - dependent dielectric loss for a selected set of different pressures and temperatures is shown in figure [ fig : dbpimag ] .the first observation is that cooling and compressing have a similar effect as both slow down the alpha relaxation and separate the alpha relaxation from higher - frequency beta processes .the data displayed are chosen so that different combinations of temperature and pressure give almost the same relaxation time . however , the correspondence is not perfect . in figure[ fig : dbpimag2 ] we have thus slightly shifted the data , by at most 0.2 decade , in order to make the peak positions overlap precisely .this allows us to compare the spectral shapes directly .it can be seen from the figure that the shape of the alpha peak itself is independent of pressure and temperature for a given value of the alpha - relaxation time ( _ i.e. _ , of the frequency of the peak maximum ) , while this is not true for the high - frequency part of the spectra which is strongly influenced by the beta - relaxation peak ( or high - frequency wing ) . when comparing datasets that have the same alpha - relaxation time one finds that the high - frequency intensity is higher for the pressure - temperature combination corresponding to high pressure and high temperature .in figure [ fig : dbpttszoom ] we show all the datasets of figure [ fig : dbpimag ] superimposed and we zoom in on the region of the peak maximum .the overall shape of the alpha relaxation is very similar at all pressures and temperatures .however , looking at the data in more detail , one finds a significantly larger degree of collapse between spectra which have the same relaxation time , whereas a small broadening of the alpha peak is visible as the relaxation time is increased . at long relaxation timesthere is a perfect overlap of the alpha - relaxation peaks which have the same relaxation time . at shorter relaxation time , , the collapse is not as good : the peak gets slightly broader when pressure and temperature are increased along the isochrone . in all cases ,the alpha peak is well described by a cole - davidson ( cd ) shape .the goes from 0.49 to 0.45 on the isochrone with shortest relaxation time and decreases to about 0.44 close to at all pressures . on the other hand , a kolraush - william - watts ( kww ) fit close to gives . a detailed discussion of the fitting procedures and of the relation between cd and kwww descriptions is given in appendix [ sec : shape ] .with a slight shift of the peak frequencies ( less than 0.2 decade ) to make the data taken under quasi isochronic conditions precisely coincide .the symbols are the same as in figure [ fig : dbpimag ] , but the data at t=206 k and p=206 mpa and 219.3 k and p=392 mpa are not shown . 
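for reference , the cole-davidson loss used above to fit the alpha peaks has the closed form eps''(w) = d_eps * sin(beta * arctan(w tau)) / (1 + (w tau)^2)^(beta/2) , so extracting beta_cd from a spectrum is a small least-squares problem . the sketch below fits a synthetic spectrum ; with measured data one would restrict the fit to the alpha-peak region , as discussed in appendix [ sec : shape ] , and all parameter values here are arbitrary .

```python
# least-squares fit of the cole-davidson loss to a (synthetic) dielectric loss
# spectrum; returns the stretching parameter beta_CD and the relaxation time.
import numpy as np
from scipy.optimize import curve_fit

def cd_loss(freq, d_eps, log10_tau, beta):
    wt = 2.0 * np.pi * freq * 10.0**log10_tau
    return d_eps * np.sin(beta * np.arctan(wt)) / (1.0 + wt**2)**(beta / 2.0)

rng = np.random.default_rng(1)
freq = np.logspace(-2, 6, 200)                       # Hz
true = dict(d_eps=6.0, log10_tau=-1.0, beta=0.46)    # tau ~ 0.1 s, illustrative
data = cd_loss(freq, **true) * (1.0 + 0.02 * rng.standard_normal(freq.size))

popt, _ = curve_fit(cd_loss, freq, data, p0=[5.0, 0.0, 0.6],
                    bounds=([0.0, -6.0, 0.1], [100.0, 4.0, 1.0]))
d_eps_fit, log10_tau_fit, beta_fit = popt
print(f"fitted beta_CD = {beta_fit:.3f}, tau_alpha = {10**log10_tau_fit:.2e} s")
```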
] and [ fig : dbpimag2 ] , with the frequency and intensity now scaled by the values at the maximum .we show only a decade in frequency in order to magnify the details .notice a small broadening as the characteristic relaxation time increases : blue dashed - dotted line are three different data sets with ( p=320 mpa , t=253.9 k and p=153 mpa , t=236.3 k and p=0 mpa , t=219.3 k ) .red full lines are three data sets with ( p=251 mpa , t=236.3 k and p=108 mpa , t=219.3 k and p=0 mpa , t=205.6 k ) .green dashed lines are three data sets with ( p=339 mpa , t=236.3 k and p=200 mpa , t=219.3 k and p=85 mpa ,t=205.6 k ) . ]the frequency - dependent dielectric loss of m - toluidine for several pressures along the t=216.4 k isotherm is shown in figure [ fig : mtolimag ] .the data are then superimposed by scaling the intensity and the frequency by the intensity and the frequency of the peak maximum , respectively : this is displayed in figure [ fig : mtoltts ] .when zooming in ( figure [ fig : mtoltts ] ( b ) we still see almost no variation of the peak shape . for the present set of data , pressure - time - superpositionis thus obeyed to a higher degree than in dbp , and the changes are too small to give any pressure dependence in the parameters when fitting the spectra .the cole - davidson fit to the m - toluidine gives ( see also appendix [ sec : shape ] ) .mandanici and coworkers have reported a temperature independent value of for data taken at atmospheric pressure in the temperature range 190 k-215 k , a value that is compatible with ours .\(a ) , now with the intensity and the frequency scaled by the values of the peak maximum . figure ( b ) shows a zooming in of the data in ( a ) to focus on the alpha - relaxation region near the peak maximum.,title="fig : " ] ( b ) , now with the intensity and the frequency scaled by the values of the peak maximum .figure ( b ) shows a zooming in of the data in ( a ) to focus on the alpha - relaxation region near the peak maximum.,title="fig : " ]as discussed in the introduction , the temperature dependence of the alpha - relaxation time ( or of the viscosity ) is usually considered as the most important phenomenon to understand in glass science .isobaric fragility is then often used to characterize the viscous slowing down and its measures , such as the steepness index , are then considered as fundamental parameters .many studies have been aimed at investigating which other properties of the liquid and of the associated glass correlate to fragility .such correlations have been empirically established by comparing rather large sets of systems covering a wide spectrum of fragilities . in the literature ,the finding of a correlation between fragility and some other property is always interpreted as indicating that the property in question is related to the effect of _ temperature _ on the structural relaxation . 
however , when cooling a liquid isobarically two effects contribute to the slowing down of the dynamics : the decrease of temperature and the associated increase of density .hence , the isobaric fragility is a combined measure of the two effects .it is of course the underlying goal that the proposed correlations be used as guidelines and tests in the development of theories and models for the glass transition .it is therefore important to clarify if the correlations result from , and consequently unveil information on , the intrinsic effect of temperature on the relaxation time , the effect of density , or a balanced combination of the two .eq.s [ eq : mpmrho2 ] and [ eq : mpmrho ] show how isobaric fragility can be decomposed into two contributions , that of temperature being given by and the relative effect of density on relaxation time characterized by .isobaric measurements do not give access to nor to independently , but the relevant information can be obtained from data taken under pressure , as we have shown for the data presented here . from this information it becomes possible to revisit the correlations between fragility and other properties .the underlying idea is that a property supposed to correlate to the effect of temperature on the relaxation time should more specifically correlate to the isochoric fragility , , than to the isobaric one , .as also stressed in the introduction , it is instructive to consider the evolution of the empirically established correlations with pressure . as shown in section [ sec : iso ] , is constant , _i.e. _ , is independent of density and pressure , when it is evaluated at a pressure ( or density ) dependent corresponding to a given relaxation time .nonetheless , it follows from eq .[ eq : mpmrho2 ] , that the isobaric fragility will in general change due to the pressure dependence of . increases with pressure , decreases , whereas is often to a good approximation constant ( the dbp case at high pressure discussed in section [ sec : relaxdbp ] is one exception ) . as a result, the pressure dependence of is nontrivial .dbp , which we have studied here , shows no significant pressure dependence of the isobaric fragility , while the general behavior seen from the data compiled by roland _ is that the isobaric fragility decreases or stays constant with pressure , with few exceptions .this seems to indicate that the decrease of usually dominates over the other factors .the properties that are correlated to fragility will _ a priori _ also depend on pressure or density .however if a property is related to the pure effect of temperature on the relaxation time , and therefore correlates to , then it should be independent of density when evaluated along an isochrone ( usually the glass transition line tg ) , as itself does not depend on density .one of the properties that has been suggested to correlate to the fragility is the nonexponential character of the relaxation function , usually expressed in terms of the stretching parameter .the data we have reported here confirm the earlier finding that the spectral shape of the alpha relaxation does not vary when pressure is increased while keeping the relaxation time constant .this leads us to suggest that , if a correlation between fragility and stretching does exist , this latter should better correlate to the isochoric fragility which is also independent of pressure than to the isobaric fragility . 
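before turning to the literature compilation used to test this suggestion , the decomposition itself can be illustrated numerically : given a model tau_alpha(rho , t) and an equation of state rho(t , p) , the isobaric steepness index m_p = d log10(tau)/d(tg/t) at constant p and the isochoric one m_rho at constant density follow by finite differences , and one sees directly that m_rho stays constant on the isochrone while m_p carries the additional density contribution . the model forms and numbers below are arbitrary placeholders , not fits to dbp or m-toluidine .

```python
# numerical decomposition of the isobaric fragility: m_P and m_rho evaluated at
# Tg(P) (tau = 100 s) from a toy scaling model and a toy equation of state.
import numpy as np
from scipy.optimize import brentq

GAMMA, XG = 2.5, 5.5e-3
def log10_tau(rho, T):                           # toy scaling form, tau in seconds
    return -12.0 + 14.0 * (rho**GAMMA / (XG * T))**4

def rho_of(T, P):                                # toy equation of state, P in MPa
    return 1.04 * (1.0 - 7.0e-4 * (T - 200.0)) * (1.0 + P / 1500.0)**0.09

def Tg(P):                                       # temperature where tau = 100 s
    return brentq(lambda T: log10_tau(rho_of(T, P), T) - 2.0, 120.0, 400.0)

def steepness(P, isochoric, h=1e-3):
    T_g = Tg(P)
    rho_g = rho_of(T_g, P)
    def f(u):                                    # u = Tg / T
        T = T_g / u
        rho = rho_g if isochoric else rho_of(T, P)
        return log10_tau(rho, T)
    return (f(1.0 + h) - f(1.0 - h)) / (2.0 * h)

for P in (0.1, 230.0):                           # MPa
    print(f"P = {P:6.1f} MPa:  m_P = {steepness(P, False):5.1f}"
          f"   m_rho = {steepness(P, True):5.1f}")
```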
to test this hypothesis we have collected data from literature reporting isobaric fragility and stretching of the relaxation at .we consider here a description of the shape of the relaxation function in terms of the kww stretching parameter .this choice is made because it is convenient to use a characterization with only one parameter for the shape ( see appendix [ sec : shape ] for a discussion and the connection with the cole - davidson description used above ) and because is the most reported of the liquids where is also available .the compilation of this data is shown in table i and in figures [ fig : mp ] and [ fig : mrho ] where both the isobaric fragility at atmospheric pressure ( fig .[ fig : mp ] ) and isochoric fragility ( fig .[ fig : mrho ] ) are plotted against the stretching parameter .there is a great deal of scatter in both figures .there is however an observable trend , the fragilities appearing to decrease as the stretching increases .the relative effect of density ( over that of temperature ) on the slowing down of the relaxation is characterized by the term . in figure[ fig : ratio ] we show the ratio as a function of .clearly , no correlation is found between this ratio and the stretching .[ table ] .fragilites and kww stretching exponents of molecular liquids and polymers .the indicates that the value is not given in the corresponding reference but is calculated from the data therein .the following abbreviations are used for the names of the liquids , pc = propylene carbonate , bmpc = 1,1-bis(p - methoxyphenyl)cyclohexane , bmmpc = 1,1-di(4-methoxy-5-methylphenyl)cyclohexane , kde = cresolphtalein - dimethyl - ether , dep a = diglycidylether of bisphenol a , and dhiq = decahydroisoquinoline . [ cols="^,^,^,^,^,^,^",options="header " , ] the correlation between stretching and fragility is not strikingly different in figures [ fig : mp ] and [ fig : mrho ] .however , both on theoretical ground ( focusing on the intrinsic effect of temperature ) and on phenomenological one ( isochoric fragility and stretching do not appear to vary as one changes pressure along an isochrone ) , our contention is that one should prefer using the isochoric fragility . in the above wehave considered only fragility and stretching at the conventional glass transition temperature , that is around s. however , we have pointed out in the introduction that both the steepness index characterizing fragility and the stretching parameter depend on the relaxation time .although still debated , there seems to be a qualitative trend toward a decrease of the stretching ( an increase in ) and of the steepness index as the relaxation time decreases and one approaches the `` normal '' liquid regime .it would certainly be very valuable to obtain more data in order to study how the correlation between fragility and stretching evolves as a function of the relaxation time . for numerical values and references .] as a function of stretching parameter .diamonds : molecular liquids , circles : polymers .see table [ table ] for numerical values and references . ] for numerical values and references . ]in this article we have stressed the constraints that one should put on the search for ( meaningful ) empirical correlations between the fragility of a glassformer , which characterizes the temperature dependence of the slowing down , and other dynamic or thermodynamic properties . 
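the kind of comparison shown in figures [ fig : mp ] , [ fig : mrho ] and [ fig : ratio ] can be quantified with a simple correlation analysis of the compiled ( beta_kww , m_p , m_rho ) values . the arrays below are random placeholders standing in for the entries of table i , which we do not reproduce , so the printed numbers only illustrate the procedure and not our result .

```python
# correlation between stretching and the two fragilities for a compilation of
# glassformers; the data arrays are random stand-ins for table I.
import numpy as np

rng = np.random.default_rng(3)
n = 20
beta_kww = rng.uniform(0.35, 0.75, n)                       # stretching at Tg
m_rho = 120 - 110 * beta_kww + 12 * rng.standard_normal(n)  # loose anticorrelation
ratio = rng.uniform(1.1, 2.0, n)                            # m_P / m_rho, uncorrelated
m_p = m_rho * ratio

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print("corr(beta, m_P)       =", round(pearson(beta_kww, m_p), 2))
print("corr(beta, m_rho)     =", round(pearson(beta_kww, m_rho), 2))
print("corr(beta, m_P/m_rho) =", round(pearson(beta_kww, ratio), 2))
```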
among such constraintsis the check that the proposed correlations , often established at tg and at atmospheric pressure , are robust when one changes the reference relaxation time ( in place of the characteristic of tg ) as well as when one varies the pressure under isochronic conditions .important also is the fact that fragility depends on the thermodynamic path considered ( constant pressure versus constant density ) and that , contrary to the isobaric fragility , the isochoric one appears as an intrinsic property of the glassformer , characterizing the pure effect of temperature .we have reported dielectric relaxation spectra under pressure for two molecular liquids , m - toluidine and dbp .we have combined these data with the available thermodynamic data and analyzed the respective effect of density and temperature on the dynamics .our results are consistent with a general picture in which the isochoric fragility is constant on an isochrone . the shape of the relaxation function , as e.g. expressed by the stretching parameter , has also been found constant along isochrones .we have finally discussed the possible correlation between fragility and stretching , suggesting that a meaningful correlation is to be looked for between stretching and isochoric fragility , as both seem to be constant under isochronic conditions and thereby reflect the intrinsic effect of temperature . on the practical side ,the correlation is however no stronger with the isochoric fragility than with the isobaric one .one top of large error bars that may be present and that we have addressed in some detail , this reflects the fact that correlations are rather statistical in nature , emerging from a comparison of a large number of glassformers , rather than one - to - one correspondences between properties of the materials . .we would like to thank a. wrflinger for the pvt data on m - toluidine and albena nielsen and coworkers for making available her dielectric data on dbp prior to publishing .we are grateful to denis lhte and franc cois ladieu for having lent us the sr830 lockin .moreover we acknowledge the work of jol jaffr who built the autoclave for the dielectric measurements under pressure .this work was supported by the cnrs ( france ) and grant no .645 - 03 - 0230 from forskeruddannelsesraadet ( denmark ) . 10 c. a. angell. strong and fragile liquids . in k.l. ngai and g. b. wright , editors , _ relaxations in complex systems _, page 3 , 1984 . g. adam and j. h. gibbs . on temperature dependence of cooperative relaxation properties in glass - forming liquids ., 43:139 , 1965 .r. bhmer , k. l. ngai , c. a. angell , and d. j. plazek .nonexponential relaxations in strong and fragile glass formers . , 99:42014209 , 1993 .a. p. sokolov , e. rssler , a. kisliuk , and d. quitmann .dynamics of strong and fragile glass formers - differences and correlation with low - temperature properties . , 71:20622065 , 1993 . k. l. ngai .dynamic and thermodynamic properties of glass - forming substances . , 275:751 , 2000 .t. scopigno , g. ruocco , f. sette , and g. monaco . is the fragility of a liquid embedded in the properties of its glass ?, 302:849852 , 2003 .v. n. novikov and a. p. sokolov .poisson s ratio and the fragility of glass - forming liquids ., 431:961963 , 2004 .j. c. dyre .the glass transition and elastic models of glass - forming liquids ., 78:953972 , 2006 .r. richert and c. a. angell .dynamics of glass - forming liquids .v. 
on the link between molecular dynamics and configurational entropy ., 108:90169026 , 1998 . c. alba - simionesco , d. kivelson , and g. tarjus .temperature , density , and pressure dependence of relaxation times in supercooled liquids ., 116:50335038 , 2002 .g. tarjus , d. kivelson , s. mossa , and c. alba - simionesco . disentangling density and temperature effects in the viscous slowing down of glassforming liquids . , 120:61356141 , 2004 .r. casalini and c. m. roland .thermodynamical scaling of the glass transition dynamics ., 69:062501 , 2004 . c. m. roland , s. hensel - bielowka , m. paluch , and r. casalini .supercooled dynamics of glass - forming liquids and polymers under hydrostatic pressure ., 68:14051478 , 2005 . c. dreyfus , a. le grand , j. gapinski , w. steffen , and a. patkowski .scaling the alpha - relaxation time of supercooled fragile organic liquids ., 42:309319 , 2004 .a. reiser , g. kasper , and s. hunklinger .pressure - induced isothermal glass transition of small organic molecules ., 72:094204 , 2005 .g. floudas , k. mpoukouvalas , and p. papadopoulos .the role of temperature and density on the glass - transition dynamics of glass formers ., 124:074905 , 2006 .k. l. ngai , r. casalini , s. capaccioli , m. paluch , and c. m. roland .do theories of the glass transition , in which the structural relaxation time does not define the dispersion of the structural relaxation , need revision ?, 109:1735617360 , 2005 .k. niss and c. alba - simionesco .effects of density and temperature on correlations between fragility and glassy properties . , 74:024205 , 2006 .m. l. ferrer , c. lawrence , b. g. demirjian , d. kivelson , c. alba - simionesco , and g. tarjus .supercooled liquids and the glass transition : temperature as the control variable . , 109:80108015 , 1998 . c. alba - simionesco and g. tarjus .temperature versus density effects in glassforming liquids and polymers : a scaling hypothesis and its consequences . , 2006 .k. u. schug , h. e. king , and r. bhmer .fragility under pressure : diamond anvil cell viscometry of ortho - terphenyl and salol ., 109:1472 , 1998 .a. v. granato .the specific heat of simple liquids ., 307:376386 , 2002 .j. c. dyre and n. b. olsen .landscape equivalent of the shoving model ., 69:042501 , 2004 .d. kivelson and g. tarjus .superarrhenius character of supercooled glass - forming liquids ., 235:86 , 1998 .m. sekula , s. pawlus , s. hensel - bielowka , j. ziolo , m. paluch , and c. m. roland .structural and secondary relaxations in supercooled di - n - butyl phthalate and diisobutyl phthalate at elevated pressure ., 108:49975003 , 2004 .h. fujimori , m. oguni , and c. alba - simionesco .pressure effect as referred to the temperature effect on irreversible structural relaxations in liquid dibutylphthalate . ,supplement:235238 , 1997 .k. dixon , l. wu , s. r. nagel , b. d. williams , and j. p. carini .scaling in the relaxation of supercooled liquids ., 65:11081111 , 1990 .p. w. bridgman .volume - temperature - pressure relations for several non - volatile liquids . , 67:127 , 1932 .r. l. cook , c. a. herbst , and h. e. king .high - pressure viscosity of glass - forming liquids measured by the centrifugal force diamond anvil cell viscometer ., 97:23552361 , 1993 .l. terminassian , k. bouzar , and c. alba .thermodynamic properties of liquid toluene ., 92:487493 , 1988 .m. paluch , m. sekula , s. pawlus , s. j. rzoska , j. ziolo , and c. m. roland . 
test of the einstein - debye relation in supercooled dibutylphthalate at pressures up to 1.4 gpa ., 90:175702 , 2003 .a. nielsen , t. cristensen , n. b. olsen , and j. c. dyre .data . unpublished .a. mandanici , m. cutroni , and r. richert .dynamics of glassy and liquid m - toluidine investigated by high - resolution dielectric spectroscopy . , 122:084508 , 2005 . c. alba - simionesco , j. fan , and c. a. angell .thermodynamic aspects of the glass transition phenomenon .ii . molecular liquids with variable interactions . , 110:52625272 , 1999 . c. alba - simionesco , h. fujimori , d. morineau , and b. frick .a study of the glass transition of molecular liquids as a function of pressure and temperature ., 126:229233 , 1997 .a. wrflinger . data . private communication . c. alba - simionesco , a. cailliaux , a. alegria , and g. tarjus . scaling out the density dependence of the a relaxation in glass - forming polymers . , 68:5864 , 2004 .p. k. dixon and s. r. nagel .frequency - dependent specific - heat and thermal - conductivity at the glass - transition in ortho - terphenyl mixtures .61:341344 , 1988 . d. h. huang and g. b. mckenna .new insights into the fragility dilemma in liquids . ,114:56215630 , 2001 . m. paluch , k. l. ngai , and s. hensel - bielowka . pressure and temperature dependences of the relaxation dynamics of cresolphthalein - dimethylether : evidence of contributions from thermodynamics and molecular interactions . , 114:1087210883 , 2001 .a. tlle .neutron scattering studies of the model glass former ortho - terphenyl ., 64:14731532 , 2001 .q. qin and g. b. mckenna .correlation between dynamic fragility and glass transition temperature for different classes of glass forming liquids ., 352:29772985 , 2006 .r. richert , k. duvvuri , and l. t. duong .dynamics of glass - forming liquids .dielectric relaxation of supercooled _ tris_-naphthylbenzene , squalane , and decahydroisoquinoline ., 118:1828 , 2003 .r. casalini and c. m. roland .scaling of the supercooled dynamics and its relation to the pressure dependences of the dynamic crossover and the fragility of glass formers ., 71:014210 , 2005 .r. casalini and c. m. roland . temperature and density effects on the local segmental and global chain dynamics of poly(oxybutylene ) ., 38:17791788 , 2005 .s. hensel - bielowka , j. ziolo , m. paluch , and c. m. roland .the effect of pressure on the structural and secondary relaxations in 1,1 -bis ( p - methoxyphenyl ) cyclohexane .117:23172323 , 2002 . r. casalini , m. paluch , and c. m. roland . influence of molecular structure on the dynamics of supercooled van der waals liquids . , 67:031505 , 2003 . c. m. roland , m. paluch , t. pakula , and r. casalini .volume and temperature as control parameters for the dielectric a relaxation of polymers and molecular glass formers ., 84:15731581 , 2004 .m. paluch , c. m. roland , j. gapinski , and a. patkowski .pressure and temperature dependence of structural relaxation in diglycidylether of bisphenol a. , 118:31773186 , 2003 . c. m. roland and r. casalini .temperature dependence of local segmental motion in polystyrene and its variation with molecular weight ., 119:18381842 , 2003 .r. casalini , k. j. mcgrace , and c. m. roland .isobaric and isochoric properties of decahydroisoquinoline and extremly fragile glass former . , 2006 . in press .a. j. barlow , j. lamb , and a. j. matheson .viscous behaviour of supercooled liquids . , 292:322 , 1966 .p. w. bridgman .viscosities to 30,000 kg/ . , 77:129 , 1949 .k. niss , c. dalle - ferrier , and c. 
alba - simionesco . data .unpublished .w. t. laughlin and d. r. uhlmann .viscous flow in simple organic liquids ., 76:2317 , 1972 .d. l. sidebottom and c. m. sorensen .light - scattering study of the glass - transition in salol ., 40:461466 , 1989 .n. o. birge .specific - heat spectroscopy of glycerol and propylene - glycol near the glass - transition ., 34:16311642 , 1986 .k. l. ngai and r. w. rendell .comparison between frequency - dependent specific - heat and dielectric - relaxation of glycerol and propylene - glycol ., 41:754756 , 1990 .k. l. ngai , r. w. rendell , and d. j. plazek .couplings between the cooperatively rearranging regions of the adam - gibbs theory of relaxations in glass - forming liquids ., 94:30183029 , 1991 .m. cutroni , p. migliardo , p. a. piccolo , and c. alba - simionesco . the dynamic glass - transition of a fragile molecular liquid in the megahertz domain ., 6:52835293 , 1994 . s. hensel - bielowka , m. paluch , and k. l. ngai . emergence of the genuine johari - goldstein secondary relaxation in m - fluoroaniline after suppression of hydrogen - bond - induced clusters by elevating temperature and pressure ., 123:014502 , 2005 .d. j. plazek and k. l. ngai .correlation of polymer segmental chain dynamics with temperature - dependent time - scale shifts ., 24:12221224 , 1991 .a. chauty - cailliaux . .phd thesis , universite paris xi , 2003 .d. h. huang , d. m. colucci , and g. b. mckenna .dynamic fragility in polymers : a comparison in isobaric and isochoric conditions ( vol 116 , pg 3925 , 2002 ) ., 117:73907390 , 2002 .c. p. lindsey and g. d. patterson .detailed comparison of the williams - watts and cole - davidson functions ., 73:33483357 , 1980 . f. alvarez , a. alegria , and j. colmenero .relationship between the time - domain kohlrausch - williams - watts and frequency - domain havriliak - negami relaxation functions ., 44:73067312 , 1991 . d. gomez and a. alegria . on the empirical functions describing the alpha - relaxation of glass - forming systems ., 287:246251 , 2001 .t. blochowicz , c. tschirwitz , s. benkhof , and e. a. rssler .susceptibility functions for slow relaxation processes in supercooled liquids and the search for universal relaxation patterns . , 118:75447555 , 2003 .n. b. olsen , t. christensen , and j. c. dyre .time - temperature superposition in viscous liquids ., 86:1271 , 2001 .t. blochowicz , c. gainaru , p. medick , c. tschirwitz , and e. a. rossler . the dynamic susceptibility in glass forming molecular liquids : the search for universal relaxation patterns ii ., 124:134503 , 2006 .the pressure and temperature dependences of the density are of course a crucial input to the scaling shown in section [ sec : relaxdbp ] . in order to evaluate the effect of the extrapolations we have performed , we focus on the scaling for the high - pressure room - temperature data of paluch and the data at atmospheric pressure , because the extrapolation of the density is smallest in these cases .the discrepancies seen in figure [ fig : dbpscaling ] could be accounted for , if the density at high pressure and room temperature were higher than what we have estimated or if the density at low - temperature were lower than what we have estimated .the high - density dynamical data are taken at room temperature .the experimental density data are also taken at room temperature and they are only extrapolated above 1.2 gpa . 
if the actual density is higher than what we have estimated , then it means that the compressibility is larger than what we taken .however , the compressibility at 1.2 gpa is already in the high - pressure domain where it is very low and almost pressure independent ( it is slightly decreasing with increasing pressure ) . the most conservative estimate we could make is to keep the compressibility constant for pressures above the last experimental point at 1.2 gpa .such an approach changes the ratio by less than one percent , and , therefore , can not account for the discrepancy seen in figure [ fig : dbpscaling ] .an alternative explanation would be that the actual low temperature density is higher than we have estimated , meaning that we have overestimated the expansion coefficient .this latter has been calculated at two different high temperatures based on the data in reference .this leads to a slight decrease in expansion coefficient with decreasing temperature .if the expansion coefficient is to be smaller than the estimate from this temperature dependence , then it would mean that the temperature dependence of the expansion coefficient should increase as temperature decreases .this is the opposite of what is seen in real liquids , where at atmospheric pressure tends to a constant at low temperatures .it is actually most common to assume that the of molecular liquids is constant below room temperature ( e.g. ref .this type of assumption would enhance the discrepancy in figure [ fig : dbpscaling ] .we therefore conclude that the absence of collapse of the high - pressure data in fig .[ fig : dbpscaling ] using a simple power law form for can not be explained by errors made in the estimating the pvt data .in the following , we shortly review the procedures commonly used to characterize the shape of the relaxation spectrum of viscous liquids and test different descriptions on one of our spectra .we more specifically look at schemes for converting one type of description to another . this analysis is important for the present work because we compile literature data in section [ sec : betam ] in order to look at possible general connections between relaxation shape and temperature dependence of the relaxation time .the low - frequency behavior of this function is always a power law with exponent 1 .the high frequency behavior is a power law with exponent . is the only parameter describing the shape of the relaxation function .hence it controls both the exponent of the high frequency power law and the width of the relaxation function .the havriliak - negami ( hn ) function , ^\gamma}\;\;,\end{aligned}\ ] ] gives a power law with exponent in the high - frequency limit and a power law of exponent in the low frequency - limit of its imaginary part .the hn function reduces to cole - davidson ( cd ) one when .( in the case of a cd function we follow the convention and refer to the above as . )the cd spectrum has the same general characteristics as the kww one : a high - frequency power law with exponent given by and a low - frequency power law with exponent one .however , the shape of the two functions is not the same . 
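this difference in shape is easy to exhibit numerically : for the same high-frequency exponent , the normalised cd and kww loss curves have different widths . the sketch below obtains the kww loss by a one-sided cosine transform of the stretched exponential and compares the widths at half maximum of the two curves ; the value of beta and all grid choices are for illustration only .

```python
# numerical comparison of the CD and KWW line shapes for the same
# high-frequency exponent beta: widths at half maximum in decades.
import numpy as np

beta, tau = 0.5, 1.0
w = np.logspace(-2, 2, 81) / tau                       # angular frequencies

# Cole-Davidson loss (closed form)
cd = np.sin(beta * np.arctan(w * tau)) / (1.0 + (w * tau)**2)**(beta / 2.0)

# KWW loss: eps''(w) ~ w * int_0^inf exp(-(t/tau)**beta) cos(w t) dt
t = np.linspace(0.0, 60.0 * tau, 200001)
phi = np.exp(-(t / tau)**beta)
kww = np.array([wi * np.trapz(phi * np.cos(wi * t), t) for wi in w])

def width_decades(w, loss):
    i = int(np.argmax(loss))
    half = loss[i] / 2.0
    lo = np.interp(half, loss[:i + 1], np.log10(w[:i + 1]))           # rising flank
    hi = np.interp(half, loss[i:][::-1], np.log10(w[i:])[::-1])       # falling flank
    return hi - lo

print("FWHM (decades):  CD %.2f   KWW %.2f" %
      (width_decades(w, cd), width_decades(w, kww)))
```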
the cd function is narrower for a given high frequency exponent ( given ) than the kww function .the best overall correspondence between the cd - function and the kww function has been determined by lindsey and patterson .no good correspondence exists in general between the hn and the kww functions .first of all because the former involves two adjustable shape parameters and the latter only one ( plus in both cases a parameter for the intensity and one for the time scale ) .the kww function always has a slope of one at low frequencies while the hn function has a generally nontrivial . numerically found that the two functions can nonetheless be put in correspondence by fixing the relation between the two hn parameters and choosing .this restricted version of the hn function is sometimes referred to as the aac function .the shape is described by one parameter .however , it is clear that this function can not correspond to the kww function in the frequency range where the loss can be described by power laws , as it was also noted by gomez and alegria .the aac function inherits the behavior of the hn function ; as a result it has a nontrivial exponent at low frequencies and an exponent at high frequencies , while the associated kww function has exponents one and at low and high frequencies , respectively .another approach is to describe the dielectric spectrum by a distribution of debye relaxations and to fit the shape of the distribution rather than the spectral shape directly .the following form has been suggested for the distribution function , where is a normalization factor .the function above is known as the extended generalized gamma distribution , gge . the last term ( and the parameters and ) describes a high - frequency wing , corresponding to a change from one power law behavior ( - ) to another ( - ) .this term can therefore be omitted if no wing is observed in the spectrum .this results in a simpler distribution ; the generalized gamma distribution ( gg ) whose shape is described by two parameters : determines the width and gives the exponent of the high - frequency power law .the low frequency is always a power law with exponent one . finally , it is possible to describe the spectra phenomenologically in terms of the full width at half maximum , usually normalized to the full width at half maximum of a debye peak ( , with decade ) , and by the exponent of the power law describing the high - frequency side .the power law exponent is not always well defined , as there can be a high - frequency wing or a secondary process appearing at high - frequencies .olsen _ et al ._ therefore suggest to characterize the alpha peak by the minimal slope found in a double logarithmic plot of the dielectric loss as a function of frequency .note that this phenomenological description requires two parameters to describe the shape , while the commonly used cd and the kww functions use only one parameter to describe the spectrum . 
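the two phenomenological measures just mentioned are straightforward to extract numerically from a loss spectrum , as the following sketch shows for a cole-davidson curve used as a stand-in for measured data ; the debye loss peak is about 1.14 decades wide at half maximum , which sets the normalisation of the width .

```python
# phenomenological shape measures: width at half maximum normalised to the
# debye width (~1.14 decades) and the minimal slope of log10(eps'') vs log10(f).
import numpy as np

def cd_loss(freq, d_eps, tau, beta):
    wt = 2.0 * np.pi * freq * tau
    return d_eps * np.sin(beta * np.arctan(wt)) / (1.0 + wt**2)**(beta / 2.0)

freq = np.logspace(-4, 6, 4001)
loss = cd_loss(freq, 1.0, 1.0, 0.46)             # stand-in spectrum, beta_CD = 0.46

logf, logl = np.log10(freq), np.log10(loss)
i = int(np.argmax(loss))
half = loss[i] / 2.0

left = np.interp(half, loss[:i + 1], logf[:i + 1])            # rising flank
right = np.interp(half, loss[i:][::-1], logf[i:][::-1])       # falling flank
w_norm = (right - left) / 1.14                                # debye width ~1.14 dec

min_slope = np.gradient(logl, logf)[i:].min()                 # high-frequency side

print(f"normalised width w = {w_norm:.2f}, minimal slope = {min_slope:.2f}")
```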
in figure[ fig : mtolfit ] we show one of the dielectric spectra of m - toluidine along with fits to the functions described above .the minimal slope is and .the best fits to the different functions are displayed in figure [ fig : mtolfit ] .the cd - fit gives , which with the lindsey - patterson scheme corresponds to .the direct fit with the fourier transform of the kww gives .the best aac fit gives leading to and .this shows that both the patterson and the aac approximations reasonably well reproduce the value found from using kww directly .another point worth noticing is that the value does not correspond to the actual high - frequency slope .this is because the overall agreement between the fit and the data is much more governed by the width of the relaxation function than by its high - frequency slope , as it is also clearly seen for the kww fit in figure [ fig : mtolfit ] .note that the aac approximation for the relation between the hn parameters and only holds when the hn parameters are fixed according to .the original hn function has two adjustable parameters to describe the shape .the best hn fit gives , and .the gamma distribution which also has two free parameters gives and .finally we have fitted with the gge using the constraint ( see reference ) , meaning that the function has 3 free parameters to discribe the shape , the values being , and .it is not surprising that the gge with 3 free parameters gives by far the best fit .however it is also striking that the cd with only one parameter describing the shape gives a good fit over the whole peak , whereas this is not true for the kww nor for the aac . from the above we conclude the cd - function gives a good description of the shape of the relaxation using only one parameter to describe the shape .we therefore use this function to fit our data . the kww exponent , , does not give a proper measure of the high frequency slope , but that it does give a reasonable one - parameter measure of the overall shape of the dispersion .the kww function is moreover the function most commonly used in literature , which is the main reason for using it in the discussion ( section [ sec : disc ] ) .
|
we study the pressure and temperature dependences of the dielectric relaxation of two molecular glassforming liquids , dibutyl phthalate and m - toluidine . we focus on two characteristics of the slowing down of relaxation , the fragility associated with the temperature dependence and the stretching characterizing the relaxation function . we combine our data with data from the literature to revisit the proposed correlation between these two quantities . we do this in light of constraints that we suggest imposing on the search for empirical correlations among properties of glassformers . in particular , we argue that a meaningful correlation is to be looked for between stretching and _ isochoric _ fragility , as both seem to be constant under isochronic conditions and thereby reflect the intrinsic effect of temperature .
|
interference channel problem describes a setup where multiple pairs of transmitters and receivers share a communication medium . each receiver is only interested in decoding the message from its corresponding transmitter .however , since the channel is shared , signals from other senders interfere with the desired signal at each decoder .the presence of interfering signals adds new dimensions to this problem in terms of strategies that can be used as compared to point - to - point ( ptp ) communication .for example , the encoders can _ cooperate _ with each other by choosing their channel inputs in a way that would facilitate their joint communication .it turns out that , often , this cooperation requires an encoder to employ a strategy which may be sub - optimal from its own ptp communications perspective . in this paper, we investigate this tradeoff and develop a new class of codes which allow for more efficient cooperation between the transmitters . characterizing the capacity region for the general ichas been a challenge for decades . even in the simplest case of the two user ic , the capacity regionis only known in special cases .the best known achievable region for the ic was due to han and kobayashi .however , recently it was shown that the han - kobayashi ( hk ) rate region is suboptimal .particularly , when there are more than two transmitter - receiver pairs , the natural generalization of the hk strategy can be improved upon by inducing structure in the codebooks used in the scheme .structured codes such as linear codes and group codes enable the encoders to align their signals more efficiently .this in turn reduces interference at the decoders .such codebook structures have also proven to give gains in other multi - terminal communication problems - .the idea of interference alignment was proposed for managing interference when there are three or more users .initially , the technique was proposed by maddah - ali et . for the mimo x channel , and for the multi - user ic by jafar and cadambe .the interference alignment strategy was developed for cases of additive interference and uniform channel inputs over finite fields .this was extended to arbitrary interference settings and input distributions in .however , it turns out that alignment is not always beneficial to the users in terms of achievable rates .consider the example in figure [ fig:1 ] .intuitively , it would be beneficial to align the input from users 1 and 2 to reduce interference at decoder 3 .however , if users 1 and 2 align their signals , it becomes harder for decoder 2 to distinguish between the two inputs .one might suggest that the problem could be alleviated if users 1 and 2 designed their codebooks in a way that they would `` look '' aligned at decoder 3 based on , but at the same time they would seem different at decoder 2 based on . in this paperwe show that linear codes lack the necessary flexibility for such a strategy . based on this intuition, we propose a new class of structured codes .using these codes we derive an achievable rate region which improves upon the best known achievable region for the three user ic given in .the rest of the paper is organized as follows : section [ sec : not ] gives the notation used in the paper as well as the problem statement . in section [ sec : trade ] , we consider two different examples of three user ic . 
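the entropy reduction behind interference alignment mentioned above can be made concrete with a toy computation over the ternary field : when two interferers use the same linear code , the sum of their codewords again lies in that code , so the interference seen by a third receiver has the support ( and entropy ) of a single codebook rather than of two . the snippet below checks this numerically ; the generator matrices and parameters are arbitrary illustrations and are not the codes constructed later in the paper .

```python
# toy illustration of alignment over F_3: the sumset of a linear code with
# itself is the code, while the sumset of two independent codes is (typically)
# much larger.  generator matrices in systematic form, drawn at random.
import numpy as np
from itertools import product

q, k, n = 3, 2, 6
rng = np.random.default_rng(0)
G1 = np.concatenate([np.eye(k, dtype=int), rng.integers(0, q, size=(k, n - k))], axis=1)
G2 = np.concatenate([np.eye(k, dtype=int), rng.integers(0, q, size=(k, n - k))], axis=1)

def codebook(G):
    return {tuple((np.array(m) @ G) % q) for m in product(range(q), repeat=k)}

def sum_support(A, B):                      # distinct values of c1 + c2 (mod q)
    return {tuple((np.array(a) + np.array(b)) % q) for a in A for b in B}

aligned = sum_support(codebook(G1), codebook(G1))      # both users share G1
unaligned = sum_support(codebook(G1), codebook(G2))    # independent codes

print("one codebook          :", len(codebook(G1)))                   # q**k = 9
print("aligned interference  :", len(aligned), "(stays inside the shared code)")
print("unaligned interference:", len(unaligned), "(grows toward q**(2k) =", q**(2 * k), ")")
```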
in the first example - where interference alignment is strictly beneficial- we prove that not only structured codes are useful for alignment but that any arbitrary coding scheme which achieves optimality must possess certain linearity properties . in the second example , we show the existence of the tradeoff discussed above and prove that linear codes are suboptimal for that example .section [ sec : code ] , gives the new codebook constructions and proves that these new codes outperform the linear coding scheme in . in section[ sec : rr ] , we provide a new general achievable rate region for this problem .section [ sec : conclude ] , concludes the paper .in this section , we give the notation used in the paper and provide the problem statement . throughout the paper ,we denote random variables by capital letters such as , their realizations by small letters , and their corresponding alphabets ( finite ) by sans - serif typeface , , respectively .small letters such as are used to represent numbers .the field of size is denoted by .we represent the field addition by and the addition on real numbers by . for , we define the set of numbers \triangleq \{1,2,\dotsb , m\} ] i.i.d random variables with distribution defined on the field . denotes a random variable which has the same distribution as }u_i ] , without loss of generality , these maps are assumed to be injective ( 3 ) and three decoding functions ] .the rate of user is defined as .a rate - triple is said to be achievable if for every , there exists a code such that ] and ] be matrices , and dithers all defined on . also , let and be a pair of random vectors on .the pair of qlc s characterized by the matrices ] for large with high probability .the proof follows from the injectiveness of the s and the usual typicality arguments and is omitted .having defined qlc s , we return to our interference channel setup in example 2 .we claim that nqlc s can achieve a sum - rate which is higher than .[ lem : newrate ] there exists achievable rate - triples such that .refer to appendix [ app : newrate ] .so far we have proved that nqlc s outperform nlc s in this specific example .it is straightforward to show that nqlc s are a generalization of nlc s . to see this ,consider an arbitrary pair of nlc s with the parameters as in definition [ def : pnlc ] .these two codes are a pair of nqlc s with parameters , and uniform , and constants and and .so , any rate region achievable by nlc s is also achievable using nqlc s .in this section , we provide a general achievable rate region for the three user ic .the scheme is similar to the one presented in ( theorem 2 ) .the main difference is that here instead of nlc s we use nqlc s .the random variables involved in the coding scheme are depicted in figure [ fig : rvs ] .note that in contrast with the scheme in , decoder 2 reconstructs a linear combination of and . by setting , we recover the random variables in .the next theorem provides the achievable rate region . for a given three user ic problem with -ary inputs and outputs ,define the set as the set of rate triples such that there exist 1 ) a joint probability distribution , 2 ) a vector of positive reals , and 3 ) a vector of parameters and pair of vectors of random variables } , i\in\{1,2\} ] .a rate triple is achievable if it belongs to .we provide an outline of the proof .the coding scheme is similar to the one in . 
except that 1 ) decoder 2also decodes a linear combination , 2 ) the underlying codes for and are qnlc s instead of nested coset codes , and 2 ) there is an outer code on which allows decoder to decode from . as a resultthe rate region is similar to the one in except for a few changes .bounds ( [ eq : packs])-([eq : packf ] ) ensure the existence of jointly typical codewords at each encoder .these bounds are the same with the ones in . bounds ( [ eq : cov1s])-([eq : cov1f ] ) ensure errorless decoding at decoder , they also remain the same. inequalities ( [ eq : cov2s])-([eq : cov2f ] ) correspond to the error events at decoder 2 , these bounds are altered to ensure reconstruction of , also the rate is changed and the linear coding rate is multiplied by , which is due to the outer code .lastly , ( [ eq : cov3s])-([eq : cov3f ] ) are for the error events at decoder 3 , which is also similar to the ones in . for ease of notation ,we have dropped the time - sharing random variable .the scheme can be enhanced by adding the variable in the standard way . by taking and and choosing the nqlc parameters so that the codes become a pair of nlc s we recover the bound in as expected . following the generalizations in ,this coding scheme can be enhanced by adding additional layers containing the public message codebooks corresponding to the hk strategy .the problem of three user ic was considered .we showed that there is an inherent tradeoff in the general ic .the users can choose to communicate their messages by using optimal ptp strategies or cooperate with other users to facilitate their communication .it was shown that the previously used coding structures are unable to optimize this tradeoff .new coding structures were proposed .it was shown through an example that these new structures give strict improvements . using these new codebooks , an achievable region for the three user icwas derived which improves upon the previous known inner bounds for this problem .assume the family achieves the rate - triple .let be uniform random variables defined on sets .in the first step , we argue that the size of the set is close to .more precisely , we prove the following claim : intuitively , if the size of is much smaller than , that means there exists a large number of sets of vectors , with different s for which the sum is equal .this causes a large error probability in decoder 3 since the decoder is unable to distinguish between these sets of vectors .more precisely , let be a type on vectors in , and let , define as follows , where is the set of all vectors with type .that is is the set of s for which the decoder has non - zero error probability for decoding or another codeword . from set theory, we have the following : on the other hand , as , the error probability at decoder goes to 0 .this means that goes to 0 .consequently , there exists a family of types type such that goes to 0 .there exists a sequence which approaches 0 at the limit such that : observe that and have the same exponential rate .note that and since decoder 1 can decode with probability of error approaching 0 , we can use the same argument to show the following : we provide a coding scheme based on nqlc s which achieves the rate vector . 
consider two ternary random variables and such that .we will show the achievability of the following rate - triple : note that in this case .choose random variable such that .+ * codebook generation : * construct a family of pairs nqlc s with length and parameters , , , and by choosing the dither and generator matrix randomly and uniformly on . for a fixed ,let and be the corresponding pair of nqlc s .let for .choose of the codewords in randomly and uniformly , and index these sequences using the indices ] . +* encoding : * upon receiving message index encoder sends the sequence in which is indexed for .let the codewords sent by encoder be denoted by .encoder 3 sends the codeword in indexed by .let the codeword sent by the third decoder be denoted by .+ * decoding : * decoder 1 receives . using typicality decoding, the decoder can decode the message as long as .decoder 2 receives .it can decode jointly as long as 1 ) , and 2 ) .the first condition ensures that can be recovered with probability of error going to 0 as . after recovering , the decoder needs to jointly decode ( for reasons explained in lemma [ lem : lem2 ] ) .this is a noiseless additive mac problem and condition 2 ensures errorless decoding .note that in condition 2 , the coefficient is present since is of length . also , the term is the capacity of the mac channel .decoder 3 receives .the decoder can recover as long as .then , the decoder subtracts to get .it can decode as long as .it is straightforward to check the rate given at the beginning satisfy all of these bounds. 1 h. sato , `` the capacity of the gaussian interference channel under strong interference , '' ieee trans .theory , vol .27 , no . 6 ,786 - 788 , nov .1981 . m. h. m. costa , and a. el gamal , `` the capacity region of the discrete memoryless interference channel with strong interference , '' ieee trans .theory , vol .710 - 711 , 1987 .t. s. han , and k. kobayashi , `` a new achievable rate region for the interference channel , '' ieee trans .theory , vol .49 - 60 , 1981 . c. nair , k. xia , and m. yazdanpanah , `` sub - optimality of the han kobayashi achievable region for interference channels '' , arxiv preprint arxiv:1502.02589 , 2015 m. maddah - ali , a. motahari , and a. khandani , `` communication over mimo x channels : interference alignment , decomposition , and performance analysis , '' information theory , ieee transactions on , vol .3457 - 3470 , aug 2008 . v. cadambe , and s. jafar , `` interference alignment and degrees of freedom of the k -user interference channel , '' ieee trans . on info .3425 - 3441 , 2008. j. korner , k. marton , how to encode the modulo - two sum of binary sources , ieee transactions on information theory , vol .219 - 221 , 1979 .a. padakandla , s.s .pradhan , achievable rate region for three user discrete broadcast channel based on coset codes , in ieee international symp . on information theory .ieee , pp.1277,1281 , 7 - 12 july 2013 a. padakandla , a.g .sahebi , s.s .pradhan , a new achievable rate region for the 3-user discrete memoryless interference channel , proceedings of ieee international symposium on information theory , july , 2012 f. shirani , s.s .pradhan , `` an achievable rate - distortion region for multiple descriptions source coding based on coset codes , '' arxiv:1602.01911 , 2016
|
we consider the problem of coding over the multi - user interference channel ( ic ) . it is well - known that aligning the interfering signals results in improved achievable rates in certain setups involving more than two users . we argue that in the general interference problem , senders face a tradeoff between communicating their messages to their corresponding decoders and cooperating with other users by aligning their signals . traditionally , interference alignment is carried out using structured codes such as linear codes and group codes . we show through an example that the usual structured coding schemes used for interference neutralization lack the necessary flexibility to optimize this tradeoff . based on this intuition , we propose a new class of codes for this problem . we use the example to show that the application of these codes gives strict improvements in terms of achievable rates . finally , we derive a new achievable region for the three user ic which strictly improves upon the previously known inner bounds for this problem .
|
the subject of this article is quantum objects , and they have been introduced by the author in the course of his attempts to construct a local causal model of quantum theory ( qt ) that includes quantum field theory ( qft ) .quantum physicists consider bell s famous inequality ( see ) and its violation in experiments to be a strong indication that local causal models of qt are not possible . based on the formal definition of a local causal model , the author came to the conclusion that local causal models of qt / qft are indeed impossible if the notion of a locality is understood as a space - point locality ( see ) .however , if certain qt deficiencies are removed , causal models of qt / qft where the non - localities are confined to elementary units called quantum objects , appear to be feasible . in the present article ,quantum objects are discussed in more detail .quantum objects were introduced in as part of a causal model of qt / qft .therefore , the article starts with a description of causal models in general and of the proposed causal model of qt / qft , which surrounds quantum objects .section 2 mainly summarizes material that is described in more detail in .as described in ( and summarized in section 2 ) , a formal local causal model of an area of physics has three types of constituents : ( 1 ) the physics engine , ( 2 ) the laws of physics , and ( 3 ) the system state referred to by the laws of physics .the composition of these three components determines whether a local causal model is feasible , or more specifically , which type of locality and causality can be achieved . in , it has already been concluded that a local causal model of qt / qft is not possible if a locality is understood as a space - point locality .thus , a local causal model of qt / qft is not possible if the system state consists solely of space points and their associated attributes .in addition , the ( causal ) laws of qt / qft need to refer to aggregate objects and parameters , such as particles , waves , and fields ; moreover , the system state needs to contain these objects and parameters .quantum objects have been introduced as a generalization of the types of objects that are known in standard qt .the assumption that quantum objects are autonomous and depend as little as possible on the external system state parameters enables the confinement of nonlocal causal state progressions to quantum objects internal processes . the tolerance of non - localities within quantum objects results in deviations from relativity theory within quantum objects .this deviance has implications for the objects internal space - time concept that are addressed in section 6 .although the focus of this article is on quantum objects internal processes , it is also necessary to consider the global interrelationships among quantum objects .these interrelationships are primarily determined by the interactions between quantum objects and the interactions between quantum objects and fields .this subject is addressed in section 4 .the causal model of quantum objects may be viewed as another representation of qt / qft with a focus on specific aspects of qt .ideally , another view or model would not provide any new knowledge if the underlying theory ( i.e. , qt ) were properly and completely defined in all areas .however , because of certain qt / qft deficiencies ( see section 2.2 ) , the proposed causal model of qt / qft necessitates specifications that can not be derived from standard qt / qft . 
in part , the proposed causal model has been developed with the goal of removing some of the known qt deficiencies .similarly , other qt / qft limitations have been detected in the course of the model s development , and solutions for the removal of these limitations have been included .in , a causal model of an area of physics is defined as consisting of three types of constituents : 1 .the `` physics engine '' that defines the overall ( i.e. , subject - independent ) interpretation of the ( subject - dependent ) laws of physics ; 2 . the laws of physics that describe for the subject area of physics how the state of the system evolves under various circumstances ; and 3 . the system state , i.e. , the objects and elements that are referenced by the laws of physics .the physics engine acts upon the state of the physical system and continuously determines new states in uniform time steps . for the formal definition of a causal model of a physical theory , the continuous repeated invocation of the physics engine to realize the progression of the state of the systemis assumed .+ + physics engine + } + } + + physics engine + } + + the refinement of the statement provides the laws of physics for the given subject area of physics ; it consists of a list of laws that define how an `` in '' state s evolves into an `` out '' state s. ... + the system state specifies in more detail the objects that are referenced by the laws of physics , their internal structure and their interrelationships . in the simplest case ,the system state consists of a set of space points with their associated contents .+ + a causal model of a theory of physics is called a spatial causal model if ( 1 ) the system state contains a component that represents a space , and ( 2 ) all of the other components of the system state can be mapped to the space . there exist numerous textbooks on physics ( mostly in the context of relativity theory ) and on mathematics that define the essential features of a `` space '' . for the purpose of this article and the subject locality ,it is sufficient to require that the space ( which is assumed to be part of the model ) supports the notions of position , coordinates , distance , and neighborhood .the definition of a local causal model presupposes a spatially causal model .a causal model is understood to be a local model if changes in the state of the system depend entirely on the local state and only affect the local state .the local state s changes can propagate to neighboring locations .the propagation of the state changes can reach distant locations ; however , these changes must always be accomplished through a series of state changes to neighboring locations . based on the formal definition of `` causal model '' , a formal definition of `` locality '' can be given .we are given a physical theory and a related spatially causal model with position coordinates x and position neighborhood dx ( or in the case of discrete space points ) .a causal model is called a `` local causal model '' if each of the laws applies to no more than a single position x and/or to the neighborhood of this position . the position reference can be explicit or implicit with respect to a state component that has a well - defined position in space . whereas references to the complete space of a spatially extended object are considered as violations of the notion of a locality , references to specific properties of spatially extended objects do not violate this notion . 
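as a concrete illustration , the schematic physics engine sketched above can be written out as a small runnable loop . the class and function names below are ours , and the toy law only stands in for the subject-dependent laws of physics ; the article fixes only the overall structure ( engine , laws , state ) , not an implementation .

```python
# runnable sketch of the generic physics engine: it repeatedly applies a list
# of (subject-dependent) causal laws to the system state in uniform time steps.
from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]
Law = Callable[[State], State]           # a law maps an "in" state s to an "out" state s'

@dataclass
class PhysicsEngine:
    laws: List[Law]
    state: State
    time: float = 0.0
    dt: float = 1.0                      # uniform time step implied by the model

    def step(self) -> None:
        for law in self.laws:
            self.state = law(self.state)
        self.time += self.dt

    def run(self, n_steps: int) -> State:
        for _ in range(n_steps):
            self.step()
        return self.state

# toy law: a single state attribute relaxing towards zero (purely illustrative)
decay: Law = lambda s: {**s, "amplitude": 0.9 * s["amplitude"]}
engine = PhysicsEngine(laws=[decay], state={"amplitude": 1.0})
print(engine.run(10))                    # amplitude = 0.9**10 ~ 0.349
```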
in this article , the above definition of a locality , which strictly refers to space points ( i.e. , x and )is called a `` space - point locality '' . to enable the construction of causal models where a space - point locality is not achievable but the environment should not be classified as completely non - local , a weaker form of locality , which we call `` object - locality '' ,is defined .a causal model is called an `` object - local '' causal model if it is a space - point - local causal model , with the exception of object - internal processes where the causal laws may refer to complete objects or sub - objects . for a given causal model ,`` object '' has to be replaced with a specific type of object belonging to the system state .for example , the causal model of qt / qft ( see below ) is called a `` quantum - object - local causal model '' because the non - space- point localities are confined to quantum objects .the present article is a result of the author s attempt to analyze in more detail the problems that prevent the construction of local causal models of qt / qft ; furthermore , the author intends to overcome these problems .it is reasonable to first consider the feasibility of a causal model of qt / qft , and in the next step ( if the first step was successful ) , we analyze the feasibility of a local ( causal ) model . the primary problem with a causal model of qt / qftis not the peculiarity of qt / qft , but the state of the theory in certain areas .four main `` problem areas '' have been identified , as shown in table 1 .the problem areas ( the more detailed problems and proposed solutions ) have been described in several papers by the author ( see and ) .the present section and table 1 give a summary of the findings .the four problem areas with respect to causal models also represent the major problem areas with respect to a local ( causal ) model .this set of issues is summarized in column `` locality problems '' in table 1 .| c | c | c | c | problem areas & experiments & problems impeding & locality + & & a causal model & problems + measurement , & all kinds of & lack of & instant reduction of + interpretation of qt & experiments & agreed - upon theory & complete wave + interference collapse & double - slit & ill - defined interference & instant collapse of + & & collapse rule & interference + qft - interaction & scatterings & lack of & integrals ranging + & & `` equation of motion '' & over + & & & complete space + entanglement & scatterings & lack of & none if + & & correlation model & common paths + & epr - perfect & ( `` hidden variables '' ) & for both + & correlation & & particles + entanglement & epr - imperfect & lack of & `` action at + & correlation & ' ' communication model `` & a distance '' + some explanations for the abbreviated formulations contained in table 1 are appropriate . more details on the subject can be found in and .* measurement , interpretation of qt + the measurement problem of qt must still be considered unsolved .measurements are mandatory ingredients of all types of experiments .a number of alternative `` interpretations '' of qt have been proposed as solutions to the measurement problem . 
apart from the fact that there is no generally agreed interpretation of qt ( and no solution to the measurement problem ) , most of the proposed interpretations are not suitable bases for a causal model . even if we have a causal model of the qt measurement ( such as the causal model proposed by the author in and in section 2.3 ) , we still have the `` locality problem '' . the fact that qt measurements instantly affect the complete spatially distributed wave prevents the construction of a space - point - local model . * interference collapse + the standard explanation of the double - slit experiment found in most textbooks of qt refers to conditional statements such as `` if it is possible to determine ... '' rather than to the physical objects and state parameters ( see ) . this type of explanation makes it impossible to construct a causal model . the causal model proposed by the author in and in section 4.4 associates the interference collapse ( as well as the wave function collapse ) with the occurrence of qft interactions . * qft - interactions + these interactions are particle interactions such as scatterings that require qft ( e.g. , a scattering matrix or feynman diagrams ) in their description and result calculation . although qft provides an extensive and powerful framework for the treatment of this type of interaction , the present theory does not support the translation of this framework into a causal model . * entanglement + entanglement was the original area where the impossibility of a local causal model of qt was inferred . while it appears feasible to use `` hidden variables '' to construct a causal model that at least supports perfect correlations , the construction of a local causal model is not feasible . the causal model of qt / qft is formulated partly in terms of discrete system state parameters ( space , time , and particle paths ) that support the mapping of the causal model to a cellular automaton ; for example , this type of mapping was described in . in addition to quantum objects , the system state of the causal model of qt / qft includes the space and fields . although time is not considered to be a part of the system state , it is implied by the physics engine . the physics engine ( as described in section 2 ) continuously interprets the laws of physics to advance the system state . as a necessary extension to the general causal model given in section 2 , the physics engine for the causal model of qt / qft implements a more sophisticated process structure . in section 2 , the physics engine is described as a single global engine that advances the complete system state simultaneously in uniform time steps ( see fig . 1 ) . to confine the non - localities to quantum objects and to support relativistic proper time intervals , multiple object - related physics engines are assumed ( see fig . 2 ) . in addition to the quantum objects 1-n , a further object `` space '' with an associated physics engine 0 is shown in fig . 2 . see section 6.2 for additional details . the corresponding pseudocode listing , not reproduced here , loops over all quantum objects ( invoking the qt / qft physics engine for each of them ) and over all other objects . the physics engine invokes the function applylawsofqt / qft ( ) periodically with proper time intervals associated with the respective quantum object .
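a schematic rendering of this multi - engine structure , with one engine per quantum object and an engine 0 for the object `` space '' , is sketched below ; the scheduling scheme , the method space.advance and the attribute name proper_time_interval are assumptions made only for the illustration , and applylawsofqt / qft is refined in the next paragraph .

....
# sketch of object-related physics engines with individual proper time intervals
def apply_laws_of_qt_qft(qobj):
    pass  # placeholder; refined at the next level of specification

def run_qt_qft_engines(space, quantum_objects, global_time_step, n_steps):
    t = 0.0
    next_update = {id(q): 0.0 for q in quantum_objects}
    for _ in range(n_steps):
        # physics engine 0 advances the object "space" with a global proper time interval
        space.advance(global_time_step)
        # each quantum object is advanced by its own engine at its own clock rate
        for q in quantum_objects:
            if t >= next_update[id(q)]:
                apply_laws_of_qt_qft(q)
                next_update[id(q)] += q.proper_time_interval
        t += global_time_step
....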
at the highest level of specification , applylawsofqt / qft ( ) has to determine whether the interactions between quantum objects have to be processed first . the `` normal '' progression of quantum objects is deferred until all of the possible interactions have been processed . applylawsofqt / qft ( qobject ) : = { interaction - processing ; if ( not - destructed ( qobject ) ) proceed - quantum - object ( qobject ) ; } for a complete causal model of qt / qft , the functions interaction - processing and proceed - quantum - object would have to be specified in more detail . in this article , only interaction - processing is discussed in more detail ( see section 5.2 ) . the quantum object is the most important entity for the description of the causal model of qt / qft . a particle may occur as a separate quantum object or be part of a quantum object . the following three properties distinguish quantum objects from other objects that typically occur in physics : 1 . quantum objects are composed of multiple alternative paths with associated probability amplitudes . with the interactions ( including the measurements ) , the multiple paths may be reduced to a single path . 2 . quantum objects may consist of multiple spatially separated particles . 3 . quantum objects have global attributes that apply to all of the paths and particles of the quantum object . the combination of these three properties makes quantum objects special within physics . a quantum object may be viewed as having a two - dimensional structure . one of the dimensions represents the collection of quantum object elements , which typically consists of 1 to n particles . in the second dimension , the quantum object consists of the set of alternatives that may be selected during the evolution of the quantum object , for example , by a measurement . in this paper , these alternatives are called `` paths '' . the two - dimensional structure is supplemented by global attributes . whereas global quantum - object attributes are attributes that apply to the complete quantum object , particle attributes apply to the complete particle . `` amplitude '' is the single attribute that applies to a complete path . the only space - point - local attributes are the attributes labeled `` particle-i.path-j '' in fig . 3 .
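the two - dimensional structure and the highest - level law described above can be rendered , purely for illustration , in the following data layout ; the field names are ours and mirror fig . 3 only loosely .

....
# illustrative layout of a quantum object (assumed field names)
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Particle:
    attributes: Dict[str, Any] = field(default_factory=dict)   # apply to the complete particle

@dataclass
class Path:
    amplitude: complex = 1.0 + 0.0j            # the single attribute of a complete path
    particle_states: List[Dict[str, Any]] = field(default_factory=list)
    # particle_states[i] holds the space-point-local "particle-i.path-j" attributes

@dataclass
class QuantumObject:
    global_attributes: Dict[str, Any] = field(default_factory=dict)  # apply to all paths and particles
    particles: List[Particle] = field(default_factory=list)          # dimension 1: 1..n particles
    paths: List[Path] = field(default_factory=list)                  # dimension 2: alternative paths
    destructed: bool = False

def apply_laws_of_qt_qft(qobject: QuantumObject) -> None:
    # highest-level law: process interactions first, then the "normal" progression
    interaction_processing(qobject)
    if not qobject.destructed:
        proceed_quantum_object(qobject)

def interaction_processing(qobject): ...       # refined in section 5.2
def proceed_quantum_object(qobject): ...       # not further specified here
....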
global attributes disturb space - point localities .similarly , the inclusion of global attributes may be unavoidable for the construction of causal models in theories that contain non - localities .the confinement of the non - localities of qt / qft within quantum objects ( by assuming the global attributes ) supports the view of quantum objects as the elementary units of causality and locality .further details on the global attributes of the quantum object are given in section 3.3 .different types of quantum objects can be distinguished : * a single particle + a single particle constitutes the simplest type of quantum object .the idea of representing a particle by a set of paths was introduced by feynman ( see ) with the formulation of quantum electrodynamics ( qed ) .* collections of ( entangled ) particles + collections of particles that can be described by a common wave function where only specific attribute combinations can occur as measurement results represent a quantum object .thus , the particle collection is represented by a set of paths , and each path contains the attribute combinations for all of the particles and an associated probability amplitude ( see fig .arbitrary particle collections whose common wave function would be the product of the individual wave functions do not constitute quantum objects .( as a consequence , considering the whole universe as a single large quantum object would not be in accordance with the definition of the term `` quantum object '' given in this paper . )+ the following additional types of quantum objects are special examples of particle collections .* interaction object + the interaction object is a type of quantum object that was described in as part of a functional model of qft interactions .it is created at the beginning of an interaction . at the end of the interaction , the interaction object is transformed into an interaction - result quantum object .* interaction result object + the result of a qft interaction ( see section 4.2 ) is a quantum object containing all of the particles resulting from the interaction and the probability amplitudes for the resulting paths .the causal model of qt / qft assumes that the interaction object develops into the interaction result object ( see section 5.2 ) . *bound system quantum object + composite objects such as hadrons , nuclei , and atoms that are built from ( elementary ) particles are incorporated into the concept of a quantum object .the elements of the composite quantum object may be grouped to form an internal structure .for example , the atom consists of the nucleus and electrons , and the nucleus consists of hadrons .if this type of a hierarchical structure is given , only the complete ( outermost ) entity is called a quantum object within this paper .quantum objects are dynamically created , separated and combined in specific processes such as interactions and decays ( see sections 4 and 5 ) .the existence of global state information that is not merely an aggregation of local information , and therefore , can not be broken down into finer local information is one of the special properties of quantum objects .the quantum - object - system state described in section 3.1 contains attributes ( i.e. 
, state parameters ) that are associated with differing entities such as the total quantum object , the particle or the path .a global attribute that is not an aggregation of space - point - local information implies that changes in the global attribute value occur instantaneously for the complete quantum object ; the alternative would be the propagation of the changes through the quantum object .thus , global attributes are required whenever instantaneous actions that affect the complete quantum object ( or the complete particle or path ) occur within the causal model .table 2 shows the types of instantaneous actions for the four `` problem areas '' of the causal model of qt / qft .| c | c | c | problem areas & instant global & locality scope + & action & + measurement , & instant reduction of & single particle + interpretation of qt & complete wave & path set + interference collapse & instant collapse of & single particle + model & interference & path set + qft - interaction & generation of & interaction object + & interaction result & ( = quantum object ) + entanglement & reduction to single & quantum object + with qft - interaction & common path & + entanglement & reduction to single & quantum object + perfect correlation & common path & + entanglement & change of global & quantum object + imperfect correlation & angular momentum & + a further reason for the inclusion of the global attributes in the system state of the quantum object is to support interactions between autonomous quantum objects according to the laws of qft . when quantum object 1 interacts with quantum object 2 , the result of the interaction depends on quantum - object parameters such as the energy , momentum and spin .the assumption of autonomous quantum objects implies that the respective parameters must be provided by the interacting quantum objects rather than by the environment .this subject is elaborated on in section 4 . for a model of an area of physics ( e.g. , qt ) , global attributes such as local ( or space - point - local ) attributes must have a physical representation .the definition of `` global information '' implies that it is not possible to map the global attributes and parameters to space points or areas of space .thus , the question arises of how the global object attributes may be physically represented if we do not employ a spatial representation .although it is not necessary to identify the physical representation of all of the parameters and attributes for the abstract formulation of the causal model of qt / qft , it certainly increases the plausibility of the model if at least some possible mapping of the essential state parameters can be provided . in the causal model of qt/ qft , the following possible representations of global ( i.e. , space - independent ) information are included : 1 . the simplest model for the global attributesis given when the specific global attribute can be equated with an attribute that exists in classical physics , such as the mass or angular momentum of a quantum object .however , the remaining problem is that the global attributes in the local causal model are typically requested to support instantaneous value changes ; this feature is not supported in classical physics .thus , it would be a deviation from ( or an extension of ) qt to assume the possibility of instantaneous value changes for a specific global attribute .a special type of quantum - object - global attribute is the proper time clock rate . 
in the causal model of qt/ qft , this attribute is assigned to the physics engine of the quantum object .it is conceivable that additional global attributes such as the ( global ) momentum and angular momentum are directly assigned to the physics engine .3 . from the point of view of qft, there are several quantum fields associated with quantum objects .in addition to the field quanta that are associated with the individual particles that belong to the quantum object , there could be object - global fields that span the overall quantum object , and it may be possible to instantly change the global attributes of such fields .the respective fields may be fields known in existing qft , for example , the gauge field .similar to most theories of physics , the descriptions of quantum theory and quantum field theories can be based on the lagrangian(s ) for the theory .therefore , it is reasonable to ask how global parameters would be reflected in the lagrangians of qt and qft .the authors finding on this question is that , if ( 1 ) the causal model contains compound objects such as quantum objects which are not merely aggregations of smaller objects ( for which the existence of global parameters is an indication ) and ( 2 ) it is reasonable to represent the related laws of physics in the lagrangian of the theory , the compound objects and the related global parameters also have to be represented explicitly in the lagrangian . in general, it may not be possible to include the compound , i.e. , higher level objects in the lagrangian that corresponds to the lower level entities . therefore , multiple lagrangians related to different levels of view may be required .for two reasons , the interactions between quantum objects are key for the local causal model of qt / qft : 1 . the causal model of the problem areas described in section 2.2 ( the measurement problem , the `` interference collapse rule '' and entanglement ) is based on the model of interactions between quantum objects. 2 . because quantum objects are assumed to run autonomously , their global relationships are mainly determined by the information exchange with interactions . in the causal model of qt / qft ,a quantum object is considered to interact with another quantum object , if shares some spacetime points with . in terms of wave equations( i.e. , the equations of motion for the particles waves ) ( see ) , an interaction between two waves and resulting in a third wave is described by an equation of motion in which the product of waves and is related to , for example , in .+ typical examples of interactions are particle scatterings such as , for example , electron - photon scattering. interactions between quantum objects consisting of particle collections ( e.g. , entangled particles or bound - system quantum objects ) must be considered as well . in general , an interaction between two quantum objects may change the interacting quantum objects to differing extents .the changes may range from changed attributes ( e.g. 
, momenta and spins ) to changes in the numbers and types of the particles .4 shows some simple types of interactions where it is assumed that only the attributes of the particles change ; the numbers and types of the outgoing particles are the same as the numbers and types of the ingoing particles .the interactions shown in fig .4 can be described by the laws of ( classical ) quantum theory and are called `` volatile interactions '' in this paper .in contrast to volatile interactions , there are also more complex interactions where the `` out '' quantum objects and/or the `` out '' particles ( which are contained in the `` out '' quantum objects ) may differ from the `` in '' quantum objects and particles ( see fig . 5 ) . because such interactions require qft ( e.g. , a scattering matrix and feynman diagrams ) in their description , they are called `` qft interactions '' within this paper . with qft interactions ,only a single path of the `` in '' particles determines the interaction result .+ the overall causal model for the treatment of interactions is as follows : .... interaction - processing : = { overall - ialist = determine - potential - interactions ( ) ; if not - empty ( overall - ialist ) { qft - ialist= determine - qft - interactions(overall - ialist ) ; if is - empty ( qft - ialist ) { for ( all interaction[i ] from overall - ialist } ) { perform - volatile - interaction ( interaction[i ] ) ; } } else { // perform qft - interaction if size(qft - ialist ) > 1 { qft - interaction = random(qft - ialist ) ; } else { qft - interaction = qft - ialist[1 ] ; } perform - qft - interaction(qft - interaction ) ; } } } .... the functions determine - potential - interactions ( ) , determine - qft - interactions ( ) , perform - volatile - interaction ( ) and perform - qft - interaction ( ) must be specified in more detail .the potential interactions are identified by space points belonging to the processed particle ; furthermore , they are shared by another particle .+ the proposed causal model of qt / qft assumes that the physics engines of the individual quantum objects determine the occurrence of a ( potential ) interaction by checking if the space points occupied by the corresponding quantum object are also occupied by other quantum objects .this checking process is feasible only if all of the ( autonomous ) quantum objects agree on a common global space coordinate system , or alternatively , if all of the space points have associated the quantum objects and fields that occupy the space point .the criteria for the determination of qft interactions are not quite clear to the author .the following criteria influence the determination of qft interactions : * the particle types , i.e. , whether qft supports interactions between the particles . * the particles probability amplitudes at the interaction position .* the particles energy . are discussed below .volatile interactions are interactions between particles known from classical quantum mechanics that change neither the type nor the number of the interacting particles .volatile interactions initially can change only attributes which are related to the space - points of the affected path of the interacting particle .the space - point - related changes may propagate to the complete quantum object or to subunits ( e.g. 
, particles ) belonging to the quantum object .4 shows some examples of volatile interactions .because the laws of qt concerning volatile interactions are relatively straightforward , this type of interaction is not discussed within this paper . with qft interactions , the paths ( i.e. , space points ) that triggered the interactionexclusively determine the outcome of the interaction . in the causal model of qt / qft, the paths that do not participate in the interaction are discarded , and the interacting particles ( not necessarily the interacting quantum objects ) are replaced by a single new quantum object called the interaction object ( " ia - object in fig . 5 ) .the destruction of the interacting particles and their replacement by the interaction object may viewed as the collapse of the wave functions .in contrast to volatile interactions ( see above ) , with qft interactions the numbers and types of the `` out '' particles may differ from the `` in '' particles .the detailed laws of physics for the treatment of qft interactions are given by qft .the overall causal model for the treatment of qft interactions is as follows : .... perform - qft - interaction ( interaction ) : = { ia - object = create - interaction - object(interaction ) ; drop - particle ( interaction.particle1 ) ; drop - particle ( interaction.particle2 ) ; for ( all other paths in quantumobject.path[i ] ) { eliminate - unaffected - paths(quantumobject.path[i ] ) ; } ia - resultobject = process - interaction - object(ia - object ) ; } .... fig .5 shows some examples of qft interactions : ( a ) the interaction between two single particles , ( b ) the interaction between a single particle and a quantum object containing two particles , and ( c ) the interaction between two quantum objects each containing two particles . in each case , the two interacting particles are first replaced by the interaction object ( `` ia - object '' in fig .the processing of the interaction object is determined by the rules and equations of qft ( e.g. , feynman diagrams , feynman rules , and fermion chains ) . however, these qft rules must be mapped to a causal model .the details of this mapping are described in and .the overall result of the qft interaction is embraced in a single particle collection ( i.e. , a quantum object ) , which we call the ia - resultobject .the ia - resultobject typically contains two particles ( which can be of the same type as the ingoing particles ) and multiple paths .the generation of a single particle collection ensures proper correlations with the alternative outcomes of the interaction .more details on process - interactionobject(ia - object ) are given in section 5.2 . in cases where one of the `` in '' quantum objects consists of multiple particles ,the `` ia - resultobject '' may further merge with the ( remainder of the ) `` in '' quantum objects as indicated by ( b.1 ) , ( c.1 ) and ( c.2 ) in fig .5 . in cases where one or both of the interacting quantum objects contain more than a single particle ( see fig . 5, cases ( b ) and ( c ) ) , the question arises whether the ia - resultobject may contain more particles in addition to the ( replacement of ) the directly interacting particles . 
the causal model of qt / qft supports this possibility by a continued processing following the creation of the ia - resultobject .5 , this event is indicated by the steps labeled ( b.1 ) , ( c.1 ) and ( c.2 ) .the major features of qft interactions can be summarized as follows : * a qft interaction is caused by single definite space - time position ( i.e. , single paths of interacting quantum objects ) .* among interacting quantum objects , at most a single qft interaction can occur at a specific point in time .this point in time is locally determined .local simultaneity is well defined for the quantum object . *although each interaction starts as a path - path interaction , it may evolve into an interaction involving larger scopes .* qft interactions are always path - related and particle - related ( at least initially ) . * measurements typically require at least one qft interaction between the measured quantum object and a quantum object belonging to the measurement apparatus . table 3 summarizes the major interaction cases .| c | c | c | c | c | c | interaction type & object-1 & object-2 & causal model actions & examples & qt / qft + volatile & particle & atom & no path reduction , & quantum optics & qm + int ., case ( b ) & & & no separation & & + volatile & atom & atom & no path reduction , & thermodynamics , & qm + int . , case(c ) & & & no separation & atom scattering & + qft & particle & particle & reduction , & double - slit , & qft + interaction & & & disentanglement , & measurement , & + case ( a ) & & & interaction object & & + & & & ( type change ) & & + qft , & particle & atom , & reduction , & particle absorption & qft + interaction & & entangled & disentanglement , & by atom , measurement & ext . + case ( b ) & & particles & interaction object & of entangled part . & + qft , & atom & atom & reduction , & scattering of & qft + interaction & & & disentanglement , & atoms & ext .+ case ( c ) & & & interaction object & & + + comments on table 3 : * the interaction types `` volatile , case(b ) '' and `` volatile , case ( c ) '' correspond to the examples shown in fig .4 . the interaction types `` qft , case(a ) '' , `` qft , case(b ) '' and `` qft , case ( c ) '' correspond to the examples shown in fig`` qm '' means quantum mechanics .qft techniques ( e.g. , feynman diagrams ) are not required to specify the processing .* `` qft ext . '' stands for qft extended and means that the existing standard qft does not enable the derivation of a causal model for these types of interactions .is mainly based on the assumption that to obtain information about the measured quantum object , a qt measurement involves at least one qft interaction between the measured quantum object and the measurement apparatus .all of the peculiarities reflected in the qt measurement problem ( e.g. , the collapse of the wave function and the inability to measure certain observables concurrently ) can be explained by the causal model of qft interactions ( see ) . the epr experiment ( see ) , which is illustrated in fig .6 , measures the spins of two entangled particles ( photons assumed in fig .actual experiments such as verified that the measurement result obtained with particle 1 influences the measurement result of particle 2 . 
with the causal model of qt/ qft , this correlation of the measurement results can partly be explained by the assumption that the two photons belong to a common quantum object with common paths for photon 1 and photon 2 .the complete causal model of the epr experiment includes a model of imperfect correlations and assumes that the spin direction is a ( quantum - object- ) global parameter and the change of the global spin by the polarizers ( see ) . which is similar to the causal model of measurement, assumes that the collapse of the interference , which occurs with special variants of the experiment , results from a qft interaction occurring within one of the paths of the electron ( see ) .a quantum object s internal dynamics can only partly be described in the form of a local causal model with precisely defined state transitions , a uniform state update frequency and a locality restricted to a space - point locality . nevertheless , it is possible to predict the final states of processes ( such as particle scattering ) , and it should be possible to identify certain intermediate states associated to sub - processes . as the first level of subdivision , the proposed local causal model distinguishes a number of phases that a quantum object may pass or remain in : 1 .creation of the quantum object 2 .exchange of virtual particles ( `` feynman phase '' ) 3 .entangled real particles ( `` epr phase '' ) 4 .non - entangled real particles ( `` heisenberg - schrdinger phase '' ) depending on the type of quantum object , only part of the above list of phases is applicable . for the most general case ,i.e. , the interaction object , the complete list of phases applies ( see section 5.2 ) .the next important type of quantum object is the bound - system quantum object , where mainly the phase `` exchange of virtual particles ( feynman phase ) '' is of interest .quantum objects are created , transformed and extended as the result of interactions between quantum objects or decays of quantum objects .the creation of the interaction object is the most important case of a quantum object s creation . with the creation of the interaction object ,the information from the two interacting particles is merged into the new interaction object . after a quantum object that consists of multiple particles ( such as an interaction object )is created , the quantum object s internal dynamics start with processes that are described by the laws of qft. however , with qft the rules that govern these processes are defined in terms of ( external and internal ) lines and vertices of feynman diagrams . for the causal model of qt / qft , the qft rules have to be mapped to rules regarding the components and attributes of the interaction object .as described in section 2.3 , the causal model of qt / qft is formulated largely in terms of discrete system state parameters ( space , time , and paths ) .this formulation enables the utilization ( and sometimes requires the application ) of some of the findings of lattice gauge theory ( see , for example , ) . in standard qft , bound systems ( such as collections of quarks ) can best be handled by the application of the concepts of lattice gauge theory . in , a more detailed description of a causal model of qft interactionsis given with a focus on the feynman phase . 
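purely as an illustration , the phase structure listed above can be written as a small state progression ; the transition tests used here are placeholders for the actual ( partly statistical ) criteria described in the following paragraphs .

....
# sketch of the phases a quantum object may pass or remain in (assumed transition tests)
from enum import Enum, auto

class Phase(Enum):
    CREATION = auto()
    FEYNMAN = auto()                  # exchange of virtual particles
    EPR = auto()                      # entangled real particles
    HEISENBERG_SCHROEDINGER = auto()  # non-entangled real particles

def next_phase(phase, qobject):
    if phase is Phase.CREATION:
        return Phase.FEYNMAN
    if phase is Phase.FEYNMAN and left_local_interaction_range(qobject):
        return Phase.EPR              # statistical transition (see section 5.2)
    if phase is Phase.EPR and qft_interaction_occurred(qobject):
        return Phase.HEISENBERG_SCHROEDINGER   # entanglement terminated
    return phase

def left_local_interaction_range(qobject): return False   # placeholder
def qft_interaction_occurred(qobject): return False       # placeholder
....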
during the feynman phase ,the distances between the ( virtual ) particles that compose the interaction object are assumed to be sufficiently short that permanent interactions between the ( virtual ) particles occur .this phase ( the feynman phase ) ends when a portion of the particles leaves the scope of the local interactions . after leaving the feynman phase , the causal model of qt / qftassumes that the collection of particles remains a quantum object , which means that the particles continue to be entangled through common alternative paths . here, the classical behavior of a non - entangled collection of particles is called the `` heisenberg - schrdinger phase '' .termination of the entanglement occurs when a particle that is part of a quantum object interacts with another quantum object via a qft interaction .the termination of the entanglement typically implies the termination of the original quantum object .the termination of an existing quantum object always results in the formation of at least one new quantum object . in section 4 ,various cases of interactions between quantum objects with different resulting new quantum objects are discussed .the processing of qft - interactions ( see section 4.2 ) includes the following three phases : 1 .creation of the interaction object , 2 .processing of the interaction object ( feynman phase ) , 3 . interaction result object ( epr phase ) .the major phase is the processing of the interaction object ( the feynman phase ) .the transition from the feynman phase to the epr phase is a statistical process that could possibly be better understood by studying the discreteness of the causal model and its relationship to lattice gauge theory .the evolution of the bound - system quantum object is mainly within the feynman phase , i.e. , internal interactions between adjacent ( virtual ) particles belonging to the bound - system quantum object. the feynman phase may be ( temporarily or permanently ) interrupted by interactions with other quantum objects or by the decay of the bound - system quantum object . in standard qft , bound systems ( such as collections of quarks )can best be handled by the application of the concepts of lattice gauge theory .although the integration of space and time as introduced with the theories of relativity remains valid in general , in the causal model of qt / qft there are situations where the treatment of time differs from that of space . as the major consequence of the autonomy of quantum objects , the time structure is composed of local time units based on the quantum object s proper time .in contrast , space serves as the global medium for all inter - object relationships . in the proposed causal model ,the progression of time is determined by the update frequency of the physics engine of the quantum object . because each quantum object is associated with its own physics engine , the quantum object is the smallest unit of simultaneity .inertial systems composed of quantum objects with equal velocity are compound units of simultaneity .the proper time clock rate ( i.e. , the update frequency of the physics engine ) of the quantum object is initially set when the quantum object is created as the result of a qft interaction or a decay .the proper time interval is only modified due to interactions with other quantum objects or due to interactions with fields . the quantum object sproper time determines both the speed of all object - local state changes and the speed of the position changes within space . 
by determining the speed of the state changes and the speed of the position changes ,the proper time interval in conjunction with the mass of the quantum object determines completely the quantum object s energy ._ physics engine clock rate = 1/proper time interval energy _ under the assumption that quantum objects are autonomous , a global medium must support the implementation of global physical processes and object interrelationships . in physics in general , this role is given to space . to support this role, the quantum objects must refer to space - related parameters in terms of a globally agreed addressing scheme .this setup includes references to spatial positions ( i.e. , the coordinates ) , the speed of the spatial position changes ( i.e. , velocities ) , and directions in space . in the proposed causal model , the following assumptions are made concerning space : * space is an active object , i.e. , its evolution is driven by a physics engine with a global proper time interval .* space is a superposition of all fields , i.e. , fields are mapped to space and are updated by the global physics engine of the space .* a field expansion may imply spatial expansion or may occur within the existing space . *mathematically , the space may be represented by a manifold ( including the curvature ) .* space changes dynamically , and it expands according to the expansions of the embedded fields .the spatial structure changes as the result of changes ( e.g. , position changes ) of the embedded quantum objects . as required by general relativity, the energy ( including masses ) distribution of the clusters of quantum objects determines the structure of the space .* space does not shrink or disappear ( even if the originating fields disappear ) .in terms of the causal model , the phrase `` a quantum object is autonomous '' means that each quantum object is driven by its individual , private physics engine .thus , each quantum object has its individual proper time clock rate , and the laws of physics ( i.e. , the laws of qt and qft ) that determine the quantum object s dynamics must as much as possible depend on quantum - object - local system state only .the autonomy of the quantum object ends where the global relationships between quantum objects and the relationships between quantum objects and space ( with the associated fields ) are considered : * the space , which is the single global - system state component , may affect the dynamics of the quantum object .* the interactions between quantum objects and the interactions between quantum objects and fields may terminate a quantum object , change the state of a quantum object or result in the creation of new quantum objects .if a black hole has an internal structure and internal dynamics , it appears obvious that the laws of qt must apply to these internal relationships .therefore , a black hole should be viewed as a quantum object .if a black hole is considered to be a quantum object , then it would adhere to the space - time concept described above ( section 6 ) .consequently , a black hole represents not a space - time singularity , but an elementary unit of space - time ( in common with quantum objects in general ) .in addition , new thoughts on the subject of black hole evaporation may be appropriate .it is an open question whether a black hole can have multiple paths and entangled particles .if the black hole is considered to be a quantum object , it would be a kind of bound system quantum object . 
as described in section 5.3 ,bound system quantum objects remain in the feynman phase . in the literature on qt , particles are frequently called quanta of fields ( see ) .fields have much in common with particles and particle collections ( i.e. , quantum objects ) .therefore , we must ask whether it is reasonable to consider a field as a type of quantum object . for the causal model of qt / qft and the quantum - object model described in this paper ,it has been decided not to extend the definition of a quantum object to include fields , but rather to consider fields as state components that are associated to space ( see section 6.2 ) .the authors attempt to construct a local causal model of qt / qft resulted in the identification of quantum objects as suitable elementary objects for this type of model .the quantum object is not merely a suitable construct for the description of the proposed model .instead , the assumption of quantum objects has implications that enable new concepts and solutions in a number of ( problem ) areas in qt / qft , which is especially true if quantum objects are assumed to be autonomous entities ( see section 7.1 ) . for the model definition ( and description ), the assumption of autonomous quantum objects requires strict separation of ( a ) quantum objects internal state components , attributes and processes and ( b ) inter - quantum - object physical relationships ; the latter are represented in interactions between quantum objects and interactions between quantum objects and space .the introduction of quantum objects has also been motivated by the finding that a local causal model of qt / qft is not feasible if a strong interpretation of causality ( i.e. , precisely defined state transitions and a uniform state update frequency ) and locality ( i.e. , space - point locality ) is assumed .however , a local causal model of qt / qft is achievable if a weaker interpretation of causality and locality is tolerated within quantum objects .this possibility justifies the view of quantum objects as elementary units of locality and causality . 5 aspect , a. , dalibard , j. and roger , g .. 1982 .letters 1982 , 49 , 1804 - 7 bell , j.s .`` speakable and unspeakable in quantum mechanics '' , cambridge university press diel , h. 2013 .on the computabilty of quantum theory , international journal of computational physical sciences .issn 0976 - 5875 volume 4 , number 1 , 7 - 22 ( or see http://www.hans-diel.de/qtcomputabilityrip ) diel , h. 2015 . a functional model of interactions in quantum theory .http://arxiv 1405 : 6099v2 diel , h. 2016 .are local causal models of quantum theory feasible at all ?, arxiv:1604.03959v1 diel , h. 2015 .an improved interference collapse rule of quantum mechanics , open access library journal , 2 : e1838 .diel , h. 2014 . a model of the measurement process in quantum theory , arxiv1405:5982v1 diel , h. 2015 .a lagrangian - driven cellular automaton supporting quantum field theory .http://arxiv.org/abs/1507.08277 einstein , a. , podolsky , b. , rosen , n. , phys .47 , 777 , 1935 kogut , j. 1983 .the lattice gauge theory approach to quantum chromodynamics , rev.mod .55 feynman , r.p .qed : the strange theory of light and matter , princeton university press griffiths , d .einfuehrung in die elementarteilchenphysik , wiley - vch mandl , f. , shaw , g. 1993 . _quantenfeldtheorie_. wiesbaden : aula verlag .mcmahon , d. 2008 ._ quantum field theory demystified_. 
new york ny : mcgraw hill . ryder , l.h . quantum field theory , chapter 3 , cambridge university press , cambridge . strassler , m. 2015 . conversations about science with theoretical physicist m. strassler , `` of particular significance '' ( blog )
|
the author s attempt to construct a local causal model of quantum theory ( qt ) that includes quantum field theory ( qft ) resulted in the identification of `` quantum objects '' as the elementary units of causality and locality . quantum objects are collections of particles ( including single particles ) whose collective dynamics and measurement results can only be described by the laws of qt and qft . local causal models of quantum objects internal dynamics are not possible if a locality is understood as a space - point locality . within quantum objects , state transitions may occur which instantly affect the whole quantum object . the identification of quantum objects as the elementary units of causality and locality has two primary implications for a causal model of qt and qft : ( 1 ) quantum objects run autonomously with system - state update frequencies based on their local proper times and with either no or minimal dependency on external parameters . ( 2 ) the laws of physics that describe global ( but relativistic ) interrelationships must be translated to a causal model of interactions between quantum objects and interactions between quantum objects and the space . keywords : models of quantum theory , causal model , local model , entanglement , quantum field theory
|
a representation in the form of a linear superposition of elements of a vector space is said to be _ sparse _ if the number of elements in the superposition is small , in comparison to the dimension of the corresponding space .the interest for sparse representations has enormously increased the last few years , in large part due to their convenience for signal processing techniques and the results produced by the theory of compressed sensing with regard to the reconstruction of sparse signals from non - adaptive measurements .furthermore , the classical problem of expressing a signal as a linear superposition of elements taken from an orthogonal basis has been extended to consider the problem of expressing a signal as a linear superposition of elements , called _ atoms _ , taken from a redundant set , called _ dictionary _ . the corresponding signal approximation in terms of highly correlated atoms is said to be _ highly nonlinear _ and has been proved relevant to signal processing applications . moreover , a formal mathematical setting for highly nonlinear approximations is being developed . as a small sample of relevant literaturelet us mention . in regard to sparse approximations there are two main problems to be looked at ; one is in relation to the design of suitable algorithms for finding the sparse approximation , and other the construction of the dictionaries endowing the approximation with the property of sparsity . in this communicationwe consider the sparse representation matter for the large class of signals which are amenable to satisfactory approximation in spline spaces . given a signal, we have the double goal of a ) finding a spline space for approximating the signal and b ) constructing those dictionaries for the space which are capable of providing a sparse representation of such a signal . in order to achieve both aimswe first discuss the construction of dictionaries of b - spline functions for non - uniform partitions , because the usual choice , the b - spline basis for the space , is not expected to yield sparse representations . in a previous publication a prescription for constructing b - spline dictionaries on the compact intervalis advised by restricting considerations to uniform partitions ( cardinal spline spaces ) .since our aim entails to relax this restriction , we are forced to look at the problem from a different perspective . herewe divide the partition into subpartitions and construct the dictionary by joining together the bases for the subspaces associated to each subpartition .the resulting dictionary is proved to span the spline space for the given non - uniform partition .consequently , the uniform case considered in arises as a particular case of this general construction .the capability of the proposed nonuniform dictionaries to produce sparse representations is illustrated by a number of examples .the letter is organized as follows : section 2 introduces splines spaces and gives the necessary definitions .the property of splines spaces which provides us with the foundations for the construction of the proposed dictionaries is proven in this section ( c.f .theorem 2 ) . for a fixed partition , the actual constructions of non - uniform b - spline dictionariesis discussed in section 3 .section 4 addresses the problem of finding the appropriate partition giving rise to the spline space suitable for approximating a given signal . 
in the same section a number of examples are presented , which illustrate an important feature of dictionaries for the adapted spaces .namely , they may render a very significant gain in the sparseness of the representation of those signals which are well approximated in the corresponding space .the conclusions are drawn in section 5 .we refer to the fundamental books for a complete treatment of splines . herewe simply introduce the adopted notation and the basic definitions which are needed for presenting our results . given a finite closed interval ] as the finite set of points we further define subintervals as : and ] .it holds to be true that it stems from theorem 1 and the basic result of linear algebra establishing that for and two sets such that and , one has . certainly , from theorem 1 and for we have : , . hence so that , using theorem 1 on the right hand side , the proof is concluded .the next corollary is a direct consequence of the above theorem .[ cr:1 ] suppose that are partitions of ] and .we denote the b - spline basis for as .accordingly , a dictionary , , for can be constructed as so as to satisfy when , is reduced to the b - spline basis of .it immediately follows from corollary [ cr:1 ] . indeed, note that the number of functions in the above defined dictionary is equal to , which is larger than .hence , excluding the trivial case , the dictionary constitutes a redundant dictionary for . according to theorem [ th : dic ] , to build a dictionary for we need to choose -subpartitions such that .this gives a great deal of freedom for the actual construction of a non - uniform b - spline dictionary .1 shows some examples which are produced by generating a random partition of ] with distance between adjacent points such that , i.e. , is composed by the inner knots .we also consider the partition with knots , where and is a fixed integer in ] , the discrete signal , the subdivision level .the partition . according to theorem [ th : dic ] , in order to build a dictionary for we need to choose -subpartitions such that . as an example we suggest a simple method for producing -subpartitions , which is used in the numerical simulations of the next section .considering the partition such that , for each integer in ] plotted in the top left of fig . 1 . * a seismic signal plotted in the top right graph of fig . 1 .this signal was taken from the wavelab802 toolbox .it is acknowledged there that the signal is distributed throughout the seismic industry as a test dataset . * a cosine function of random phase where is the piecewise constant function depicted in the bottom right of fig . 1. the left graph corresponds to the signal .[ f2 ] = 7.5 cm = 7.5 cm + = 7.5 cm = 7.5 cm the three signals are to be approximated up to a tolerance for the norm of the approximation s error .we deal with the chirp signal on the interval $ ] , by discretizing it into samples and applying algorithm 1 to produce the partition .the resulting number of knots is , which is enough to approximate the signal , by a cubic b - spline basis for the space , within the above specified precision .a dictionary for the identical space is constructed by considering 10 subpartitions , which yield functions .the signal is a piece of data .a partition of cardinality is obtained as and the dictionary of cubic splines we have used arises by considering subpartitions , which yields a dictionary of cardinality .the signal is discretized into samples .the partition produces knots . 
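a possible numerical realization of this construction is sketched below using scipy 's b - spline routines ; the helper names are ours , and the particular way of splitting the inner knots into subpartitions ( every n_sub - th knot , with a different offset for each subpartition ) is only one plausible instance of the simple method described above .

....
# illustrative construction of a non-uniform b-spline dictionary (assumed helper names)
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(inner_knots, a, b, order):
    """b-spline basis of the given order (= degree + 1) for the partition
    a < inner_knots[0] < ... < inner_knots[-1] < b."""
    t = np.concatenate(([a] * order, np.asarray(inner_knots, float), [b] * order))
    return [BSpline.basis_element(t[j:j + order + 1], extrapolate=False)
            for j in range(len(t) - order)]

def bspline_dictionary(inner_knots, a, b, order, n_sub):
    """merge the bases of n_sub subpartitions (here: every n_sub-th inner knot,
    one subpartition per offset, so that their union is the full partition)."""
    atoms = []
    for offset in range(n_sub):
        atoms.extend(bspline_basis(inner_knots[offset::n_sub], a, b, order))
    return atoms
....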
using subpartitionswe build a dictionary of linear b - spline functions . denoting by the atoms of the - dictionary, we look now for the subsets of indices of cardinality providing us with a sparse representation of the signals . in other words, we are interested in the approximations such that and the values are satisfactory small for the approximation to be considered sparse . since the problem of finding the sparsest solution is intractable , for all the signals we look for a satisfactory sparse representation using the same greedy strategy , which evolves by selecting atoms through stepwise minimization of the residual error as follows .i)the atoms are selected one by one according to the optimized orthogonal matching pursuit ( oomp ) method until the above defined tolerance for the norm of the residual error is reached .ii)the previous approximation is improved , without greatly increasing the computational cost , by a ` swapping refinement ' which at each step interchanges one atom of the atomic decomposition with a dictionary atom , provided that the operation decreases the norm of the residual error .iii)a backward - optimized orthogonal matching pursuit ( boomp ) method is applied to disregard some coefficients of the atomic decomposition , in order to produce an approximation up to the error of stage i ) .the last two steps are repeated until no further swapping is possible .let us stress that , if steps ii ) and iii ) can be executed at least once , the above strategy guarantees an improvement upon the results of oomp .the gain is with respect to the number of atoms involved in the approximation for the given error s norm .the described technique is applied to all the non - orthogonal dictionaries we have considered for comparison with the proposed approach .the results are shown in table 1 . in the first columnwe place the dictionaries to be compared .these are : 1 ) the spline basis for the space adapted to the corresponding signal , as proposed in sec 4 .as already mentioned , for signals and we use cubic b - splines and for signal the linear one .2 ) the dictionary for the identical spaces consisting of functions of larger support .3 ) the orthogonal cosine bases used by the discrete cosine transform ( dct ) .4 ) the semi - orthogonal cardinal chui - wang spline wavelet basis and 5 ) the chui - wang cardinal spline dictionary for the same space . for signals and we use cubic spline wavelets and for signal linear spline wavelets . 
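step i ) of the strategy described above can be illustrated by the stripped - down greedy pursuit below ; it uses a plain least - squares re - projection instead of the optimized recursive biorthogonalization of oomp , and it omits the swapping and boomp refinements of steps ii ) and iii ) , so it should be read only as a simplified sketch .

....
# simplified greedy pursuit (orthogonal-matching-pursuit style), for illustration only
import numpy as np

def greedy_sparse_approximation(D, f, tol):
    """D: n x K matrix whose columns are the normalized dictionary atoms,
    f: signal of length n, tol: tolerance on the norm of the residual."""
    selected, residual = [], f.copy()
    coeffs = np.zeros(0)
    while np.linalg.norm(residual) > tol and len(selected) < D.shape[1]:
        j = int(np.argmax(np.abs(D.T @ residual)))     # atom most correlated with the residual
        if j in selected:
            break
        selected.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, selected], f, rcond=None)  # re-project on the selected atoms
        residual = f - D[:, selected] @ coeffs
    return selected, coeffs
....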
the last three columns of table 1 display the number of atoms involved in the atomic decomposition for each test signal and for each dictionary . these numbers clearly show a remarkable performance of the approximation produced by the proposed non - uniform b - spline dictionaries . notice that whilst the non - uniform spline space is adapted to the corresponding signal , only the dictionary for the space achieves the sparse representation . moreover , the performance is superior to that of the chui - wang spline wavelet basis even for signal , which was specially selected because , due to the abrupt edges delimiting the smooth lines , it is very appropriate to be approximated by wavelets . it is also worth stressing that for signal and the performance is similar to the cardinal chui - wang dictionary , which is known to render a very good representation for these signals . however , whilst the chui - wang cardinal spline wavelet dictionaries introduced in are significantly redundant with respect to the corresponding basis ( about twice as large ) , the non - uniform b - spline dictionaries introduced here contain a few more functions than the basis .
|
non - uniform b - spline dictionaries on a compact interval are discussed . for each given partition , dictionaries of b - spline functions for the corresponding spline space are constructed . it is asserted that , by dividing the given partition into subpartitions and joining together the bases for the concomitant subspaces , slightly redundant dictionaries of b - splines functions are obtained . such dictionaries are proved to span the spline space associated to the given partition . the proposed construction is shown to be potentially useful for the purpose of sparse signal representation . with that goal in mind , spline spaces specially adapted to produce a sparse representation of a given signal are considered .
|
in this paper , we explore a class of abstractions for non - linear autonomous systems ( continuous , discrete and hybrid systems ) using _ change - of - bases _ ( cob ) transformations . cob transformations are obtained for a given system by expressing the dynamics of the system in terms of a new set of variables that relate to the original system variables through the cob transformation . such a transformation is akin to studying the system under a new set of `` bases '' . we derive conditions on the transformations such that ( a ) the cob transformations also define an _ autonomous system _ and ( b ) the resulting system abstracts the original system , i.e. , all invariants of the abstract system can be transformed into invariants for the original system . furthermore , we often seek abstract systems through cob transformations whose dynamics are of a simpler form , more amenable to automatic verification techniques . for instance , it is possible to use cob transformations that relate an ode with non - linear right - hand sides to an affine ode , or transformations that reduce the degree of a system with polynomial right - hand sides . if such transformations can be found , then safety analysis techniques over the simpler abstract system can be used to infer safety properties of the original system . in this paper , we make three main contributions : ( a ) we define cob transformations for continuous , discrete and hybrid systems and provide conditions under which a given transformation is valid ; ( b ) we provide search techniques for finding cob transformations that result in a polynomial system whose right - hand sides have degree bounded by a given limit , the degree - one case yielding an affine abstraction ; and ( c ) we provide experimental evidence of the application of our techniques to a variety of ordinary differential equations ( odes ) and discrete programs . the results in this paper extend our previously published results that appeared in hscc 2011 . the contributions of this paper include ( a ) an extension from linearizing cob transformations to degree - bounded polynomial cob transformations , ( b ) an extension of the theory from purely continuous systems to discrete and hybrid systems , and ( c ) an improved implementation that can handle hybrid systems , with some evaluation results using this implementation . on the other hand , our previous work also included an extension of the theory to differential inequalities and iterative techniques over cones . these extensions are omitted here in favor of an extended treatment of the theory of differential equation abstractions for continuous , discrete and hybrid systems .
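as a small illustration of the closure condition behind point ( a ) , the sketch below uses sympy to test whether a candidate set of polynomial change - of - bases functions turns a given polynomial vector field into an affine system : it computes the lie derivative of each candidate function along the field and tries to re - express it as an affine combination of the candidates . the system and the candidate functions are a toy example invented for this sketch ( chosen so that the closure exists ) ; they are not the examples of the paper , and the routine is only a naive check , not the search procedure developed later .

    import sympy as sp

    x, y = sp.symbols('x y')
    f = [y - x**2, -x + 2*x*y - 2*x**3]     # toy non-linear vector field (invented)
    alphas = [x, y - x**2]                  # candidate change-of-bases functions

    def lie_derivative(h, f, xs):
        return sp.expand(sum(sp.diff(h, xi) * fi for xi, fi in zip(xs, f)))

    def affine_closure(alphas, f, xs):
        # try to write the lie derivative of every alpha_i as an affine combination
        # of the alphas; return (A, b) of the abstract system w' = A w + b, or None
        cs = sp.symbols('c0:%d' % len(alphas))
        c_off = sp.Symbol('c_off')
        A, b = [], []
        for h in alphas:
            residual = lie_derivative(h, f, xs) - sum(c * a for c, a in zip(cs, alphas)) - c_off
            eqs = sp.Poly(residual, *xs).coeffs()   # every monomial coefficient must vanish
            sol = sp.solve(eqs, (*cs, c_off), dict=True)
            if not sol:
                return None
            A.append([sol[0].get(c, sp.Integer(0)) for c in cs])
            b.append(sol[0].get(c_off, sp.Integer(0)))
        return sp.Matrix(A), sp.Matrix(b)

    print(affine_closure(alphas, f, (x, y)))
    # expected: (Matrix([[0, 1], [-1, 0]]), Matrix([[0], [0]])), i.e. w1' = w2, w2' = -w1

in this toy case the abstract system is the harmonic oscillator w1' = w2 , w2' = -w1 , whose conserved quantity w1^2 + w2^2 maps back to the non - linear invariant x^2 + ( y - x^2 )^2 = const for the original system — the same mechanism by which invariants of the abstraction yield invariants of the concrete system .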
in this section ,we motivate the techniques developed in this paper by means of a few illustrative examples involving purely continuous odes and purely discrete programs .our first example concerns a continuous system defined by a system of ordinary differential equations ( odes ) : [ ex : motivating - example ] consider a continuous system over : , with initial conditions given by the set ,\ y \in [ 0,1] ] .we analyze the system using the timepass tool as presented in our previous work to obtain polyhedral invariants : substituting back , we can infer polynomial inequality invariants on the original system including , finally , we integrate the linear system to infer the following conserved quantity for the underlying non - linear system : finally , if , the map is invertible and therefore , the ode above can be integrated .note that not every transformation yields a linear abstraction .in fact , most transformations will not define an abstraction .the conditions for an abstraction are discussed in section [ section : covabstraction ] . [ cols="^,^ " , ]thus far , we have presented an approach that uses change - of - bases transformation for inferring abstractions of continuous , discrete and hybrid systems .we have explored the theoretical underpinnings of our approach , its connections to various invariant generation techniques presented earlier .our previous work presents an extension of the approach presented in this paper to infer differential inequality abstractions .similar extensions for discrete systems remain unexplored .furthermore , the use of the abstractions presented here to establish termination for transition systems is also a promising line of future research .future research will also focus on the use of lie symmetries to reduce the size of the ansatz or templates used in the search for conserved quantities and cob transformations .r. bagnara , e. rodrguez - carbonell , and e. zaffanella .generation of basic semi - algebraic invariants using convex polyhedra . in _12th international symposium on static analysis ( sas05 ) _ , volume 3672 of _ lecture notes in computer science _ , pages 1934 .springer - verlag , sept .s. berman , a. halasz , and v. kumar .a reachability algorithm for multi - affine systems with applications to biological systems . in _ hybrid systems : computation and control _ , volume 4416 , pages 7689 . springer verlag , 2007 .g. collins .quantifier elimination for real closed fields by cylindrical algebraic decomposition . in h.brakhage , editor ,_ automata theory and formal languages _ ,volume 33 of _ lncs _ , pages 134183 .springer , 1975 . .gktas and w. a. hereman .symbolic computation of conservation laws , generalized symmetries , and recursion operators for nonlinear differential - difference equations . in _ dynamical systems and methods_. springer verlag , 2011 .l. kovcs .reasoning algebraically about p - solvable loops . in _ tools and algorithms for the construction and analysis of systems ( tacas ) _ , volume 4963 of _ lecture notes in computer science _ , pages 249264 .springer , 2008 .m. oishi , i. mitchell , a. m. bayen , and c. j. tomlin .invariance - preserving abstractions of hybrid systems : application to user interface design ._ ieee trans . on control systems technology_ , 160 ( 2 ) , mar 2008 .
|
we present abstraction techniques that transform a given non - linear dynamical system into a linear system or an algebraic system described by polynomials of bounded degree , such that , invariant properties of the resulting abstraction can be used to infer invariants for the original system . the abstraction techniques rely on a change - of - basis transformation that associates each state variable of the abstract system with a function involving the state variables of the original system . we present conditions under which a given change of basis transformation for a non - linear system can define an abstraction . furthermore , the techniques developed here apply to continuous systems defined by ordinary differential equations ( odes ) , discrete systems defined by transition systems and hybrid systems that combine continuous as well as discrete subsystems . the techniques presented here allow us to discover , given a non - linear system , if a change of bases transformation involving degree - bounded polynomials yielding an algebraic abstraction exists . if so , our technique yields the resulting abstract system , as well . this approach is further extended to search for a change of bases transformation that abstracts a given non - linear system into a system of linear differential inclusions . our techniques enable the use of analysis techniques for linear systems to infer invariants for non - linear systems . we present preliminary evidence of the practical feasibility of our ideas using a prototype implementation .
|
interference is inherent to any fully connected multi - user wireless network .as the number of devices sharing the spectrum with high rate demands grows , wireless networks become more and more interference limited .the significance of interference on the operation of a wireless network renders it natural to focus on its high snr performance to obtain design insights and characterize the interaction between the signals .thus , degrees of freedom ( dof ) , which characterizes the scaling of the transmission rates of wireless networks in high signal to noise ratio ( snr ) regime , is an important metric to measure the performance of an interference - limited system .interference alignment was shown to achieve the optimal dof for a variety of interference - limited wireless networks . in reference , the authors have shown that the optimal dof can be achieved for the 2-user multiple input multiple output ( mimo ) x channel with antennas at each node , using symbol extensions and interference alignment , demonstrating the achievability of non - integer dof with constant channel for . for with constant channels , the dof is shown to be achievable in .reference further generalized the result to the user x channel , and showed that the optimal dof is with single antenna nodes and a time - varying channel .reference showed that interference alignment achieves the optimal dof of the -user interference channel , , with single antenna nodes and time - varying channel .follow up studies on the dof of the interference channels , for example , the simo interference channel , the -user mimo interference channel , and interference channel with cooperation and cognition , can be found in references . to effectively implement interference alignment , it is crucial to have global instant csit which can be difficult to obtain for practical systems. reference has studied the dof region of the -user mimo broadcast channel and the -user mimo interference channel without csit , and loss of dof is observed for many scenarios of interest .reference has further generalized the results to -user broadcast and interference channels , and also derived outerbounds on dof region for the -user x channel .this reference has established that without csit , the transmitters can not steer the signals to the exact desired directions to guarantee that the interference is aligned together at the receivers , which causes the performance degradation in terms of dof . while loss of dof is observed when no csit is available at the transmitters , reference has observed that as long as the channel s correlation structure is known at the transmitters , without any knowledge of the exact channel coefficient , interference alignment is still possible for certain wireless networks .reference has further developed this idea and proposed the blind interference alignment strategies using staggered antennas , which can artificially create the desired channel correlation pattern by switching the antennas used by the receivers . for systems where csit is completely unknownhowever , loss of dof appears to be inevitable .a more practical assumption about csit is that the transmitter may have delayed csi .the delayed csit model characterizes the channel variation and the delay in the feedback of csi from receivers , and thus is important from both theoretical and practical perspective .the delayed csit assumption is first studied in the context of the -user broadcast channel , i.e. 
, a channel with a transmitter having antennas and receivers each with a single antenna , where the transmitter has accurate and global csit delayed by several time slots .it is shown that the delayed csit can be useful for interference alignment and the dof can be improved significantly compared to the case without csit .this delayed csit assumption is then applied to various channel models such as the general broadcast channels , interference channels and x channels , and improvement on the dof compared to the cases without csit can be found in references .the delayed csit is an interesting assumption which , in fact , shows that feedback of delayed csi can provide capacity gain for multi - destination wireless networks , which is in contrast with various single - destination models .however , there is a performance degradation with the delayed csit assumption compared to when global csit is available .for example , the dof for the -user broadcast channel with delayed csit is shown in to be , whereas with global csit , the optimal dof is .the operation of relaying , although is beneficial in improving the achievable rates for many multi - user wireless networks , is shown in reference to be unable to provide dof gain for the fully connected interference channel and x channel with full csi at all nodes . in this context , relaying is shown to be useful only to facilitate interference alignment for some specific scenarios .for quasi - static channels , references have proposed strategies to utilize the relay to randomize the channel coefficients at the receivers , and the optimal dof can be achieved although the channel is not time varying within the transmission blocks .reference has proposed relay - aided interference alignment schemes that can achieve the optimal dof of the -user interference channel with finite time extensions . for networks that have limited csi ,it is shown in reference that using a relay , the optimal dof for the -user interference channel can be achieved when all the nodes have local csi only , provided that the relay has more antennas than the total number of single - antenna transmitters . for networks that are not fully connected , for example ,the multi - hop relay networks , references have studied the dof under either global csit or delayed csit assumptions .whereas the study of relaying on the dof of fully connected wireless networks so far focused on using relays to facilitate interference alignment , in this work , we aim to theoretically study the impact of relaying on the dof from another perspective .we focus on understanding whether , and to what extent , relays can improve the dof of wireless networks when the source nodes , i.e. , the transmitters , are _ blind _ in the sense that no csit is available . 
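for reference , the closed - form dof values alluded to above are the standard ones from the cited literature : mn/(m+n-1) for the x channel with m single - antenna transmitters and n single - antenna receivers , and k/2 for the k - user single - antenna interference channel , both with perfect global csit . the snippet below merely evaluates these expressions for a few cases ; the formulas are quoted from the references , not derived here .

    from fractions import Fraction

    def dof_x_channel(M, N):
        # total dof of the x channel with M single-antenna transmitters and
        # N single-antenna receivers, perfect global csit (quoted from the literature)
        return Fraction(M * N, M + N - 1)

    def dof_interference_channel(K):
        # total dof of the K-user single-antenna interference channel with
        # perfect global csit and time-varying channels (quoted from the literature)
        return Fraction(K, 2)

    for M, N in [(2, 2), (3, 3), (4, 2)]:
        print(f"{M} x {N} x channel : {dof_x_channel(M, N)}")
    for K in (3, 4, 5):
        print(f"{K}-user interference channel : {dof_interference_channel(K)}")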
in this paper , we mainly consider the case when relays have global csi as a first step to investigate the impact of relays on the dof of wireless networks without csit .the justification of the setting is that it is likely that the relay nodes are located in between the sources and the destinations and could have access to more accurate csi .the relays can be small base stations at fixed locations with more power resources and computing capability , and obtaining csi can be less challenging .specifically , we study the dof of the x channel and the interference channel with single - antenna users and half - duplex multi - antenna relays , where no csi is available at the transmitters , but global csi can be obtained at the relays and the receivers .we first design a _ joint beamforming _ based transmission scheme for the general x channel with relays .we show that when each relay is equipped with antennas , with relays , the dof can be achieved , which is the same optimal dof as the case when csit is available .we then consider two special cases : the -user x channel with a multiple antenna relay and the -user x channel with single antenna relays .for the case when relay nodes are only equipped with single antenna , the optimal dof can be achieved with relays . for the case with one multiple antenna relay, we can design a different scheme with less computational complexity that uses _ partial interference alignment _ at the relay and joint beamforming to show the achievability of optimal dof using one relay with antennas .note that in the above results , the channel is required to be time varying in order to achieve the optimal dof .an interesting feature of the dof optimal interference alignment scheme using relays is that only finite channel usage is required to achieve the exact optimal dof , whereas for the general x channel without relays but with csit , infinite channel uses are required .the case when there is no csit but relays only have delayed csi is also investigated .using the techniques developed for the x channel , we further show that interference alignment is possible for the -user interference channel without csit with the help of half - duplex relays .for the general case , we design a two - slot transmission scheme using joint beamforming , and show that it requires relays with antennas to achieve the dof , which is exactly the same optimal dof as the case with csit .we then consider two special cases : the case with one relay with antennas , and the case with single antenna relays .note that the special case when the relay has antennas is also investigated in reference .when we have one relay with antennas , joint beamforming is not necessary for interference alignment and the channel does not need to be time varying .however , when we have relays each with a single antenna , joint beamforming is required to achieve interference alignment and the channel does need to be time varying .throughout the paper , we use bold letters , e.g. , to denote constant vectors , bold capital letters , e.g. , to denote matrices or vector of random variables , and ordinary capital letters , e.g. 
, to denote random variables .we use to denote the closest integer that is smaller than , and to denote the closest integer that is larger than .^{(i)} ] denotes the column vector ^{(j)}\right]^t,\left[\left[a_{2j}\right]^{(j)}\right]^t,\cdots\right]^t,\label{eq:11}\ ] ] which is obtained by enumerating for all indices and as its entries .we also use ^{(ijk)} ] under power constraint .the dof is defined as where .note that since we consider the dof as our metric , in the rest of the paper , we omit the noise terms in equations - .-user interference channel with relays.,width=259 ] fig .[ fig - icrelay ] shows the -user interference channel with relays . in this model, there are transmitters and receivers , and each transmitter has a message to be communicated to one intended receiver .it is assumed that the transmitters and receivers are equipped with single antenna .there are half - duplex relays available to help the transmission .each relay is assumed to have antennas .we denote as the message intended from transmitter to receiver , .the transmitted signal from transmitter is denoted as and the signal from relay is denoted as , where is the time index denoting the slot in which the signal is transmitted . when the relays listen to the channel , the received signals at the receivers are and the received signals at the relays are when relays transmit , the received signals at the receivers are the power constraints on the transmitted signals , the channel coefficients , and the channel noise are defined as in section [ sec : mtimes - n - x ] .we denote the rate of message is under power constraint .define as the set of all achievable rate tuples ^{(k)} ] is defined as in equation - .we also define vectors ^{(ipq)},\end{aligned}\ ] ] and matrix , which is formed by taking as its rows for all enumeration of and .all the linear equations can now be written as where ^{(nk)} ] , where ] has dimension , and thus its null space is non - empty , which guarantees the existence of . for receiver , , ,are the messages that it needs to decode , which should span a dimensional space .there are a total of dimensions available for the received signals , and hence we should align the rest interference signals into a dimensional space . with the help of the relay ,we have already aligned the interfering data streams , into a one dimensional space for each fixed .if we can steer the data stream into the same dimension of the signal space , then we are able to keep all the interference into a dimensional space .this is feasible by choosing the parameters such that equation at the beginning of previous page is satisfied for all .it is easy to verify that after aligning the interference , the intended messages occupy a -dimensional space , which does not intersect with the -dimensional space of the interfering signals , and thus they can be decoded using a zero - forcing decoder to completely eliminate the interference .therefore we are able to send messages with slots , and the dof is achievable .v. cadambe , s. a. jafar , c. wang , `` interference alignment with asymmetric complex signaling settling the host - madsen - nosratinia conjecture '' , _ ieee transactions on information theory _ , vol .56 , no , 9 , pp .45524565 , september 2010 . c. huang and s. a. jafar , `` degrees of freedom of the mimo interference channel with cooperation and cognition , '' _ ieee transactions on information theory _ , vol .55 , no . 9 , pp .42114220 , september 2009 . c. huang , s. a. jafar , s. shamai , and s. 
vshwanath , `` on degrees of freedom region of mimo networks without channel state information at transmitters , '' _ ieee transactions on information theory _ , vol .58 , no . 2 ,pp . 849857 , february 2012 .c. s. vaze and m. varanasi , `` the degrees of freedom regions of mimo broadcast , interference , and cognitive radio channels with no csit , '' _ ieee transactions on information theory _ , vol .58 , no . 8 , pp . 53545374 , august 2012 .c. wang , t. gou , and s. a. jafar , `` aiming perfectly in the dark - blind interference alignment through staggered antenna switching , '' _ ieee transactions on signal processing _ , vol .59 , no . 6 , pp . 27342744 , june 2011 .m. a. maddah - ali and d. tse , `` completely stale transmitter channel state information is still very useful , '' _ ieee transactions on information theory _58 , no . 7 , pp .44184431 , july 2012 .september 2010 .h. maleki , s. a. jafar , and s. shamai , `` retrospective interference alignment over interference networks , '' _ ieee journal of selected topics in signal processing _ , vol . 6 , no . 3 , pp .228240 , june 2012 . c. s. vaze and m. varanasi , `` the degrees of freedom region and interference alignment for the mimo interference channel with delayed csi , '' _ ieee transactions on information theory _ , vol .58 , no . 7 , pp .43964417 , july 2012 , accepted , available at arxiv:1101.5809v2 [ cs.it ] . c. s. vaze and m. k. varanasi ,`` the degrees of freedom region of the two - user mimo broadcast channel with delayed csit , '' in _ proceedings of ieee international symposium on information theory _, july 2011 .a. ghasemi , a. s. motahari , and a. k. khandani , `` interference alignment for the mimo interference channel with delayed local csit , '' _ submitted to ieee transactions on information theory _ , available at http://arxiv.org/abs/1101.5809 .r. tandon , s. mohajer , h. v. poor , and s. shamai , `` degrees of freedom region of the mimo interference channel with output feedback and delayed csit , '' _ ieee transactions on information theory _ , vol . 59 , no .3 , pp . 14441457 , march 2013 .m. j. abdoli , a. ghasemi , and a. k. khandani , `` on the degrees of freedom of three - user mimo broadcast channel with delayed csit , '' in _ proceedings of ieee international symposium on information theory _, july 2011 . u. basher , a. shirazi , and h. permuter , `` capacity region of finite state multiple - access channel with delayed state information at the transmitters , '' _ ieee transactions on information theory _ ,58 , no .34303452 , june 2012 .y. tian and a. yener , `` the gaussian interference relay channel : improved achievable rates and sum rate upperbounds using a potent relay , '' _ ieee transactions on information theory , special issue on interference networks _ ,57 , no .28652879 , may 2011 .o. sahin , o. simeone , and e. erkip , `` interference channel with an out - of - band relay , '' _ ieee transactions on information theory , special issue on interference networks _ ,57 , no . 5 , pp .27462764 , may 2011 . v.r. cadambe and s. a. jafar , `` degrees of freedom of wireless networks with relays , feedback , cooperation and full duplex operation , '' _ ieee transactions on information theory _ , vol . 55 , no . 5 , pp . 23342344 , may 2009 .b. nourani , s. a. motahari , and a. k. khandani , `` relay - aided interference alignment for the quasi - static x channel , '' in _ proceedings of ieee international symposium on information theory _ ,june 2009 .jeon , s .- y .chung , and s. a. 
jafar , `` degrees of freedom region of a class of multisource gaussian relay networks , '' _ ieee transactions on information theory _ , vol . 57 , no . 5 , pp . 30323044 , may 2011 .t. gou , s. a. jafar , c. wang , s .- w .jeon , and s .- y .chung , `` aligned interference neutralization and the degrees of freedom of the 2x2x2 interference channel , '' _ ieee transactions on information theory _ , vol .58 , no . 7 , pp .43814395 , july 2012 .
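the decoding argument used throughout the schemes above is a dimension count : once the relay has forced the interfering streams at a receiver to align along a common direction , the desired streams and the interference occupy complementary subspaces and a zero - forcing projection recovers the data . the following numpy sketch illustrates that argument with arbitrary made - up dimensions and with the alignment imposed by construction ; it is not the actual relay - aided beamforming design of the paper .

    import numpy as np

    rng = np.random.default_rng(0)
    T = 5                        # hypothetical signal-space dimension (channel uses)
    n_des, n_int = 3, 4          # hypothetical numbers of desired / interfering streams

    H_des = rng.standard_normal((T, n_des))          # generic directions for the desired streams
    v = rng.standard_normal((T, 1))                  # common direction enforced by the relay
    H_int = v @ rng.standard_normal((1, n_int))      # all interfering streams aligned along v

    print("dim span(desired)      :", np.linalg.matrix_rank(H_des))                       # 3
    print("dim span(interference) :", np.linalg.matrix_rank(H_int))                       # 1
    print("dim span(both)         :", np.linalg.matrix_rank(np.hstack([H_des, H_int])))   # 4 <= T

    s = rng.standard_normal(n_des)                   # desired symbols
    u = rng.standard_normal(n_int)                   # interfering symbols
    y = H_des @ s + H_int @ u                        # noiseless received block

    P = np.eye(T) - (v @ v.T) / (v.T @ v)            # projector killing the aligned interference
    s_hat, *_ = np.linalg.lstsq(P @ H_des, P @ y, rcond=None)
    print("max recovery error     :", np.max(np.abs(s_hat - s)))   # numerically zero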
|
channel state information ( csi ) at the transmitters ( csit ) is of importance for interference alignment schemes to achieve the optimal degrees of freedom ( dof ) for wireless networks . this paper investigates the impact of half - duplex relays on the degrees of freedom ( dof ) of the x channel and the interference channel when the transmitters are blind in the sense that no csit is available . in particular , it is shown that adding relay nodes with global csi to the communication model is sufficient to recover the dof that is the optimal for these models with global csi at the transmitters . the relay nodes in essence help steer the directions of the transmitted signals to facilitate interference alignment to achieve the optimal dof with csit . the general x channel with relays and the -user interference channel are both investigated , and sufficient conditions on the number of antennas at the relays and the number of relays needed to achieve the optimal dof with csit are established . using relays , the optimal dof can be achieved in finite channel uses . the dof for the case when relays only have delayed csi is also investigated , and it is shown that with delayed csi at the relay the optimal dof with full csit can not be achieved . special cases of the x channel and interference channel are investigated to obtain further design insights . degrees of freedom , interference alignment , relay , x channel , -user interference channel , channel state information .
|
two frequency regimes have to be distinguished when dealing with poroelastic waves . in the low - frequency range ( lf), the viscous efforts are proportional to the relative velocity of the fluid and the solid matrix . in the high - frequency range ( hf ), modeling the dissipation is a more delicate task : biot first presented an expression for particular pore geometries. in 1987 , johnson - koplik - dashen ( jkd ) published a general expression for the dissipation in the case of random pores .the viscous efforts depend in this model on the square root of the frequency . in the time domain , time fractional derivativesare then introduced , which involves convolution products with singular kernels .transient analytical solutions of biot - jkd model have been derived in academic geometries and homogeneous media. when dealing with more complex geometries , numerical methods are required .many time - domain simulation methods have been developed in the lf regime : see ref . and the introduction to ref . for general reviews .in the hf regime , the fractional derivatives greatly complicate the numerical modeling of the biot - jkd equations .the past values of the solution are indeed required in order to evaluate the convolution products , which means that the time evolution of the solution must be stored .it greatly increases the memory requirements and makes large - scale simulations impossible . to our knowledge, two approaches have been proposed so far in the literature .the first approach consists in discretizing the convolution products, and the second one is based on the use of a diffusive representation of the fractional derivative. in the latter approach , the convolution product is replaced by a continuum of diffusive variables - or memory variables - satisfying local differential equations. this continuum is then discretized using appropriate quadrature formulas , resulting in the biot - da ( diffusive approximation ) model . in ref ., the diffusive approach was followed in a one - dimensional homogeneous configuration . compared with ref ., important improvements were introduced : good representation of the viscous dissipation on the whole range of frequencies ; optimization of the model on the range of interest ; estimation of the computational effort in terms of the required accuracy .the goal of the present work is to extend the algorithm to two - dimensional heterogeneous porous media with interfaces .this paper is organized as follows . the original biot - jkd model is briefly outlined in section [ sec : phys ] and the principles underlying the diffusive representation of fractional derivatives are described .the decrease of energy and the dispersion analysis are addressed . in section [ sec : da ] , the method used to discretize the diffusive model is presented : the diffusive approximation thus obtained is easily treatable by computers . 
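the memory burden of the fractional derivative mentioned above can be seen on a one - line discretization : a direct grünwald - letnikov approximation of an order - 1/2 derivative needs the whole past of the signal at every time step , which is precisely the cost the diffusive representation is designed to remove . the sketch below checks it on f ( t ) = t , whose order - 1/2 derivative is t^{1/2}/gamma(3/2 ) ; this scheme is shown only for illustration and is not the method used in the paper .

    import numpy as np
    from math import gamma

    def gl_half_derivative(f_vals, h, alpha=0.5):
        # grunwald-letnikov approximation of the order-alpha derivative at the last
        # grid point; note the sum over the *entire* history of the signal
        n = len(f_vals) - 1
        w = np.empty(n + 1)
        w[0] = 1.0
        for j in range(1, n + 1):
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
        return h ** (-alpha) * np.dot(w, np.asarray(f_vals)[::-1])

    t = np.linspace(0.0, 1.0, 1001)
    h = t[1] - t[0]
    approx = gl_half_derivative(t, h)        # f(t) = t, derivative evaluated at t = 1
    exact = t[-1] ** 0.5 / gamma(1.5)        # = 2 / sqrt(pi) ~ 1.1284
    print(approx, exact)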
following a similar approach than in viscoelasticity, the coefficients of the model are determined using an optimization procedure in the frequency range of interest , giving an optimum number of additional computational arrays .the numerical modeling is addressed in section [ sec : num ] , where the equations of evolution are split into two parts : a propagative part , which is discretized using a fourth - order scheme for hyperbolic equations , and a diffusive part , which is solved exactly .an immersed interface method accounts for the jump conditions and for the geometry of the interfaces , preventing from the usual limitations of finite differences on a cartesian grid .numerical experiments performed with realistic values of the physical parameters are presented in section [ sec : exp ] . in section [ sec : conclu ] , a conclusion is drawn and some futures lines of research are given .the biot model describes the propagation of mechanical waves in a macroscopic porous medium consisting of a solid matrix saturated with a fluid circulating freely through the pores. it is assumed that * the wavelengths are large in comparison with the diameter of the pores ; * the amplitude of the perturbations is small ; * the elastic and isotropic matrix is completely saturated with a single fluid phase ; * the thermo - mechanical effects are neglected .this model involves 10 physical parameters : the density and the dynamic viscosity of the fluid ; the density and the shear modulus of the elastic skeleton ; the porosity , the tortuosity , the absolute permeability , the lam coefficient and the two biot s coefficients and of the saturated matrix .the following notations are introduced taking and to denote the solid and fluid displacements , the unknowns in 2d are the elastic velocity , the filtration velocity , the elastic stress tensor , and the acoustic pressure .the constitutive laws are \displaystyle p = m\,(-\beta\,\mbox{tr}\,\bm{\varepsilon } + \xi ) , \end{array } \label{eq : biot_comportement}\ ] ] where is the strain tensor and is the rate of fluid change .the symmetry of implies compatibility conditions between spatial derivatives of , leading to the beltrami - michell equation: \displaystyle \theta_0 = -\frac{\lambda_0}{4(\lambda_0 + \mu)},\quad\theta_1 = \frac{\lambda_0 + 2\,\mu}{4(\lambda_0 + \mu)},\quad\theta_2 = \frac{\mu\,\beta}{2(\lambda_0 + \mu)}. \end{array } \label{eq : beltrami}\ ] ] on the other hand , the conservation of momentum yields \displaystyle \rho_s\,\frac{\partial \bf v_s}{\partial t } + \rho_w\,\frac{\partial \bf w}{\partial t } + \frac{\eta}{\kappa}\,f*{\bf w } = -\nabla p , \end{array } \right .\label{eq : biot_dynamique}\ ] ] where is the convolution product in time .the second equation of ( [ eq : biot_dynamique ] ) is a generalized darcy law .the quantity denotes the viscous dissipation induced by the relative motion between the fluid and the elastic skeleton .the frontier between the low - frequency range ( lf ) and the high - frequency range ( hf ) is reached when the viscous efforts are similar to the inertial effects .the transition frequency is given by in lf , the flow in the pores is of the poiseuille type , and dissipation efforts in the second equation of ( [ eq : biot_comportement ] ) are given by where is the dirac distribution . in hf , the width of the viscous boundary - layer is small in comparison with the size of the pores , and modeling the dissipation process is a more complex task . 
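to fix orders of magnitude , the transition frequency separating the two regimes can be evaluated for a generic water - saturated sandstone . the expression used below , f_c = eta phi / ( 2 pi a kappa rho_f ) , is the usual biot characteristic frequency obtained by balancing the viscous term ( eta / kappa ) w against the inertial term rho_w dw/dt with rho_w = a rho_f / phi ; it is quoted in its standard literature form , and both it and the material values are illustrative assumptions rather than the test media of section [ sec : exp ] .

    from math import pi

    # generic water-saturated sandstone (illustrative values, not the paper's media)
    eta   = 1.0e-3     # dynamic viscosity of the fluid [Pa.s]
    rho_f = 1000.0     # fluid density [kg/m^3]
    phi   = 0.3        # porosity
    a     = 2.0        # tortuosity
    kappa = 1.0e-12    # absolute permeability [m^2]

    f_c = eta * phi / (2.0 * pi * a * kappa * rho_f)   # transition frequency [Hz]
    print(f"f_c = {f_c:.3e} Hz")                       # about 2.4e4 Hz for these values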
herewe adopt the widely - used model proposed by johnson - koplik - dashen ( jkd ) in 1987 , which is valid for random networks of pores with constant radii. the only additional parameter is the viscous characteristic length .we take where is the pride number ( typically ) .based on the fourier transform in time , , the frequency correction given by the jkd model can be written & \displaystyle = \left ( 1+i\,p\,\frac{\omega}{\omega_c}\right ) ^{1/2},\\ [ 10pt ] & \displaystyle = \frac{1}{\sqrt{\omega}}\,(\omega + i\,\omega)^{1/2}. \end{array } \label{eq : f_omega}\ ] ] this correction is the simplest function satisfying the lf and hf limits of the dynamic permeability. therefore , the term involved in the second equation of ( [ eq : biot_comportement ] ) is & \displaystyle = \frac{1}{\sqrt{\omega}}\,(d+\omega)^{1/2}{\bf w}(x , y , t ) . \end{array } \label{eq : f_t}\ ] ] the operator is a shifted order 1/2 time fractional derivative , generalizing the usual derivative characterized by .the notation accounts for the shift in ( [ eq : f_t ] ) . based on ( [ eq : biot_comportement ] ) , ( [ eq : biot_dynamique ] ) and ( [ eq : f_t ] ) , the biot - jkd equations can be written \displaystyle \rho_f\,\frac{\partial\,\bf v_s}{\partial\,t } + \rho_w\,\frac{\partial\,\bf w}{\partial\,t } + \frac{\eta}{\kappa}\,\frac{1}{\sqrt{\omega}}\,(d+\omega)^{1/2}\,{\bf w } = -\nabla p,\\ [ 10pt ] \displaystyle \bm{\sigma } = ( \lambda_f\,\mbox{tr}\,\bm{\varepsilon } - \beta\,m\,\xi){\bf i } + 2\,\mu\,\bm{\varepsilon},\\ [ 10pt ] \displaystyle p = m\,(-\beta\,\mbox{tr}\,\bm{\varepsilon } + \xi ) . \end{array } \right . \label{eq : lcbiot}\ ] ] we rearrange this system by separating and in the first two equations of ( [ eq : lcbiot ] ) and by using the definitions of and . 
taking obtains the following system of equations of evolution \displaystyle \frac{\partial\,v_{sy}}{\partial\,t } - \frac{\rho _ w}{\chi}\,\left ( \frac{\partial\,\sigma_{xy}}{\partial\,x}+\frac{\partial\,\sigma_{yy}}{\partial\,y}\right ) - \frac{\rho _ f}{\chi}\,\frac{\partial\,p}{\partial\,y } = \frac{\rho _ f}{\rho}\,\gamma\,(d+\omega)^{1/2}w_{y}+f_{v_{sy}},\\ [ 15pt ] \displaystyle \frac{\partial\,w_x}{\partial\,t } + \frac{\rho _ f}{\chi}\,\left ( \frac{\partial\,\sigma_{xx}}{\partial\,x}+\frac{\partial\,\sigma_{xy}}{\partial\,y}\right ) + \frac{\rho } { \chi}\,\frac{\partial\,p}{\partial\,x } = -\,\gamma\,(d+\omega)^{1/2}w_x+f_{w_x},\\ [ 15pt ] \displaystyle \frac{\partial\,w_y}{\partial\,t } + \frac{\rho _ f}{\chi}\,\left ( \frac{\partial\,\sigma_{xy}}{\partial\,x}+\frac{\partial\,\sigma_{yy}}{\partial\,y}\right ) + \frac{\rho } { \chi}\,\frac{\partial\,p}{\partial\,y } = -\,\gamma\,(d+\omega)^{1/2}w_y+f_{w_y},\\ [ 15pt ] \displaystyle \frac{\partial\,\sigma_{xx}}{\partial\,t } - ( \lambda_f + 2\mu)\,\frac{\partial\,v_{sx}}{\partial\,x } - m\,\beta \,\frac{\partial\,w_x}{\partial\,x}-\lambda_f\,\frac{\partial\,v_{sy}}{\partial\,y}-m\,\beta\,\frac{\partial\,w_y}{\partial\,y}=f_{\sigma_{xx}},\\ [ 15pt ] \displaystyle \frac{\partial\,\sigma_{xy}}{\partial\,t}-\mu\,\left ( \frac{\partial\,v_{sy}}{\partial\,x}+\frac{\partial\,v_{sx}}{\partial\,y}\right ) = f_{\sigma_{xy}},\\ [ 15pt ] \displaystyle \frac{\partial\,\sigma_{yy}}{\partial\,t}-\lambda_f\,\frac{\partial\,v_{sx}}{\partial\,x}-m\,\beta\,\frac{\partial\,w_x}{\partial\,x}- ( \lambda_f + 2\mu)\,\frac{\partial\,v_{sy}}{\partial\,y } - m\,\beta \,\frac{\partial\,w_y}{\partial\,y}=f_{\sigma_{yy}},\\ [ 15pt ] \displaystyle \frac{\partial\,p}{\partial\,t } + m\ , \left ( \beta\,\frac{\partial\,v_{sx}}{\partial\,x } + \frac{\partial\,w_x}{\partial\,x}+\beta\,\frac{\partial\,v_{sy}}{\partial\,y } + \frac{\partial\,w_y}{\partial\,y } \right ) = f_p . \end{array } \right .\label{eq : s1}\ ] ] terms , , , , , , and have been to simulate sources .taking the shifted fractional derivative ( [ eq : f_t ] ) can be written the operator is not local in time and involves the entire time history of . as we will see in section[ sec : da ] , a different way of writing this derivative is more convenient for numerical evaluation . based on euler s function , the diffusive representation of the totally monotone function is substituting ( [ eq : fonction_diffu ] ) into ( [ eq : dfrac ] ) gives & \displaystyle = \frac{1}{\pi}\ , \int _ 0 ^{\infty}\frac{1}{\sqrt{\theta}}\,\bm{\psi}(x , y,\theta , t)\,d\theta , \end{array } \label{eq : derivee_frac}\ ] ] where the diffusive variable is defined as for the sake of clarity , the dependence on and is omitted in . 
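a quick way to see why the diffusive representation is attractive numerically is to work in the frequency domain : each memory variable contributes a rational symbol , and a finite set of them can reproduce the square - root symbol of the order - 1/2 derivative over a prescribed band . the sketch below assumes the usual first - order memory - variable equation d psi_l / dt = - theta_l psi_l + dw/dt , takes the unshifted case ( omega = 0 ) for simplicity , places log - spaced nodes theta_l and fits the weights by least squares on the relative error ; the band , the number of nodes and the fitting criterion are illustrative choices , whereas the paper fixes the coefficients by the optimization procedure of section [ sec : da ] .

    import numpy as np

    # band of interest (hypothetical), in rad/s; unshifted case (Omega = 0) for simplicity
    w = np.logspace(2, 6, 400)
    target = np.sqrt(1j * w)                  # exact symbol of the order-1/2 derivative

    L = 8                                     # number of memory variables
    theta = np.logspace(1.5, 6.5, L)          # log-spaced relaxation parameters around the band
    # symbol of each memory variable, assuming d psi_l / dt = -theta_l psi_l + dw/dt
    M = (1j * w)[:, None] / (theta[None, :] + 1j * w[:, None])

    # fit the weights a_l by least squares on the relative error (real and imaginary parts)
    A = np.vstack([(M / target[:, None]).real, (M / target[:, None]).imag])
    b = np.concatenate([np.ones_like(w), np.zeros_like(w)])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)

    rel_err = np.abs(M @ a - target) / np.abs(target)
    print("nodes   :", theta)
    print("weights :", a)
    print("max relative error over the band:", rel_err.max())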
from ( [ eq : variable_diffu ] ) , it follows that the diffusive variable satisfies the ordinary differential equation \displaystyle \bm{\psi}(x , y,\theta,0 ) = 0 .\end{array } \right .\label{eq : edo_psi}\ ] ] the diffusive representation therefore transforms a non - local problem ( [ eq : dfrac ] ) into a continuum of local problems ( [ eq : edo_psi ] ) .it should be emphasized at this point that no approximation have been made up to now .the computational advantages of the diffusive representation will be seen in sections [ sec : da ] and [ sec : exp ] , where the discretization of ( [ eq : derivee_frac ] ) and ( [ eq : edo_psi ] ) will yield a numerically tractable formulation .now , we express the energy of the biot - jkd model ( [ eq : lcbiot ] ) .this result generalizes the analysis performed in the 1d case in the ref . .setting the poroelastic tensor such that in ( [ eq : biot_comportement ] ) , let with e_2 & = & \displaystyle \frac{\textstyle 1}{\textstyle 2}\,\int_{\mathbb{r}^2}\left ( { \bf c}\,\bm{\varepsilon}\,:\,\bm{\varepsilon } + \frac{\textstyle 1}{\textstyle m}\,p^2\right)\,dx\,dy,\\ [ 20pt ] e_3 & = & \displaystyle \frac{\textstyle 1}{\textstyle 2}\,\int_{\mathbb{r}^2}\int_{\theta\in\mathbb{r}^+}\frac{\eta}{\kappa}\,\frac{1}{\pi}\,\frac{1}{\sqrt{\omega\,\theta}}\,\frac{1}{\theta+2\,\omega}\,({\bf w}-\bm{\psi})^2\,d\theta\,dx\,dy .\end{array } \label{eq : energie}\ ] ] without any source terms , is an energy which satisfies [ prop : nrj ] proposition [ prop : nrj ] calls for the following comments : * the biot - jkd model is well - posed ; * when the viscosity of the saturating fluid is neglected ( ) , the energy of the system is conserved ; * the terms in ( [ eq : energie ] ) have a clearly physical significance : is the kinetic energy , and is the potential energy .the term corresponds to the kinetic energy resulting from the relative motion of the fluid inside the pores .a plane wave is injected in ( [ eq : s1 ] ) , where and are the wavevector and the polarization , respectively ; is the position , is the angular frequency and is the frequency .if is collinear with , the dispersion relation of compressional waves is obtained . setting \displaystyle d_2(\omega ) & = \displaystyle -\left ( ( \lambda_f+2\,\mu)\,\rho_w + m\,(\rho-2\,\rho_f\,\beta)\right ) \,\omega^2 + i\,\omega\,\frac{\eta}{\kappa}\,\widehat{f}(\omega)\,(\lambda_f+2\,\mu),\\ [ 10pt ] \displaystyle d_0(\omega ) & = \displaystyle \chi\,\omega^4 - i\,\omega^3\,\frac{\eta}{\kappa}\,\rho\,\widehat{f}(\omega ) , \end{array } \right .\label{eq : coef_relation_dis_p}\ ] ] the dispersion relation of compressional waves takes the form if is orthogonal with , the dispersion relation of the shear wave is obtained . setting \displaystyle d_2(\omega ) & \displaystyle = -\omega^2\,\phi\,\rho_f\,(a-1 ) + i\,\omega\,\phi^2\,\frac{\eta}{\kappa}\,\widehat{f}(\omega),\\ [ 10pt ] \displaystyle d_0(\omega ) & \displaystyle = \omega^2\,\phi\,\rho_f\,a - i\,\omega\,\phi^2\,\frac{\eta}{\kappa}\,\widehat{f}(\omega ) , \end{array } \right .\label{eq : coef_relation_dis_shear}\ ] ] the dispersion relation of shear wave takes the form expressions ( [ eq : coef_relation_dis_p])-([eq : relation_dispersion_p])-([eq : coef_relation_dis_shear])-([eq : relation_dispersion_shear ] ) are valid in the case of both the biot - lf and biot - jkd models with the frequency correction defined by \displaystyle \widehat{f}_{jkd}(\omega ) & \displaystyle = \frac{1}{\sqrt{\omega}}\,(\omega + i\,\omega)^{1/2 } & \mbox{biot - jkd}. 
\end{array } \right .\label{eq : fonction_correction}\ ] ] the solutions , of ( [ eq : relation_dispersion_p ] ) and the solution of ( [ eq : relation_dispersion_shear ] ) give the phase velocities of the fast compressional wave , of the slow compressional wave , and of the shear wave , with and .the attenuations , and can also be deduced .both the phase velocities and the attenuations of biot - lf and biot - jkd are strictly increasing functions of the frequency .the high frequency limits of phase velocities of compressional waves , and , obtained by diagonalizing the left - hand side of system ( [ eq : s1 ] ) , satisfy the relation and the high frequency limit of phase velocity of the shear wave , , is figure [ fig : dispersion_bf ] shows the dispersion curves corresponding to the biot - lf and biot - jkd models .the physical parameters are those used in the numerical experiments presented in section [ sec : exp ] ( medium ) .note that the scales are radically different in the case of fast and slow waves .the following properties can be observed : * when , the biot - jkd and biot - lf dispersion curves are very similar as might be expected , since ; * the fast compressional wave and the shear wave are almost not affected by the frequency correction , while the slow compressional wave is greatly modified ; * when , the slow compressional wave is almost static . when , the slow wave propagates but is greatly attenuated .[ cols="^,^ " , ]an explicit finite - difference method is presented here for simulating transient poroelastic waves in the full range of validity of poroelasticity .the biot - jkd model , which involves order 1/2 fractional derivatives , was replaced here by an approximate biot - da model , which is much more tractable numerically .the biot - da coefficients are determined here using an optimization procedure , which depends on the frequency range of interest .the hyperbolic system of partial differential equations was discretized using efficient tools ( strang splitting , fourth - order ader scheme , immersed interface method ) .this numerical methodology provides highly accurate numerical solutions , allowing a fine numerical investigation of realistic porous media .some future lines of research are suggested : * _ continuously variable porous media ._ in the previous numerical experiments , the physical parameter characterizing the porous media are constant and discontinuous across the interfaces .the possibility of applying the presented numerical method to 2d porous media with continuously variable coefficients, where no analytical expressions are available , is currently in progress : see ref . in 1d .* _ multiple scattering . _the numerical tools presented here make possible the modeling of multiple scattering in random media .based on simulated data , the properties of the _ effective medium _amount to the disordered under study can be deduced. optimization and parallelization of the method must be realized to take into account of hundreds of inclusions in our simulations .* _ anisotropy . _the anisotropy of some porous media is sometimes a main feature , for instance in biomechanics to model trabecular and cortical bones. the anisotropy of a medium changes only the poroelastic tensor ( [ prop : nrj ] ) .so the ader 4 scheme and the immersed interface method have to be modified , but the approximation of the fractionnal derivative is still valid . * _ thermic boundary - layer . 
_ in cases where the saturating fluid is a gas , thermo - mechanical effects have to be taken into account .extended versions of the biot - jkd have been developed, involving additional order 1/2 fractional derivatives .the numerical method developed in this paper should lend itself well to working with this model .the matrices in ( [ eq : syst_hyperbolique ] ) are \hline \bf{a}_2 & \bf{0}_{4,4 } & \bf{0}_{4,2n}\\ [ 5pt]\hline \bf{0}_{2n,4 } & \bf{a}_3 & \bf{0}_{2n,2n } \end{array } \right ) , \quad { \bf a}_1 = \left ( \begin{array}{cccc } \displaystyle -\frac{\rho_w}{\chi } & 0 & 0 & \displaystyle -\frac{\rho_f}{\chi}\\ 0 & \displaystyle -\frac{\rho_w}{\chi } & 0 & 0\\ \displaystyle \frac{\rho_f}{\chi } & 0 & 0 & \displaystyle \frac{\rho}{\chi}\\ 0 & \displaystyle \frac{\rho_f}{\chi } & 0 & 0 \end{array } \right ) , \\ [ 50pt ] { \bf a}_2 = \left ( \begin{array}{cccc } \displaystyle -(\lambda_f+2\,\mu ) & 0 & \displaystyle -m\,\beta & 0\\ 0 & \displaystyle -\mu & 0 & 0\\ \displaystyle -\lambda_f & 0 & \displaystyle -m\,\beta & 0\\ \displaystyle m\,\beta & 0 & m & 0 \end{array } \right ) , \quad { \bf a}_3 = \left ( \begin{array}{cccc } \displaystyle \frac{\rho_f}{\chi } & 0 & 0 & \displaystyle \frac{\rho}{\chi}\\ 0 & \displaystyle \frac{\rho_f}{\chi } & 0 & 0\\ \vdots & \vdots & \vdots & \vdots \\ \displaystyle \frac{\rho_f}{\chi } & 0 & 0 & \displaystyle \frac{\rho}{\chi}\\ 0 & \displaystyle \frac{\rho_f}{\chi } & 0 & 0 \end{array } \right ) ; \end{array } \label{eq : matricea}\ ] ] \hline \bf{b}_2 & \bf{0}_{4,4 } & \bf{0}_{4,2n}\\ [ 5pt]\hline \bf{0}_{2n,4 } & \bf{b}_3 & \bf{0}_{2n,2n } \end{array } \right ) , \quad { \bf b}_1 = \left ( \begin{array}{cccc } 0 & \displaystyle -\frac{\rho_w}{\chi } & 0 & 0\\ 0 & 0 & \displaystyle -\frac{\rho_w}{\chi } & \displaystyle -\frac{\rho_f}{\chi}\\ 0 & \displaystyle \frac{\rho_f}{\chi } & 0 & 0\\ 0 & 0 & \displaystyle \frac{\rho_f}{\chi } & \displaystyle \frac{\rho}{\chi } \end{array } \right ) , \\ [ 50pt ] { \bf b}_2 = \left ( \begin{array}{cccc } 0 & \displaystyle -\lambda_f & 0 & \displaystyle -m\,\beta\\ \displaystyle -\mu & 0 & 0 & 0\\ 0 & \displaystyle -(\lambda_f+2\,\mu ) & 0 & \displaystyle -m\,\beta\\ 0 & \displaystyle m\,\beta & 0 & \displaystyle m \end{array } \right ) , \quad { \bf b}_3 = \left ( \begin{array}{cccc } 0 & \displaystyle \frac{\rho_f}{\chi } & 0 & 0\\ 0 & 0 & \displaystyle \frac{\rho_f}{\chi } & \displaystyle \frac{\rho}{\chi}\\ \vdots & \vdots & \vdots & \vdots \\ 0 & \displaystyle \frac{\rho_f}{\chi } & 0 & 0\\ 0 & 0 & \displaystyle \frac{\rho_f}{\chi } & \displaystyle \frac{\rho}{\chi } \end{array } \right ) ; \end{array } \label{eq : matriceb}\ ] ] \hline { \bf 0}_{4,4 } & { \bf 0}_{4,4 } & { \bf 0}_{4,2n}\\ [ 5pt]\hline { \bf s}_3 & { \bf 0}_{2n,4 } & { \bf s}_2 \end{array } \right),\quad { \bf s}_3 = \left ( \begin{array}{cccc } 0 & 0 & -\omega & 0\\ 0 & 0 & 0 & -\omega \\ \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & -\omega & 0\\ 0 & 0 & 0 & -\omega \end{array } \right ) , \\ [ 50pt ] { \bf s}_1 = \left ( \begin{array}{ccccc } \displaystyle -\frac{\rho_f}{\rho}\,\gamma\,a_1 & 0 & ... & \displaystyle -\frac{\rho_f}{\rho}\,\gamma\,a_n & 0 \\ [ 5pt ] 0 & \displaystyle -\frac{\rho_f}{\rho}\,\gamma\,a_1 & ... & 0 & \displaystyle -\frac{\rho_f}{\rho}\,\gamma\,a_n \\ [ 5pt ] \displaystyle \gamma\,a_1 & 0 & ... & \displaystyle \gamma\,a_n & 0 \\ [ 5pt ] 0 & \displaystyle \gamma\,a_1 & ... 
& 0 & \displaystyle \gamma\,a_n \end{array } \right),\\ [ 50pt ] { \bf s}_2 = \left ( \begin{array}{ccccc } \gamma\,a_1 + ( \theta_1+\omega ) & 0 & ... & \gamma\,a_n & 0 \\ [ 5pt ] 0 & \gamma\,a_1 + ( \theta_1+\omega ) & ... & 0 & \gamma\,a_n \\ [ 5pt ] \vdots & \vdots & \vdots & \vdots & \vdots \\ [ 5pt ] \gamma\,a_1 & 0 & ... & \gamma\,a_n + ( \theta_n+\omega ) & 0 \\ [ 5pt ] 0 & \gamma\,a_1 & ... & 0 & \gamma\,a_n + ( \theta_n+\omega ) \end{array } \right ) . \end{array } \label{eq : matrices}\ ] ]
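the practical pay - off of the relaxation matrices above is that the diffusive part of the split scheme can be advanced exactly : over one step the update is simply u ( t + dt ) = exp ( s dt ) u ( t ) , and the matrix exponential can be precomputed since s does not depend on time . the toy sketch below only shows the structure of one strang - split step ; the matrix s and the propagative solver ( an identity placeholder standing in for the ader-4 step ) are invented for illustration and are not the matrices written above .

    import numpy as np
    from scipy.linalg import expm

    def strang_step(U, dt, propagative_step, exp_S_half):
        # one strang-split step: half an (exact) diffusive step, one propagative step,
        # and another half diffusive step
        return exp_S_half @ propagative_step(exp_S_half @ U, dt)

    # toy stand-ins (NOT the matrices of the appendix): a stable relaxation operator S
    # and an identity placeholder for the ader-4 propagative solver
    rng = np.random.default_rng(1)
    n = 4
    S = -np.diag(rng.uniform(1.0, 5.0, n))
    dt = 0.01
    exp_S_half = expm(S * dt / 2.0)           # precomputed once, since S is time independent

    propagative_step = lambda U, dt: U        # placeholder
    U = rng.standard_normal(n)
    for _ in range(10):
        U = strang_step(U, dt, propagative_step, exp_S_half)
    print(U)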
|
an explicit finite - difference scheme is presented for solving the two - dimensional biot equations of poroelasticity across the full range of frequencies . the key difficulty is to discretize the johnson - koplik - dashen ( jkd ) model which describes the viscous dissipations in the pores . indeed , the time - domain version of biot - jkd model involves order shifted fractional derivatives which amounts to a time convolution product . to avoid storing the past values of the solution , a diffusive representation of fractional derivatives is used : the convolution kernel is replaced by a finite number of memory variables that satisfy local - in - time ordinary differential equations . the coefficients of the diffusive representation follow from an optimization procedure of the dispersion relation . then , various methods of scientific computing are applied : the propagative part of the equations is discretized using a fourth - order ader scheme , whereas the diffusive part is solved exactly . an immersed interface method is implemented to discretize the geometry on a cartesian grid , and also to enforce the jump conditions at interfaces . numerical experiments are proposed in various realistic configurations .
|
our knowledge about some of the most fundamental parameters of physics is derived from a vast number of measurements produced by different experiments using several complementary techniques . many statistical methods are routinely used to combine the available data and extract the most appropriate estimates of the values and uncertainties for these parameters , properly taking into account all correlations between the measurements . one of the most popular methods for performing these combinations is the best linear unbiased estimate ( blue ) technique , an approach first introduced in the 1930s and whose reformulation in the context of high - energy physics has been routinely used for the combination of the precision measurements performed by experiments at the lep , tevatron and lhc colliders , as well as in other domains . to quantify the `` relative importance '' of each measurement in its contribution to the combined knowledge about the measured physical quantity , its coefficient in the blue weighted average is traditionally used . in many examples in the literature where the blue technique has been used , the combinations are dominated by systematic uncertainties , often assumed to be fully correlated among different measurements . this often leads to situations where one or more measurements contribute with a negative blue coefficient , pushing experimentalists to redefine the `` relative importance '' of a measurement as the absolute value of its blue coefficient , normalised to the sum of the absolute values of all coefficients . in our opinion , this approach is misleading . in this paper , we propose a different approach for comparing the relative contributions of the measurements to the combined knowledge about the unknown parameter , using the well - established concept of fisher information . we also show that negative coefficients in the blue weighted average invariably indicate the presence of very high correlations , whose marginal effect is that of reducing the error on the combined estimate , rather than increasing it . in these regimes , we stress that taking systematic uncertainties to be fully ( i.e. 100% ) correlated is not a conservative assumption , and we therefore argue that the correlations provided as inputs to blue combinations need to be assessed with extreme care . in those situations where their precise evaluation is impossible , we offer a few guidelines and tools for critically re - evaluating these correlations , in order to help experimental physicists perform more `` conservative '' combinations . in our discussion , we will generally limit ourselves to blue combinations of a single measured parameter and to cases where the correlations used as inputs to the combination are positive . many of the concepts and tools we present could be applied also to the more general cases of blue combinations of several measured parameters , and/or involving negative correlations between measurements , but this discussion is beyond the scope of this paper . the outline of this article is the following . in sec . [ sec : ri ] we review the definition of `` relative importance '' of a measurement in a blue combination as presented by some papers in the literature and we present our objections to it by using a simple numerical example . we then present our alternative definitions of information weights in sec . [ sec : information ] , after a brief review of the definition of fisher information and of its relevant features .
by studying marginal information and information derivatives , in sec .[ sec : highcorr ] we show that negative blue coefficients in the combination of several measurements of one parameter are always a sign of a `` high - correlation '' regime , thus generalising the results presented for two measurements by the authors of ref . . in sec .[ sec : conservativeness ] we go on to discuss practical guidelines and tools , illustrated by numerical examples , to identify correlations that may have been overestimated and to review them in a more `` conservative '' way . in sec . [ sec : conclusions ] we summarize our discussion and present some concluding remarks .in the blue technique , the best linear unbiased estimate of each unknown parameter is built as a weighed average of all available measurements .the coefficients multiplying the measurements in each linear combination are determined as those that minimize its variance , under the constraint of a normalisation condition which ensures that this represents an unbiased estimate of the corresponding parameter . as discussed extensively in refs . , this technique is equivalent to minimizing the weighted sum of squared distances of the measurements from the combined estimates , using as weighting matrix the input covariance matrix of the measurements , which is assumed to be known a priori . in the case of measurements of a single parameterwhose true value is , in particular , the best linear unbiased estimate can be determined as follows .first , the blue should be a linear combination of the available measurements .second , the blue should be an unbiased estimator , i.e. its expectation value ] , this is equivalent to requiring a normalisation condition for the coefficients in the linear combination .third , the blue should be the best of such unbiased linear combinations , i.e. that for which the combined variance , where is the covariance matrix of the measurements , is minimized .it is then easy to show that is the best linear unbiased estimate if the coefficients are equal to where is a vector whose elements are all equal to 1 . while the normalisation condition ensures that the coefficients sum up to 1 , one peculiar and somewhat counter - intuitive feature of this method is that some of these individual coefficients may be negative .negative coefficients in the blue weighted averages apparently still pose a problem of interpretation sometimes , especially if these coefficients are used to compare the contributions of the different measurements to the combined knowledge about the measured observable .for instance , the `` relative importance '' of each measurement in the combination of atlas and cms results on the top quark mass was defined as the absolute value of its coefficient in the blue weighted average , divided by the sum of the absolute values of the coefficients for all input measurements , the same procedure had already been used to visualize the `` weight that each measurement carries in the combination '' of cdf and d0 results on the top quark mass . in both cases , the relative importances of the measurements sum up to 1 by definition , . 
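the prescription above is a few lines of linear algebra ; the sketch below computes the blue coefficients , the combined value and the combined variance for a single parameter , together with the `` relative importance '' of eq . ( [ eq : ri ] ) . the two - measurement covariance is an invented example with a large positive correlation , chosen so that one coefficient comes out negative .

    import numpy as np

    def blue_combine(y, C):
        # best linear unbiased estimate of a single parameter from measurements y with
        # covariance C: returns (coefficients, combined value, combined variance)
        C = np.asarray(C, dtype=float)
        U = np.ones(len(y))
        CinvU = np.linalg.solve(C, U)
        lam = CinvU / (U @ CinvU)             # central value weights, summing to 1
        return lam, lam @ np.asarray(y, dtype=float), 1.0 / (U @ CinvU)

    # invented example: two measurements with a large positive correlation
    y = [10.0, 11.0]
    sig = np.array([1.0, 2.0])
    rho = 0.9
    C = np.outer(sig, sig) * np.array([[1.0, rho], [rho, 1.0]])

    lam, comb, var = blue_combine(y, C)
    ri = np.abs(lam) / np.abs(lam).sum()      # the "relative importance" of eq. ([eq:ri])
    print("blue coefficients   :", lam)       # the second one is negative here
    print("combined value      :", comb, "+/-", np.sqrt(var))
    print("relative importance :", ri)

note that for these invented numbers the combined uncertainty comes out smaller than that of the more precise of the two measurements , which is the high - correlation effect discussed in sec . [ sec : highcorr ] .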
in our opinion , this procedure is an artefact that is conceptually wrong and suffers from two important limitations : first , it is not internally self - consistent and may easily lead to numerical conclusions which go against common sense ; second , it does not help to understand in which way the results with negative coefficients contribute to reducing the uncertainties on the combined estimates .we will use a simple example to illustrate the first objection . consider the combination of two uncorrelated measurements and of an observable in the appropriate units .the covariance matrix is then and the blue for their combination is , where the coefficients of these two uncorrelated measurements in the blue weighted average , and , are proportional to the inverses of the variances and as expected from simple error propagation .it is rather intuitive in this case to claim that the relative contributions to the knowledge about contributed by the two independent measurements a and b can be quantified by their blue coefficients , 40% for a and 60% for b. as and are both positive , these are also the `` relative importances '' of a and b according to eq .[ eq : ri ] .imagine now that is not the result of a direct measurement , but is itself the result of the combination of two measurements and , where a high positive correlation between them leads to negative blue coefficients in their weighted average . instead of combining first and and then adding , one could also combine , and directly using the full covariance matrix this yields , where the blue coefficients in this overall weighted average are given by , and . as expected , the final numerical result for is of course the same whether it is obtained from the combination of and or from the combination of , and .it is also not surprising that the blue coefficient for is the same in both combinations , as this is an independent measurement that is not correlated to either or ( the sum of whose blue coefficients , , of course , equals the blue coefficient of ) . what is rather surprising , however , is that the `` relative importance '' of computed using normalised absolute values of the blue coefficients is very different in the two cases : in our opinion , this is an internal inconsistency of eq .[ eq : ri ] , as common sense suggests that the relative contribution of to the knowledge about is the same in both combinations . in particular , we consider that the contribution of is indeed 40% , and that this is underestimated as 25% in the second combination because the relative contributions of and in the presence of negative blue coefficients are not being properly assessed and are overall overestimated . more generally , the problem with defining the `` relative importances '' of measurements according to eq .[ eq : ri ] is that the coefficient with which a measurement enters the linear combination of all measurements in the blue , i.e. its `` weight '' in the blue weighted average , is being confused with the impact or `` weight '' of its relative contribution to the knowledge about the measured observable . in the following we will therefore clearly distinguish between these two categories of `` weights '' : we will sometimes refer to the blue coefficient of a measurement as its `` central value weight '' ( cvw ) , while we will use the term `` information weight '' ( iw ) to refer to , using the same words as in refs . , its `` relative importance '' or the `` weight it carries in the combination '' . 
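the inconsistency discussed above is easy to reproduce numerically . the snippet below uses invented values with the same structure as the example ( an independent measurement b , and a measurement a obtained from two highly correlated measurements a1 and a2 ) : the combined estimate and the blue coefficient of b are identical whichever way the combination is organised , while the normalised - absolute - value `` relative importance '' of b is not .

    import numpy as np

    def blue(y, C):
        U = np.ones(len(y))
        x = np.linalg.solve(np.asarray(C, dtype=float), U)
        lam = x / (U @ x)
        return lam, lam @ np.asarray(y, dtype=float), 1.0 / (U @ x)

    # invented values with the structure of the example above
    a1, a2, b_val = 9.9, 10.1, 10.2          # measured values of A1, A2 and B
    s1, s2, sb = 1.0, 1.2, 1.0               # their uncertainties
    rho = 0.9                                # large positive correlation between A1 and A2
    C12 = np.array([[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]])

    # way 1: combine A1 and A2 into A, then combine A with the independent B
    lamA, a_comb, varA = blue([a1, a2], C12)
    lam2, res2, var2 = blue([a_comb, b_val], np.diag([varA, sb**2]))

    # way 2: combine A1, A2 and B in a single step
    C3 = np.zeros((3, 3))
    C3[:2, :2] = C12
    C3[2, 2] = sb**2
    lam3, res3, var3 = blue([a1, a2, b_val], C3)

    print("combined values           :", res2, res3)       # identical
    print("blue coefficient of B     :", lam2[1], lam3[2])  # identical
    ri_b_way1 = abs(lam2[1]) / np.abs(lam2).sum()
    ri_b_way2 = abs(lam3[2]) / np.abs(lam3).sum()
    print("'relative importance' of B:", ri_b_way1, ri_b_way2)   # these differ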
we will propose and discuss our definitions of intrinsic and marginal information weights in the next section , using the well - established concept of fisher information .in this section , we present our definitions of intrinsic and marginal information weights , after briefly recalling the definition of fisher information and summarizing its main relevant features . a more general discussion of fisher information and its role in parameter estimation in experimental science is well beyond the scope of this paper and can be found in many textbooks on statistics such as the two excellent reviews in refs . , which will largely be the basis of the overview presented in this section .consider experimental measurements that we want to use to infer the true values of unknown parameters , with ( though each of the need not necessarily be a direct measurement of one of the parameters ) .we will use the symbols and to indicate the vectors of all and of all , respectively .the measurements are random variables distributed according to a probability density function that is defined under the normalisation condition . the sensitivity of the measurements to the unknown parameters can be represented by the fisher `` score vector '' , which is itself a random variable , defined in the -dimensional space of the measurements and whose value in general also depends on the parameters . under certain regularity conditions ( in summary, the ranges of values of must be independent of , and must be regular enough to allow and to commute ) , it can be shown that the expectation value of the fisher score is the null vector , .the fisher information matrix , which in the following we will generally refer to simply as `` information '' , is defined as the covariance of the score vector : as the expectation value of the score is null , this can simply be written as \\ = \int\,\frac{\partial\log{p}({{\mathbf{{y}}}};{{\mathbf{{x}}}})}{\partial{x}{_\alpha } } \,\,\frac{\partial\log{p}({{\mathbf{{y}}}};{{\mathbf{{x}}}})}{\partial{x}{_\beta } } \,\,{p}({{\mathbf{{y}}}};{{\mathbf{{x } } } } ) \ , d{y}_1 \ldots d{y}_{n}{}. \label{eq : infodef}\end{gathered}\ ] ] information is thus defined as the result of an integral over and does not depend on the specific numerical outcomes of the measurements , although in general it is a function of the parameters instead . in other words , information is a property of the measurement process , and more particularly of the errors on the measurements and of the correlations between them , rather than of the specific measured central values . as pointed out in ref . , fisher information is a valuable tool for assessing quantitatively the contribution of an individual measurement to our knowledge about an unknown parameter inferred from it , because it possesses three remarkable properties .first , information increases with the number of observations and in particular it is additive , i.e. 
the total information yielded by two independent experiments is the sum of the information from each experiment taken separately .second , the definition of the `` information obtained from a set of measurements '' depends on which parameters we want to infer from them .this is clear from eq .[ eq : infodef ] , which defines fisher information about in terms of a set of derivatives with respect to the parameters .finally , information is related to precision : the greater the information available from a set of measurements about some unknown parameters , the lower the uncertainty that can be achieved from the measurements on the estimation of these parameters . more formally , if is any unbiased estimator of the parameter vector derived from the measurements , then under the same regularity conditions previously assumed it can be shown that , where the symbol indicates that the difference between the matrices on the left and right hand sides is positive semidefinite . in particular , for the diagonal elements of these matrices , in other words , the quantity represents a lower bound ( called cramer - rao lower bound ) on the variance of any unbiased estimator of each parameter .an unbiased estimator whose variance is equal to its cramer - rao lower bound , i.e. one for which the equality in eq .[ eq : crlbdiag ] holds , is called an efficient unbiased estimator . while in the general case it is not always possible to build one , an efficient unbiased estimator does exist under the assumption that the measurements are multivariate gaussian distributed with a positive definite covariance matrix that is known a priori and does not depend on the unknown parameters .this is the same assumption that had been used for the description of the blue method in ref . and we will take it as valid throughout the rest of this paper . as discussed at length in refs . , such distributions possess in fact a number of special properties that significantly simplify all statistical calculations involving them .in particular , it is easy to show , in the general case with several unknown parameters , that the best linear unbiased estimator is under these assumptions an unbiased efficient estimator , i.e. that its covariance matrix is equal to the inverse of the fisher information matrix .moreover , the fisher information matrix and the combined covariance do not depend on the unknown parameters under these assumptions , while this is not true in the general case . for gaussian distributions, the best linear unbiased estimator also coincides with the maximum likelihood estimator , while this is not true in most other cases , including the case of poisson distributions . in the case of one unknown parameter , in particular , i.e. 
when the parameter vector reduces to a scalar , the probability density function is simply {}.\end{gathered}\ ] ] remembering that $ ] is the covariance of the unbiased measurements and , the fisher information for , which also reduces to a scalar , can simply be written as this is clearly the inverse of the variance of the blue for corresponding to the central value weights given in eq .[ eq : coeff1 ] , to further simplify the notation , in the following by we will always indicate the information relative to , dropping the superscript .having recalled the relevance of the fisher information concept to quantitatively assess the contribution of a set of measurements to the knowledge about an unknown parameter , we may now introduce our proposal about how to best represent the `` weight that a measurement carries in the combination '' or its `` relative importance '' .we define this in terms of intrinsic and marginal information weights .our approach is radically different from that of refs . , because we do not attempt to make sure that the weights for the different measurements sum up to 1 .[ cols="<,^,^,^,^,^",options="header " , ] note also that the onionization prescription leads to the only combination where the blue coefficient for measurement becomes strictly positive . as mentioned earlier, this may be a consequence of the fact that this prescription may reduce correlations even more than their `` most conservative '' values , trespassing well into the low - correlation regime . in this respect , it is interesting to have a look at the effect of onionization on the partial covariance matrices , and more generally at the effect on the total covariance matrices of all procedures presented in this section : these are shown in tables [ tab : covxseonion ] and [ tab : covxse ] , respectively .in particular , note in table [ tab : covxseonion ] that the onionization procedure ( but the same is true for minimizations ) affects correlations for the bkgd and lumi error sources in exactly the same way without distinctions .if this was a real combination , instead , one would most likely keep the lumi correlation unchanged ( because a common luminosity measurement would indeed result in a 100% correlation between , and , and these three measurements together could even help to constrain the error on it ) , concentrating instead on the re - assessment of the bkgd correlation alone ( because the initial `` nominal '' estimate of 100% correlation is neither conservative nor realistic in the presence of different sensitivities to differential distributions ) .it should finally be added that the total covariance matrix derived from the onionization prescription is used as the starting point of the `` byoffdiagelem '' minimization in the bluefin software , as we have found this to improve the efficiency of the minimization procedure . 
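the kind of information minimization over correlations discussed here ( and in the cross - check described next ) can be illustrated with a toy scan . in the sketch below , two hypothetical measurements share one systematic error source ; the combined fisher information is evaluated as a function of the assumed correlation of that source , and the `` most conservative '' value is the one minimizing the information . all numbers are hypothetical and the code is not meant to reproduce the bluefin minimizations .

```python
import numpy as np

def combined_info(rho, stat, syst):
    """Information about one parameter from two measurements with independent
    statistical errors and one shared systematic source of correlation rho."""
    s = np.asarray(syst, dtype=float)
    cov = np.diag(np.asarray(stat, dtype=float) ** 2)
    cov += np.array([[s[0] ** 2, rho * s[0] * s[1]],
                     [rho * s[0] * s[1], s[1] ** 2]])
    u = np.ones(2)
    return u @ np.linalg.solve(cov, u)

stat = [0.5, 0.7]            # hypothetical statistical errors
syst = [0.8, 1.4]            # hypothetical systematic errors from one source

rhos = np.linspace(0.0, 0.99, 199)
infos = np.array([combined_info(r, stat, syst) for r in rhos])
rho_conservative = rhos[np.argmin(infos)]

print(rho_conservative)                              # about 0.8, not 1
print(infos.min(), combined_info(0.99, stat, syst))  # info grows again near full correlation
```

in this toy configuration the information is minimized ( the combined error is maximized ) at a correlation of about 0.8 , so taking the source as fully correlated would not be the most `` conservative '' choice ; this is the behaviour that the derivative - based tools discussed in this section are designed to flag .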
as an additional cross - check of the onionization prescription, we also tested a fourth type of minimization , where information is independently minimized for each source of uncertainty as if this was the only one , varying each time only the correlations in that error source ( after removing those measurements not affected by it and slightly reducing the allowed correlation ranges to keep the partial covariance positive definite ) .the preliminary results of this test ( not included in table [ tab : blue5xse ] ) indicate that these minimizations do not seem to significantly move partial covariances or the final result away from those obtained through the onionization prescription , which are used as a starting point also in this case .we conclude this section by reminding that the prescriptions presented here are only empirical recipes that assume no prior knowledge of the physics involved and , for this reason , can never represent valid substitutes for a careful quantitative analysis of correlations using real or simulated data . a precise estimate of correlations is important in general , but absolutely necessary in high correlation regimes , where it may be as important as a precise assessment of measurement errors themselves .combining many correlated measurements is a fundamental and unavoidable step in the scientific process to improve our knowledge about a physical quantity . in this paper, we recalled the relevance of the concept of fisher information to quantify and better understand this knowledge .we stressed that it is extremely important to understand how the information available from several measurements is effectively used in their combination , not only because this allows a fairer recognition of their relative merit in their contribution to the knowledge about the unknown parameter , but especially because this makes it possible to produce a more robust scientific result by critically reassessing the assumptions made in the combination . in this context , we described how the correlations between the different measurements play a critical role in their combination .we demonstrated , in particular , that the presence of negative coefficients in the blue weighted average of any number of measurements is a sign of a `` high - correlation regime '' , where the effect of increasing correlations is that of reducing the error on the combined result .we showed that , in this regime , a large contribution to the combined knowledge about the parameter comes from the joint impact of several measurements through their correlation and we argued , as a consequence , that the merit for this particular contribution to information can not be claimed by any single measurement individually . 
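the last point , that part of the combined information is generated jointly by correlated measurements and can not be attributed to any one of them , can also be seen in a small numerical example . the sketch below uses our own illustrative quantities ( the share of information a measurement would carry alone , and the share lost when it is removed ) with hypothetical numbers ; it is not a reproduction of the exact information - weight formulas introduced earlier in the paper .

```python
import numpy as np

def info(cov, keep=None):
    """Fisher information about a single parameter from the subset 'keep'
    of unbiased measurements with covariance matrix cov."""
    cov = np.asarray(cov, dtype=float)
    idx = np.arange(len(cov)) if keep is None else np.asarray(keep)
    sub = cov[np.ix_(idx, idx)]
    u = np.ones(len(idx))
    return u @ np.linalg.solve(sub, u)

cov = np.array([[1.0, 1.8, 0.0],
                [1.8, 4.0, 0.0],
                [0.0, 0.0, 1.2]])   # hypothetical: a correlated pair plus an independent measurement
total = info(cov)

for i in range(len(cov)):
    alone = (1.0 / cov[i, i]) / total            # share it would carry on its own
    others = [j for j in range(len(cov)) if j != i]
    lost = (total - info(cov, others)) / total   # share lost if it is removed
    print(i, round(alone, 3), round(lost, 3))
```

with these hypothetical numbers the `` alone '' shares sum to less than 1 and the `` lost '' shares to more than 1 ; the gap is the contribution generated collectively by the correlated pair , which , as argued above , can not be credited to either of its members individually .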
in particular , we presented our objections to the standard practice of presenting the `` relative importances '' of different measurements based on the absolute values of their blue coefficients , and we proposed the use of ( `` intrinsic '' and `` marginal '' ) `` information weights '' instead .in the second part of the paper , we questioned under which circumstances assuming systematic errors as fully correlated can be considered a `` conservative '' procedure .we proposed the use of information derivatives with respect to inter - measurement correlations as a tool to identify those `` nominal '' correlations for which this assumption is wrong and a more careful evaluation is necessary .we also suggested a few procedures for trying to make a combination more `` conservative '' when a precise estimate of correlations is simply impossible .we should finally note that blue combinations are not the only way to combine different measurements , but they are actually the simplest to understand when combinations are performed under the most favorable assumptions that measurements are multivariate gaussian distributed with covariances known a priori , as in this case all relevant quantities become easily calculable by matrix algebra .we therefore stress that , while the results in this paper were obtained under these assumptions and using the blue technique , large positive correlations are guaranteed to have a big impact , and should be watched out for , also in combinations performed with other methods or under other assumptions .this work has been inspired by many discussions , during private and public meetings , on the need for critically reviewing the assumptions about correlations and the meaning of `` weights '' , when combining several measurements in the presence of high correlations between them .it would be difficult to mention and acknowledge all those colleagues who have hinted us towards the right direction and with whom we have had very fruitful discussions .we are particularly grateful to the members of the toplhcwg and to the atlas and cms members who have helped in the reviews of the recent top mass combinations at the lhc .we would also like to thank our colleagues who have sent us comments about the first two public versions of this paper .in particular , it is a pleasure to thank louis lyons for his extensive feedback and his very useful suggestions .we are also grateful to the epjc referees for their detailed and insightful comments , as well as for making us aware of the research presented in ref . .finally , a.v .would like to thank the management of the cern it - es and it - sdc groups for allowing him the flexibility to work on this research alongside his other committments in computing support for the lhc experiments .the aleph , delphi , l3 and opal collaborations and the lep electroweak working group , _ electroweak measurements in electron - positron collisions at w - boson - pair energies at lep _rep . 532 ( 2013 ) 119 the tevatron electroweak working group for the cdf and d0 collaborations , _ combination of cdf and do results on the mass of the top quark using up to 5.8 fb of data _ , arxiv:1107.5255v3 ( 2011 )
|
we discuss the effect of large positive correlations in the combinations of several measurements of a single physical quantity using the best linear unbiased estimate ( blue ) method . we suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter , using the well - established concept of fisher information . we argue , in particular , that one contribution to information comes from the collective interplay of the measurements through their correlations and that this contribution can not be attributed to any of the individual measurements alone . we show that negative coefficients in the blue weighted average invariably indicate the presence of a regime of high correlations , where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate . in these regimes , we stress that assuming fully correlated systematic uncertainties is not a truly conservative choice , and that the correlations provided as input to blue combinations need to be assessed with extreme care instead . in situations where the precise evaluation of these correlations is impractical , or even impossible , we provide tools to help experimental physicists perform more conservative combinations .
|
publications in scholarly journals establish the body of knowledge deriving from scientific research ; they also play a fundamental role in the career path of scientists and in the evaluation criteria of funding agencies .a previous scientometric study highlighted that software - oriented publications are underrepresented with respect to hardware - oriented ones in the field of high energy physics ( hep ) .the results of that analysis showed that the relative difference between the production of scholarly literature in these areas had increased in the context of the experiments at lhc ( large hadron collider ) with respect to the previous generation s experiments at lep ( large electron - positron collider ) .the analysis in was performed prior to the start of operation of lhc . the scientometric analysis summarized in this paper , which reflects a presentation on this topic at the chep ( computing in high energy physics ) 2012 conference , reviews the publication patterns in hep computing in greater detail , with special emphasis on their evolution since the beginning of lhc operation .the study summarized in this paper provides a quantitative overview of publication patterns in high energy physics over the past thirty years , with emphasis on software - oriented publications .the scientometric analysis is focused on a set of topics that are representative software r&d ( research and development ) in the context of hep .the selection is far from exhaustive of the wide variety of research activities in experimental high energy physics , rather it intends to highlight some distinctive features of the literary production in the field .the analysis concerns a representative sample of general software tools , which respond to common needs of the hep experimental community , and a sample of hep experiments of the current and past generation .two widely used general software tools , geant4 and root are the object of a detailed scientometric analysis .more limited investigations concern the publications associated with other software tools contributing to the general computing infrastructure of lhc experiments , such as the lhc computing grid . the four major experiments at lep , the alice , atlas , cms , lhcb and totem experiment at lhc , and the babar experiment at the slac b - factory are included in the scientometric analyis .the sample subject to evaluation consists of regular publications in established peer reviewed journals .contributions to conference proceedings , books , institutional reports , items in preprint archives , white papers posted on web sites and software manuals are not considered . some journals ( e.g. 
nuclear instruments and methods , nim ) also publish conference proceedings , usually in dedicated issues : these articles have been identified and excluded from the analysis .the examined scientometric indicators include the number of publications produced by the various subjects under study , their time distribution , the journals where they are published and their citation patterns .the main source for the scientometric analysis reported in this paper is thomson - reuters web of science , which is considered the most authoritative reference for bibliometric information in the academic environment .the authors institutional subscription gives access to a subset of it , the `` science citation index expanded '' database ; it does not include the `` conference proceedings citation index '' .the database covers the period from 1970 to date .the access to a subset of the web of science generates an apparent mismatch between the total number of citations associated with a paper , which includes entries from the `` conference proceedings citation index '' , and the actual number of citations available for analysis , which is limited to publications in journals belonging to the `` science citation index expanded '' .a further complication for scientometric analysis is due to the incorrect classification of some publications listed in the `` science citation index expanded '' as `` conference proceedings '' : this label is arbitrarily attributed by thomson - reuters to some regular articles in journals that never publish conference proceedings ( e.g. ieee transactions on nuclear science , tns ) .conversely , some entries in the `` science citation index expanded '' that are not labeled as `` conference proceedings '' , appear in journals ( e.g. nuclear instruments and methods ) as contributions to conference proceedings .these errors in the web of science have been manually corrected in the analysis whenever possible : for instance , all papers published in ieee transactions on nuclear science are considered in the analysis as regular journal publications , irrespective of thomson - reuters classification of some of them as conference proceedings , and papers published in nuclear instruments and methods a issues dedicated to conference proceedings have been removed from the analysis , even if they are not identified as `` conference proceedings '' in the web of science .other sources have been used to cross - check and complement the information derived form the web of science : the web sites of the publishers of technological journals relevant to hep and cern document system ( cds ) .the comparison of the data retrieved from these sources has highlighted some omissions and inconsistencies in the data sample retrieved from thomson - reuters web of science : for instance , some papers published by lhc experiments , which are listed in the cds database , do not appear in the web of science , and the number of citations of a paper reported in the publisher s web site is in some cases inconsistent with that reported by the web of science .the publications by hep experiments are distinguished into physics papers ( i.e. publications of experimental results representing the object of the experiment ) and technological papers ( i.e. 
publications about the instruments and methods that contribute to produce the experimental results ) .technological publications are further identified as hardware - oriented , software oriented or dealing with data acquisition ( daq ) and trigger .this classification implies some degree of subjectivity , which has been mitigated by performing cross - checks over the selections performed by individual analysts .it is worthwhile to note that the classification of publications is part of the regular professional practice of the authors of this paper , either as members of the editorial board of a core journal in nuclear technology or as responsible of the library of a major hep laboratory .the attribution of a paper to a given category is based on a variety of criteria . in some casesthe title of a paper or the journal where it is published unambiguously identify its topic : for instance , papers published in _ physical review d _are all classified as physics papers . for most papers , the record in the web of science , which also includesthe abstract , provides sufficient information to identify the scope of the paper and to classify it in one of the above mentioned categories . in caseswhere the attribution is not evident based on the information in the web of science , the full text of the paper was evaluated .complementary analyses based on the web of science and on publishers web sites data , performed independently by different analysts , confirmed the robustness of the classification . based on detailed cross checks over selected samples ,the uncertainty in the results reported in this paper , which derives from intrinsic inconsistencies in the web of science and from subjective classification of papers , can be estimated of the order of a few percent .this level of uncertainty does not affect critically the conclusions of this study .the analysis reported in the following sections covers three decades of scientific literature ( 1982 - 2011 ) ; it is limited to papers published until the end of 2011 to ensure the reproducibility of results based on the web of science .unless differently stated , the number of citations reflects the status in the web of science as on 14 may 2012 , i.e. one week prior to the beginning of the chep 2012 conference .two software tools used by lhc experiments for simulation and data analysis have been evaluated : geant4 and root .geant4 is documented in two reference publications , which are brought to the attention of the experimental community in the geant4 web page .these papers have collected respectively 2934 and 574 citations ( including citations from conference proceedings indexed by the web of science ) ; reference has crossed the threshold of 3000 citations shortly after the chep conference ( 3037 citations by 18 june 2012 ) .reference is the most cited publication in the `` nuclear science and technology '' category over the period considered in this scientometric study . excluding the review of particle properties , it is the most cited paper produced by cern and by infn . 
the time distribution shown in figure [ fig_g4cite_years ] shows that citation to the more recent reference , published in 2006 , is omitted by most publications that cite the earlier one , published in 2003 .although the development of geant4 was originally motivated by the requirements of lhc experiments , the source of the citations to its reference paper shows the widely multidisciplinary character of its use in the scientific community .figure [ fig_g4cite_journals ] lists the journals contributing the largest number of citations to : it includes physics journals with various scope ( high energy physics , nuclear physics , astroparticle physics ) , nuclear technology journals , medical physics and radiation protection journals , and a regional journal ( published by a national physics society ) .one can observe in figure [ fig_g4cite_coll ] that only a relatively small number of citations to are associated with lhc collaborations at this stage of their life - cycle .it is worthwhile to note that only approximately 20% of the citations to listed in the web of science are formally associated with a collaboration ( identified as `` group authors '' ) ; the vast majority of publications citing appear as the product of individual research groups , rather than of formal experimental organizations .figure [ fig_g4cite_coll ] lists the collaborations that contribute the largest number of citations ; they correspond to approximately 75% of the citations associated with collaborations in the web of science .root is documented in two reference publications , published in 1997 and 2009 .the earlier one is a contribution to a workshop proceedings , while the later one is a regular journal publication .these papers have collected respectively 540 and 27 citations ( including citations from conference proceedings indexed by the web of science ) .the time distribution shown in figure [ fig_rootcite_years ] shows a similar pattern as in figure [ fig_g4cite_years ] : the citation to the more recent reference is omitted by most publications that cite the earlier one .the citations to root earlier reference have a multidisciplinary character , as is visible in figure [ fig_rootcite_journals ] , although the relative contribution from various domains appears different for geant4 and root .the distribution of the domains of the citations listed in figures [ fig_g4cite_journals ] and [ fig_rootcite_journals ] is summarized in table [ tab_g4root ] : citations to geant4 appear equally distributed between physics and nuclear technology journals , while nuclear technology journals are the most relevant source of citations to root ; also , the fraction of citations from medical physics and radiation protections journals is significantly larger for geant4 than for root .it is worhtwhile to remind the reader that table [ tab_g4root ] , similarly to figures [ fig_g4cite_journals ] and [ fig_rootcite_journals ] , reflects the major sources of citations , amounting to approximately 75% of total citations collected by geant4 and root main references .[ tab_g4root ] .source of citations to geant4 and root main reference papers . [ cols="<,^,^ " , ]the number of publications produced by the hep experiments considered in this study is plotted in figure [ fig_exp_pub ] . 
the plot distinguishes papers belonging to various categories : physics , hardware , software , daq - trigger and general . physics papers are the dominant component for the experiments that terminated the data - taking phase and are close to the end of their lifecycle , while they represent a small fraction of the publications by lhc experiments , which are in the early stage of their run . the last category includes papers describing the whole detector , or the performance of some subsystems , which involve hardware and software aspects . software - related papers appear to be a small fraction of publications for all the experiments : this trend is evident in figure [ fig_exp_pubtech ] , which shows the apportioning of technological papers across the three categories of hardware , software and daq - trigger . the relatively smaller presence of software publications in the production of lhc experiments is confirmed by a more detailed analysis performed over the papers published since the start of lhc operation in 2008 in two representative nuclear technology journals , nim a and tns , shown in figure [ fig_exp_hwsw2008 ] . the time distribution of the publications produced by the experiments considered in this study is shown in figure [ fig_exp_pubyear ] . the horizontal scale of the plots takes as a reference the year when lep ( 1989 ) , babar ( 1998 ) and lhc ( 2008 ) started running . figure [ fig_exp_pubyear ] shows both the total count of papers produced per year , and the number of published papers per collaboration member along the lifecycle of the experiments . the number of collaboration members is subject to variation over the lifetime of an experiment ; a constant number is assumed in this study , due to the difficulty of ascertaining the number of collaboration members as a function of time for all the experiments . the size of the lhc collaborations is assumed to be the number of members reported in the cern `` greybook '' at the time of the chep conference ; for lep experiments and babar the size of the collaboration was taken as the number of authors of their most cited paper . the number of collaboration members assumed in this study is shown in figure [ fig_members ] . the number of hardware , software and daq - trigger publications appears approximately constant across the three generations of hep experiments considered in this study , when it is scaled to the collaboration size , as shown in figure [ fig_exp_techmem ] . the ratio of hardware to software publications , shown in figure [ fig_exp_hwsw ] , is also approximately constant across the experiments : hardware papers outnumber software ones by approximately a factor of four . this result differs from that reported in , which depicted an earlier stage of the lifecycle of lhc experiments , preceding the start of lhc operation .
the difference could be also partly explained by evolutions in the web of science since the publication of , namely the move of a large number of conference papers to a dedicated database , which excludes them from the analysis reported here .figure [ fig_journal_all ] illustrates the distribution of papers published by hep experiments in physics and technological journals .the histogram involves the journals collecting the largest number of publications by the experiments considered in this study .one can observe in figure [ fig_journal_years ] that the relative importance of some journals has evolved over the years in the field : among technological journals , tns has increased its popularity in the hep domain in the last decade , while jinst ( journal of instrumentation ) is growing rapidly .the distribution of the number of citations collected by various categories of hep experimental papers is shown in figure [ fig_citations ] : physics papers receive a larger number of citations than technological papers . the fraction of physics papers that are not cited amounts to 4% , while it is much larger for technological papers : 17% for hardware , 25% for software and 27% for daq - trigger publications within the data sample examined in this study .physics papers include a larger number of references than technological papers , as it appears in figure [ fig_references ] : the different citation habits in these two domains are prone to affect the citation patterns shown in [ fig_citations ] .the citations to the physics papers of the hep experiments considered in this study come almost entirely from journals specialized in high energy physics or closely related fields , such as nuclear physics and astroparticle physics : the journals contributing more than 90% of the citations to physics papers published by representative lep experiments ( aleph and delphi ) and lhc experiments ( atlas and cms ) are listed in figure [ fig_exp_physcite ] .technological papers published by hep experiments are cited by high energy and nuclear physics journals , by nuclear technology journals and by review journals , as is illustrated in [ fig_exp_techcite ] . differently from what observed for the general software tools examined in section [ sec_tools ], the papers published by hep experiments do not appear to collect a significant number of citations from other disciplines , such as medical physics and radiation protection .a large fraction of the citations collected by lhc technological publications consists of self - citations ( i.e. the citing papers include at least one of the authors of the cited work ) : this pattern is illustrated in figures [ fig_exp_nimcite ] and [ fig_exp_tnscite ] for the two journals collecting the largest number of technological publications by lhc experiments , nim a and tns .the most cited publications produced by hep experiments are in most cases the respective reference papers describing the whole detector : these papers are usually cited by the papers reporting the physics results of the experiment . 
in the data sample examined in this study , excluding lhc experiments , the number of citations collected by the most cited paper varies from 309 for the delphi experiment to 859 for the babar experiment ; the citation statistics is not yet meaningful for lhc experiments , that are at an early stage of their physics production .grid computing is an essential component of the operation of lhc experiments : a large effort has been invested in the past decade to develop the grid computing infrastructure and several application tools used by lhc experiments within a project known as `` lhc computing grid '' ( lcg ) . a search for publications associated with lcg in the web of science results in a small sample , consisting of less than 20 papers .grid computing has represented a large fraction of the scientific program of the chep conference for the past decade , in addition to dedicated conferences .the small sample of journal publications related to lcg retrieved in the web of science suggests that only a limited fraction of conference presentations in this field evolves into regular publications in scholarly journals . due to the small sample size , a statistical analysis of lcg publicationsdoes not appear meaningful .the scientometric analysis reported in this paper provides a quantitative overview of publication patterns in hep experiments , covering the last three decades .the analysis has confirmed the general trend observed in a previous study : software related papers are largely underrepresented with respect to hardware papers in the high energy physics experimental environment .the ratio of hardware to software papers is approximately constant over the experiments of the lep and lhc generations .software papers collect in average fewer citations than hardware papers ( and physics papers ) ; they also cite fewer references in their bibliography .the analysis of citations to papers published by hep experiments shows that both physics and technological papers collect the largest number of citations within the hep environment ; a small fraction of citations comes from closely related fields , such as nuclear and astroparticle physics .general software tools motivated by the requirements of hep experiments , such as geant4 and root , exhibit different patterns .the earlier geant4 reference has received more than 3000 citations at the time of writing this paper : it is a landmark paper in thomson - reuters nuclear science and technology category , and the most cited publication for major institutions such as cern and infn .the analysis of the citations collected by these software tools shows the multidisciplinary character of these tools , which appear to be used in a variety of experimental fields not limited to hep .geant4 is cited by a large number of physics papers , which confirm its significant role in the production of physics results by hep experiments in the lhc era .the authors are grateful to the cern library for the support provided to this study .99 basaglia t _ et al _ 2008 _ ieee trans . nucl .sci . _ * 55 * 671
|
an overview is presented of the evolution of computing - oriented publications in high energy physics following the start of operation of lhc . quantitative analyses are illustrated , which document the production of scholarly papers on computing - related topics by high energy physics experiments and core tools projects , and the citations they receive . several scientometric indicators are analyzed to characterize the role of computing in high energy physics literature . distinctive features of software - oriented and hardware - oriented scholarly publications are highlighted . current patterns and trends are compared to the situation in previous - generation experiments .
|
in this paper , we study the problem of global testing for periodicity .suppose , is a real - valued time series observed at equispaced time points , that satisfies the model where the noise are i.i.d .normal variables . in the complex - valued case ,similarly , the observed series satisfies the model where represents zero - mean i.i.d .complex white noise , that is , with \sim\mathcal{n}(\mathbf{0 } , \frac{\sigma^2}{2}\mathbf{i}_{2}) ] are discussed in sections [ sectheory ] and [ secnumerics ] .we will fix a threshold level , and reject if and only if .this test is referred to as the _ over - sampled periodogram higher criticism _ ( ophc ) test .an important question is how to choose the over - sampling rate .let . roughly speaking , the success of detection by the higher criticism based on the sequence depends on whether has nonzero elements with sufficiently large magnitudes .if the frequencies are on the fourier grid , the spikiness of is implied by the spikiness of .for example , if and by choosing , we can calculate that has sparsity one , and .therefore , the proposed test is desirable as long as is sufficiently large .however , if the frequencies are off the fourier grid , then for , the spikiness of may not imply the spikiness of .for example , if and one chooses , simple calculation yields this means the resulting is not as spiky as in the case where the frequencies are on the fourier grid , and then the performance of higher criticism based on may be not optimal . in order to increase the spikiness of , we propose to choose the over - sampling rate .our main result theorem [ upperboundsparse ] guarantees that as long as the frequencies of the complex sinusoids in the mean of obey some minimum separation condition , this over - sampling rate leads to an asymptotically optimal global test .a key step in the proof is to show that has significant nonzero components . ) . ] in other words , the spikiness of is translated to the spikiness of .we emphasize that this over - sampling rate is independent of the grid parameter and the sparsity .the higher criticism method was originally coined by john tukey and introduced in donoho and jin for signal detection under a sparse homoscedastic gaussian mixture model , which was previously studied in ingster .cai , jin and low investigated minimax estimation of the nonnull proportion under the same model .hall and jin proposed a modified version of the high criticism for detection with correlated noise with known covariance matrices .cai , jeng and jin considered heteroscedastic gaussian mixture model and showed that the optimal detection boundary can be achieved by a double - sided version of the higher criticism test .the papers considered a related problem of detecting a signal with a known geometric shape in gaussian noise .cai and wu studied the detection of sparse mixtures in the setting where the null distribution is known , but not necessarily gaussian and established the adaptive optimality of the higher criticism for the detection of such general sparse mixtures . 
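to fix ideas , the following short python sketch generates data from the complex - valued model just described : a sparse superposition of complex sinusoids with frequencies that need not lie on the fourier grid , observed in i.i.d . complex gaussian noise . the sample size , sparsity , amplitudes and frequencies are hypothetical .

```python
import numpy as np

rng = np.random.default_rng(0)

n = 512                                  # number of observations (hypothetical)
s = 3                                    # number of hidden sinusoids (hypothetical)
freqs = rng.uniform(0.0, 1.0, size=s)    # off-grid frequencies
amps = 2.0 * np.exp(2j * np.pi * rng.uniform(size=s))  # common magnitude, random phases

t = np.arange(n)
mean = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))

# complex white noise with unit variance: real and imaginary parts are N(0, 1/2)
noise = rng.normal(scale=np.sqrt(0.5), size=n) + 1j * rng.normal(scale=np.sqrt(0.5), size=n)

y = mean + noise          # under the null hypothesis the mean term is absent
```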
in the special case in which ,that is , the frequencies are on the grid , the design matrix becomes the orthogonal dft matrix .multiplying the measurement by the inverse dft matrix , the design matrix is reduced to the identity design .therefore , the problem becomes equivalent to the standard sparse detection model discussed in , and the standard higher criticism test proposed in can be directly applied .notice that in the ophc test defined above , choosing in ( [ eqmatrixu ] ) is equivalent to multiplying the measurement by the inverse dft , so there is no need to over - sample the periodogram . if the dimension in ( [ eqmeasurement ] ) were known , the hypothesis testing model ( [ eqmeasurement ] ) considered in the present paper is also closely related to the global testing problem under a linear model with sparse alternatives .it is helpful to review some well - known results for the real - valued case in this line of research .consider the linear model : , where , are the design matrix and regression coefficients , respectively .the noise vector is assumed to be i.i.d .gaussian variables with mean and variance . the global detection of is still captured by the hypothesis test ( [ eqglobaltesting ] ) . in the recently developed literature of high - dimensional statistics, is comparable or much greater than , while the parameter vector is assumed to be sparse : .the tradeoff between the strength of the nonzero regression coefficients and the sparsity , by which the detectability of is captured , has been intensively studied in the literature . in order to simplify the analysis , it is convenient to assume that the nonzero components of have the same magnitude .the tradeoff between the signal strength and sparsity is reduced to a quantitative relationship between and for fixed and .this relationship also depends closely on the properties of the design matrix .there are two well - studied examples in the literature : * identity design matrix .when and , the detection boundary is given in .let and with ] and ^t : = [ \beta_{\tau_1 } , \ldots , \beta_{\tau_s } ] ^t,\ ] ] then under the alternative , the -sparse signal is uniquely parameterized by .the distribution of the measurement under the alternative is therefore parameterized by and , denoted as . under the null , consists of standard complex normal variables , denoted as .as discussed in section [ secliterature ] , throughout the paper , let with fixed , and with .this implies that , which is consistent with the assumption in .we assume the distances between the indices of the nonzero components of , that is , , satisfy the following minimum separation condition : a similar minimum separation condition appears in the literature of super - resolution ; see .this spacing condition holds asymptotically if the support is assumed to be random .assume that are the order statistics of independent uniformly distributed random variables in .for any , define the distance it is evident that . for any fixed , and any ] , the specific choice of , and the threshold . in our theoretical analysis , we choose , =[1 , \sqrt{\log\frac{n}{3}}] ] is asymptotically powerful : the most significant technical novelty in this paper lies in the proof of theorem [ upperboundsparse ] . 
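for orientation , the sketch below evaluates the classical detection boundary for sparse means with an identity ( orthogonal ) design , which is the benchmark recalled above for the on - grid case ; the boundary associated with the extended dft design studied in this paper is not reproduced here , since its explicit formula is not quoted in this text .

```python
import numpy as np

def rho_classical(beta):
    """Classical sparse-detection boundary (identity design), for 1/2 < beta < 1:
    rho*(beta) = beta - 1/2 on (1/2, 3/4], and (1 - sqrt(1 - beta))^2 on (3/4, 1)."""
    beta = np.asarray(beta, dtype=float)
    return np.where(beta <= 0.75, beta - 0.5, (1.0 - np.sqrt(1.0 - beta)) ** 2)

betas = np.linspace(0.55, 0.95, 9)
print(np.column_stack([betas, rho_classical(betas)]))
```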
in the analysis of under the alternative, the mean and covariance structure of , which is defined in ( [ eqoversampledperiodogram ] ) , requires more careful calculation than in existing work , for example , .in particular , the estimation of and the control of are treated cautiously based on a variety of cases . in relevant calculations , the structure of the edft design matrix needs to be sufficiently employed . under the null ,the statistic is related to the standard statistic discussed in , so the analysis is easier than the case of alternative .the following theorem gives the lower bound for the testing problem .[ lowerboundsparse ] under the same setup of theorem [ upperboundsparse ] , if , then all sequences of hypothesis tests are asymptotically powerless , that is , the proof of theorem [ lowerboundsparse ] is relatively easy , and it is given in the supplemental material .in fact , by taking advantage of the specific structure of the edft matrix , the deduction can be reduced to the case .the classic lower bound arguments in can then be directly applied .theorems [ upperboundsparse ] and [ lowerboundsparse ] together show that the proposed test is asymptotically optimal .we now compare with the detection boundary associated with the gaussian designs established in . as indicated in section [ sec13 ] , after normalizing the rows of the gaussian design , the magnitude parameter is denoted as .notice that in our model the magnitude parameter is , and the difference of is due to the distinction between real - valued and complex - valued models .therefore , it is fair to compare and directly .it is obvious that for all as long as .this implies that the detection boundary associated with the extended dft design matrix leads to milder trade - off between the rareness and the weakness of the nonzero components of than that of gaussian designs . to illustrate their differences, the two detection boundary functions are plotted in figure [ figdetectionboundaries ] for .in this section , we study the empirical behavior of the ophc test by numerical simulations . in terms of computation , it is more convenient to express the statistic as a function of , which are the ordered -values of , that is , .the test in the following numerical simulations is defined as which is equivalent to choosing = [ \sqrt{\log2 } , \sqrt{\log q}] ] defined in section [ sectheory ] . in the following ,we compare the empirical testing powers of the ophc test with various choices of .( red solid line ) and ( green dashed line ) for and . ] first , let and . then the empirical distribution of the ophc test statistic under the null can be derived by monte carlo simulation with independent trials , which is shown in the upper panel of figure [ figophc ] . and .under the mixed alternative , we let , and . empirical distribution of the ophc test statisticis plotted in the middle panel with known , and in the lower panel with estimated variance of noise . ] under the alternative , we assume that and .the support of is distributed uniformly at random , and the phase of the nonzero entries of are uniformly distributed on .all nonzero components of have the same magnitude with .we first assume that the variance is known .the resulting empirical distribution of under the mixed alternative is plotted in the middle panel of figure [ figophc ] by independent trials . 
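the test statistic described above can be assembled in a few lines . in the sketch below the over - sampled periodogram is computed by a direct ( non - fft ) transform , per - frequency p - values are obtained from the fact that each periodogram ordinate is exponentially distributed with mean 1 under the null with unit - variance complex noise , and a higher - criticism statistic is formed from the ordered p - values . the over - sampling factor and the range of order statistics used here are illustrative and do not reproduce the exact choices of the paper .

```python
import numpy as np

def oversampled_periodogram(y, q):
    """|x_m|^2 with x_m = n^{-1/2} sum_j exp(2*pi*i*(m-1)*(j-1)/q) * y_j, m = 1..q."""
    n = len(y)
    grid = np.exp(2j * np.pi * np.outer(np.arange(q), np.arange(n)) / q)
    return np.abs(grid @ y) ** 2 / n

def higher_criticism(pvals):
    """Generic higher-criticism statistic from the ordered p-values."""
    p = np.sort(pvals)
    q = len(p)
    k = np.arange(1, q + 1)
    return np.max(np.sqrt(q) * (k / q - p) / np.sqrt(p * (1.0 - p)))

rng = np.random.default_rng(1)
n = 256
y = rng.normal(scale=np.sqrt(0.5), size=n) + 1j * rng.normal(scale=np.sqrt(0.5), size=n)

per = oversampled_periodogram(y, q=4 * n)   # illustrative over-sampling factor
pvals = np.exp(-per)                        # exact per-ordinate p-values under the null
print(higher_criticism(pvals))
```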
in trials of them ,the empirical -values are smaller than , by which the periodicities are successfully detected .let us discuss the case where the variance is unknown . since it is necessary to make sure that the estimation of the variance is consistent under the null , we use the mean square of as the estimate .this estimator of is adopted in order to make fair numerical comparisons between different choices of .robust and efficient estimation of is an interesting problem .it has been considered , for example , in , in the case of gaussian design .efficient estimation of in the current setting is beyond the scope of this paper , and we leave it for future research .the resulting empirical distribution of is plotted in the lower panel of figure [ figophc ] . among the trials ,there are with empirical -values smaller than .next , we consider the ophc test with .we refer to this test as standard periodogram higher criticism ( sphc ) test .the empirical distribution of the sphc test statistic under the null is plotted in the upper panel of figure [ figphcdj ] .the setup of the mixed alternative is the same as in the experiments for the ophc test described before .suppose the variance is known .the distribution of the sphc test statistic under the alternative is plotted in the middle panel of figure [ figphcdj ] by trials . in trials of them ,the empirical -values are smaller than .this is worse than the ophc method with , where the periodicities are successfully detected in trials .when the variance of the noise is unknown , we still estimate it by the mean square of , such that the estimate is consistent under the null .the resulting empirical distribution of the sphc test statistic is plotted in the lower panel of figure [ figphcdj ] based on independent trials . in only trials among them ,the empirical -values are smaller than .this is also worse than ophc where the periodicities are successfully detected in independent trials . and .under the mixed alternative , we let , and .empirical distribution of the sphc test statistic is plotted in the middle panel with known , and in the lower panel with estimated variance of noise . ] and . under the mixed alternative, we let , and .empirical distribution of this test statistic is plotted in the middle panel with known , and in the lower panel with estimated variance of noise . ] finally , we assume were known and consider the ophc test with . in this casethe ophc test coincides with the method proposed in .the empirical distribution of this test statistic under the null by independent trials is plotted in the upper panel of figure [ figophcacp ] .the setup of the mixed alternative is the same as before .when the variance is known , the distribution of this test statistic under the alternative is plotted in the middle panel of figure [ figophcacp ] by trials . in trials of them ,the empirical -values are smaller than , which is slightly worse than the ophc method with as mentioned before .when the variance of the noise is unknown , with its estimation by the mean square of , the resulting empirical distribution of this test statistic is plotted in the lower panel of figure [ figophcacp ] based on independent trials . in trials among them , the empirical -values are smaller than , which is also slightly worse than the ophc method with . 
since is actually unknown , it would be more convenient to choose .motivated by periodicity detection in complex - valued time series analysis , we investigated the hypothesis testing problem ( [ eqglobaltesting ] ) under the linear model ( [ eqmeasurement ] ) , where the frequencies of the hidden periodicities are not necessarily on the fourier grid , and the number of sinusoids grows in . the ophc test , a higher criticism test applied to the periodogram over - sampled by , is proposed to solve this problem . in terms of theory , by assuming that the frequencies satisfy a minimum separation condition , a detection boundary between the rareness and weakness of the sinusoids is explicitly established . perhaps surprisingly , the detectable region for the edft design matrix is broader than that of gaussian design matrices .for ease of exposition , we assume that is finite but unknown , although is allowed to be infinity by slightly modifying our argument .numerical simulations validate the choice by being compared to the choice , that is , the standard periodogram higher criticism , and , that is , excessively over - sampled periodogram methods . in a recently published paper , it is shown that higher criticism statistics might not be as powerful as berk jones statistics empirically .we find it interesting to investigate alternative global testing methods for periodicity detection both theoretically and empirically , but we leave this as future work . the hypothesis testing problem considered in this paper is related to a number of other interesting problems .we briefly discuss them here , along with several directions for future research .a related important statistical problem is to estimate the frequencies of the periodicities in a given series .sinusoidal regression methods date back to 1795 by prony with many later developments including , to name a few . in the casewhere the number of frequencies is fixed and few , numerous statistical analyses for frequency estimation have been performed in the literature .for example , an insightful threshold behavior of mle was presented in . to determine the number of frequencies , model selection methods are usually applied ; see , for example , . an extensive study on this subjectcan be found in the classical text book and references therein .in contrast , when the number of frequencies is large , although sparse recovery and total - variation minimization can be used for frequency retrieval , their statistical efficiency is not clear .it is interesting to develop both computationally and statistically efficient methods to estimate a number of frequencies hidden in the observed sequence .compared to frequency estimation , denoising , that is , estimation of the mean in ( [ eqmeasurement ] ) , is a conceptually easier statistical task . 
in the recent papers ,sdp methods were shown to enjoy nearly - optimal statistical properties .it would be interesting to establish theoretically optimal methods .considering the hypothesis test problem ( [ eqtesting ] ) in the real case ( [ eqrealmodel ] ) , the ophc test can be applied to the real sequence by the idea of complexification , that is , transforming into for .consequently , the mean of amounts to a superposition of complex sinusoids , and the noise part in consists of a sequence of complex white noise .then the hypothesis test problem is reduced to the complex case .it would be interesting to investigate whether ophc method can be applied to the real series directly with potential statistical advantage .this section is dedicated to the proofs of theorems [ upperboundsparse ] and [ lowerboundsparse ] .we begin by collecting a few technical tools that will be used in the proof of the main results .[ defcomplexnormal ] we say that is an -dimensionalcomplex - valued multivariate normal vector with distribution , if ]. then \leq\frac{1}{\log^2 n}\ ] ] provided . here , and are positive numerical constants .suppose is an underdetermined positive parameter , which will be specified later in order to establish the detectable region .denote , and hence . by the definition of ( [ eqoversampledperiodogram ] ) and the definition of ( [ eqbetatilde ] ), we have and \label{eqthetaw } \\[-8pt ] \nonumber & & { } + \frac{1}{\sqrt{n}}\sum_{j=1}^n e^{\sfrac{2 \pi i ( m-1)(j-1)}{q } } z_j \\ & : = & \theta_m + w_m , \qquad m=1,\ldots , q. \nonumber\end{aligned}\ ] ] it is obvious that is deterministic and is a -dimensional complex multivariate normal vector .first , since are independent , we know and .this implies and and , therefore , . for any and , straightforward calculation gives and this implies \sim\mathcal{cn}{\left}(\mathbf{0 } , \bigl [ { 1 \atop \bar{\xi } } \enskip { \xi \atop 1 } \bigr ] , \mathbf{0 } { \right}) ] relative to .suppose on /\{0\sim1\} ] only depending on such that such that both inequalities in ( [ eqconstraints ] ) hold .notice that lies in the domain of , that is , $ ] .then when , we have the inequality , and hence . since is independent of the choice of , we have y. hua and t. sarkar .matrix pencil method for estimating parameters of exponentially damped / undamped sinusoids in noise ._ ieee trans . on acoustics , speech and signal processing _ , 38(5):814 824 , 1993 .r. prony .essai exprimental et analytique : sur les lois de la dilatabilit de fluides lastique et sur celles de la force expansive de la vapeur de lalkool , diffrentes tempratures ._ journal de lecole polytechnique _ , 1(2):24 76 , 1795 .
|
in this paper , an _ over - sampled periodogram higher criticism _ ( ophc ) test is proposed for the global detection of sparse periodic effects in a complex - valued time series . an explicit minimax detection boundary is established between the rareness and weakness of the complex sinusoids hidden in the series . the ophc test is shown to be asymptotically powerful in the detectable region . numerical simulations illustrate and verify the effectiveness of the proposed test . furthermore , the periodogram over - sampled by is proven universally optimal in global testing for periodicities under a mild minimum separation condition .
|
we consider the following problem : find such that where where is a bounded polygonal domain .we assume that and which may be discontinuous , while .the representative examples of the problem are subsurface flows in heterogeneous media where the heterogeneity varies over a wide range of scales .the aim of this paper is to design and analyze a feti - dp method for solving such problems based on a composite fe / dg discretization . instead of using the full dg method over the whole domain ,the composite fe / dg method employs conforming fe methods inside the subdomains , while applies a dg discretization only on the subdomain interfaces to deal with the nonmatching meshes across the interfaces ; see .the local bilinear forms of the discrete problem are composed of three symmetric terms : the one associated with the energy , the one ensuring consistency and symmetry , and the interior penalty term to handle the nonconforming fe spaces across the interfaces ; see cf . - .such discretization allows for nonmatching grids which provides greater flexibility in the choice of mesh partitioning and memory storage .this may be useful particularly when the coefficient varies roughly in one subdomain and mildly in the others .feti - dp methods , as well as feti and bddc , have been well established as a class of nonoverlapping domain decomposition methods for solving large - scale linear systems .these methods have been used widely for standard continuous fe discretization , and verified to be successful both theoretically and numerically ; see and references therein .feti - dp method was firstly introduced in following by a theoretical analysis provided in . in feti - dp algorithms , we need a relatively small number of continuity constraints across the interface in each iteration step .the continuity of the solution across the subdomain interfaces is enforced by lagrange multipliers , while the continuity at the subdomain vertices is enforced directly by assigning unique values .the methods were further improved in to use the continuity constraints on the averages across the edges on subdomain interfaces .the feti - dp methods have been developed more recently , and possess several advantages over the one - level feti method ; see cf . .the feti - dp method was firstly considered for composite fe / dg discretization in .we will follow the same framework as described therein . in , the discontinuities of the coefficients are assumed to occur only across the subdomain interfaces .the main purpose of this paper is to extend the methodology to the case where the coefficients are allowed to have large jumps not only across but also along the subdomain interfaces and in the interior of the subdomains .we recall that such problems were investigated in the context of feti methods in . in this paper , we will use the same dg bilinear form as in , construct our feti - dp preconditioner as in , and define the components of the scaling matrix as proposed in . for the theoretical aspect, we employ the cut off technique and the generalized discrete sobolev type inequality , cf . , as well as the standard estimates of the edge and vertex functions , cf .it will be proved that the convergence of the feti - dp method only weakly depends on the jump of coefficients , i.e. , linearly depends on the contrast of the coefficients in the boundary layer . 
herewe define the boundary layer as the union of fine triangles that touch the subdomain boundaries .we also show that the condition number estimate of the proposed method is quadratic dependence on where is the subdomain diameter and is the fine mesh size . this quadratic dependence on be relaxed to a weaker dependence of under stronger assumptions on the coefficients in the interior of the subdomains .the remaining part of this paper is organized as follows . in section 2 ,we introduce the composite fe / dg formulation of problem . the feti - dp method is presented in section 3 .the main results of the paper are given in section 4 about the analysis of the condition number estimate .numerical results are provided in section 5 to confirm the theoretical analysis . in the last sectionwe summarize our findings and discuss certain extensions . throughout this paperwe denote a sobolev space of order by the standard notation with norm given by ; see e.g. , for exact definition . for use instead of and write the norm as .in addition , stands for with positive constants and depending only on the shape regularity of the meshes .in this section we present the dg formulations of problem that will be studied here .let the domain and be disjoint shape regular polygonal subdomains of diameters .denote the subdomain boundaries by .for each , we introduce a shape regular triangulation with the mesh size .note that the resulting triangulation of is in general nonmatching across .we assume that the substructures form a geometrically conforming partition of , i.e. , the intersection is either empty , or a common vertex or edge of and .let us denote the common edge .although and are geometrically the same object , we will treat them separately since we consider different triangulations on and on , with the mesh size of and , respectively . in the textbelow , we use and to denote the set of nodal points of the triangulation on and with mesh sizes and , respectively , and and when the endpoints are included .moreover , the two triangulations and can be merged to obtain a finer but the same mesh on and .we also denote when there is an intersection between and the global boundary .let us denote by the set of indices to refer to the edges , i.e. , of which has a common edge with , and by the set of indices to refer to the edges .the set of indices of all edges of is denoted by . for simplicity, we assume that the coefficient , which can be fulfilled by scaling with . without loss of generality again , we assume that is constant over each fine triangle. the analysis will depend on the coefficient in a boundary layer near subdomain boundaries .for each subdomain , we define the boundary layer by i.e. , the union of fine triangles in that touch the boundary . furthermore , we set let be restricted to .we define the harmonic averages along the edges as follows : note that the functions and are piecewise constant over the edge on the mesh that is obtained by merging the partitions and along this common edge .it is easy to check that let be the standard finite element space of continuous piecewise linear functions in .define and represent functions as with .we do not assume that functions in vanish on .the discrete problem obtained by the dg method is of the form : find with such that where here each local bilinear form is given as the sum of three symmetric bilinear forms : where and here denotes the outward normal derivative on , and is a positive penalty parameter .when , we set , and let and be defined in . 
when , we set , and define and .we introduce the bilinear form with it is easy to check that is symmetric and positive definite , which can induce a broken norm in by .the next lemma characterizes the equivalence between the bilinear forms and .this equivalence implies the existence and uniqueness of the solution to the discrete problem , and also allows us to use the bilinear form instead of for preconditioning .[ equibi ] there exists such that for and for all , we have and where and are positive constants independent of , , , and . for the proof we refer to lemma 2.1 of .in this section , we will give the formulation of our feti - dp method using the framework introduced in .firstly , we borrow the notations from .let i.e. , the union of and the with , and let then we set we introduce as the fe space of functions defined on the nodal values of .that is , where is the trace of the space on with . in the following ,we use the same notation to denote both fe functions and their vector representations .the local bilinear form in is defined over , and the associated stiffness matrix is given by where denotes the euclidean inner product associated to the vectors with nodal values .we will decompose as , where represents values of at interior nodal points on and at the nodal points on .note that for subdomains which intersect by edges , the nodal values of on are treated as unknowns and belong to . hence , we can rewrite and the matrix as where the block rows and columns correspond to the nodal points of and , respectively .the schur complement of , with respect to the nodal points of , takes the form note that satisfies the energy minimizing property subject to the condition that and on .the bilinear form is symmetric and nonnegative with respect to , see lemma [ equibi ] .the minimizing function of is called the discrete harmonic extension in the sense of , denoted by , and satisfies with on . here is the subspace of of functions which vanish on .we also introduce , the standard discrete harmonic extension in the sense of , which is defined by with on .note that the extensions , and , differ from each other in the sense that at the interior nodes depends only on the nodal values of on while depends on the nodal values of on .the next lemma shows the equivalence between and in the energy form induced by .this equivalence will allow us to take advantages of all the discrete sobolev results known for discrete harmonic extensions .the fundamental idea of the proof comes from , and we still include the proof here for completeness . [ enerequiv ] for any , there exists a constant independent of and , such that here and below , for simplicity of presentation , we omit the subscript and denote by if there is no confusion .the left - hand inequality of follows from the energy minimizing property of the discrete harmonic extension in the sense of , and the fact that on . herewe remain to prove the right - hand inequality .it is easy to verify that since the extensions keep the boundary values .note that we can represent as where is the projection of into in the sense of , i.e. , and satisfies choosing , by cauchy - schwarz inequality , we obtain hence , since the bilinear form is symmetric and nonnegative , by cauchy - schwarz inequality again , we have with arbitrary . since , using the formulations and , we get and which together imply that we proceed the same lines of lemma 2.1 in , and finally obtain where we have used . 
combining and , we have the right - hand side of follows by choosing a sufficiently small .lemma [ equibi ] and lemma [ enerequiv ] together directly give the following corollary .[ coro1 ] for any , there exist positive constants and independent of and , such that let us introduce the product spaces that is , a function means that with , and a function means that with ; see and for the definitions of and , and also for notation .we also define where is given in .secondly , we formulate as a constrained minimization problem . with a similar decomposition as, we can partition as where is the trace of the space on .a function can be written as where is restricted to and is restricted to for all .we consider as the subspace of which contains the continuous functions on .a function is defined to be continuous on in the sense that for all we have we say that , where with and , is continuous on if satisfies the continuity condition .the subspace of of functions which are continuous on is denoted by ; c.f . , definition 3.3 in .note that there is a one - to - one correspondence between vectors in and .next we define the nodal points associated with the corner variables by we now consider the subspace and as the space of functions that are continuous on all the .a function is defined to be continuous at the corners in the sense that for all we have we say that , where with and , is continuous on if satisfies the continuity condition .the subspace of of functions which are continuous on is denoted by ; c.f ., definition 4.1 in .note that .we can represent as , where the subscript refers to the interior degrees of freedom at nodal points ; see , the refers to the primal( ) variables at the corners for all , and the refers to the dual( ) variables at the remaining nodal points on for all .similarly , a vector can be uniquely decomposed as .therefore , we can represent , where and refer to the and of freedom of , respectively .let be the stiffness matrix obtained by restricting the block diagonal matrix from to , where .note that the matrix is no longer block diagonal since there are couplings between primal( ) variables . using the decomposition , we can partition as note that the only coupling across subdomains are through the variables where the matrix is subassembled .once the variables of and sets are eliminated , the schur complement matrix associated with the is obtained of the form note that is defined on the vector space .[ lemschur ] let and be defined in and .for any , it holds where the minimum is taken over with .the proof of the above lemma can be found in lemma 6.22 of and lemma 4.2 of . + next we introduce some notations to define the jump matrix .the vector space can be further decomposed as where the local space includes functions associated with variables at the nodal points of . hence, a vector can be represented as with .moreover , the vector can be partitioned as with and . 
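before the jump matrix is introduced below , the algebra behind the schur complement and the energy - minimizing property of lemma [ lemschur ] can be illustrated by a small numerical sketch : one block of a symmetric positive definite matrix is eliminated and the schur complement energy is checked against the full quadratic form evaluated at the minimizing ( discrete harmonic ) extension . the block names and the toy matrix are purely illustrative ; in the feti - dp setting the eliminated block collects the interior and dual unknowns .

```python
import numpy as np

def schur_complement(K, elim, keep):
    """Schur complement of the symmetric matrix K after eliminating the index set 'elim'."""
    Kee = K[np.ix_(elim, elim)]
    Kek = K[np.ix_(elim, keep)]
    Kkk = K[np.ix_(keep, keep)]
    return Kkk - Kek.T @ np.linalg.solve(Kee, Kek)

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
K = A @ A.T + 6.0 * np.eye(6)              # small SPD stand-in for a local stiffness matrix
elim, keep = [0, 1, 2], [3, 4, 5]
S = schur_complement(K, elim, keep)

# the Schur energy equals the minimum of the full quadratic form over the eliminated
# unknowns, attained by the "discrete harmonic" extension u_e = -Kee^{-1} Kek u_k
uk = rng.standard_normal(3)
ue = -np.linalg.solve(K[np.ix_(elim, elim)], K[np.ix_(elim, keep)] @ uk)
u = np.zeros(6)
u[elim], u[keep] = ue, uk
print(np.isclose(uk @ S @ uk, u @ K @ u))   # True
```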
in order to measure the jump of across the , we introduce the space where is the restriction of to .the jumping matrix is constructed as follows : let and let where satisfies the jumping matrix can be written as where the rectangular matrix consists of columns of attributed to the components of the product space .the entries of consist of values of .it is easy to see that , and has full rank .in addition , if and then .we can reformulate the discrete problem , on the space of , as a minimization problem with constraints given by the continuity requirement : find such that where the minimum is taken over with constraints .the objective function where is defined in and here , where is the load vector associated with the subdomain , and can be represented as .the forcing term is defined by , where the entries are defined as when are the canonical basis functions of . note that and are both symmetric and positive definite ; see also lemma [ lemschur ] . by introducing a set of lagrange multipliers , to enforce the continuity constraints , we obtain the following saddle point formulation of : find and such that this reduces to where once is computed , we can back solve and obtain we will now define a preconditioner for in .let us introduce the diagonal scaling matrix , which maps into itself , for all .each of the diagonal entries of corresponds to one , and it is given by the weighted counting function where is defined in .note that one edge is shared by two subdomains .the union of all these functions provides a partition of unity on all .we also define an important role will be played by the operator which maps into itself .it is easy to check that for and , we have \,\,\text{for all}\;\ ; x\in e_{ijh},\end{aligned}\ ] ] \,\,\text{for all}\;\ ; x\in e_{jih},\end{aligned}\ ] ] where is defined in .hence , preserves jumps in the sense that which implies that is a projection with .define where is the local schur complement , see , restricted to from , i.e. , is obtained from by deleting rows and columns associated with the variables at nodal points of .the feti - dp method is the standard preconditioned conjugate gradient algorithm for solving the preconditioned system with the preconditioner note that is a block diagonal matrix and each block is invertible since and are invertible , and has full rank .the main result of our paper is included in the following theorem , which gives an estimate of the condition number for the preconditioned feti - dp operator .[ mainthm ] for any , there exists a positive constant independent of , , and such that where with .if for any the coefficient in the subdomain satisfies then we have by the general abstract theory for feti - dp method , see and theorem 6.35 of , the proof of the lower and upper bound in follows by checking lemma [ lowerbound ] and lemma [ upperbound ] as below , respectively . for clarity , we will use the following norms for with : and where , and are defined in , and , respectively .[ lowerbound ] for any there exists a such that with and for any , there exists an element such that since has full rank .note that is a projection which maps to itself . by choosing can easily obtain and where we have used .it follows from lemma [ lemschur ] that where the first minimum is taken over with , and the second one over with and .the next lemma gives us a crucial estimate of the norm of .[ upperbound ] for any it holds that where is defined in or / and , and is a positive constant independent of , , and . 
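before turning to the condition - number lemmas , the jump - preserving projection entering the preconditioner can be illustrated on the smallest possible configuration : a single interface node shared by two subdomains . the scaling weights used below ( each side weighted by the other side's coefficient ) are a common rho - scaling choice and only a stand - in for the weighted counting functions of the paper , which are built from the harmonic averages along the edges .

```python
import numpy as np

alpha_i, alpha_j = 100.0, 1.0                           # coefficients on the two sides
B = np.array([[1.0, -1.0]])                             # jump matrix: B w = w_i - w_j
D = np.diag([alpha_j, alpha_i]) / (alpha_i + alpha_j)   # assumed rho-scaling weights
B_D = B @ D                                             # scaled jump operator
P = B_D.T @ B                                           # the jump-preserving operator

w = np.array([3.0, -2.0])                               # a discontinuous interface trace
print(B @ (P @ w), B @ w)                               # equal: P preserves the jump
print(np.allclose(P @ P, P))                            # True: P is a projection
```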
for any ,let be the solution of where the minimum is taken over with and .we can represent as with .we define linear functions to approximate on and with as follows : and let with be defined by and note that ; see . therefore ,representing , we have . using the definition of ,we have define to be equal to at the , and equal to zero at the .let us represent with and where ; see and . using and , it is easy to check that ,\end{aligned}\ ] ] and .\end{aligned}\ ] ] we denote by the space of continuous and piecewise linear functions on the local boundaries .it is obvious that . by the definitions of and , , , and, we have where with the discrete harmonic extension defined in . with ,to prove , we need to show that by corollary [ coro1 ] it remains to prove with ||_{l^2(e_{ij})}^2\\ & = i_1+i_2 . \end{ aligned}\ ] ] first we consider the term of .for the proof is trivial due to the specific choices of parameters . for , it follows from and that since , and . here is a fine edge on the mesh that is obtained by merging and along .we recall that is constant on each and denoted by . by summing up, we finally get \|^2_{l^2(e_{ij})}+\\ & \quad+c \sum_{j\in\e_i}\frac{1}{l_{ij}}\frac{\delta}{h_{ij}}\|\al_{ij}^{1/2}[(w_j)_i-(w_j)_j]\|^2_{l^2(e_{ij})}\\ & \leq c \{d_i(\h_i w_i , \h_i w_i ) + \sum_{j\in\e_i } d_j(\h_j w_j , \h_j w_j)\}. \end { aligned}\ ] ] now we estimate the first term of . here we introduce two semi - norms defined on as follows : for any , as the discrete harmonic extension in the sense of .hence the function is the minimizing function of .note that at the interior nodes depends only on the nodal values of on , i.e. , in the interior of subdomains .this implies that ||_{l^2(\om_i)}^2\\ & = |(v_i)_i|_{b_i}^2\\ & \leq c \overline{\al}_i{ \left(|(v_i)_i|_{h^{1/2}(\pa\om_i)}^2 + \frac{1}{h_i}\|(v_i)_i\|_{l^2(\pa\om_i)}^2\right ) } , \end { aligned}\ ] ] where we have used the second inequality of lemma 4.1 in . we can write as \right ) } , \end { aligned}\ ] ] where is the usual lagrange interpolation operator , and for the finite element cut - off function equals to 1 for all and vanishes on all the other nodes ; see definition 4.2 of . by , we know that putting into , we obtain where \right)}\\ & = i^h { \left(\theta_{e_{ij}}[(w_i-\hat{w}_i)_i - ( w_j-\hat{w}_j)_j]\right ) } + i^h { \left(\theta_{e_{ij}}[(w_j-\hat{w}_j)_j - ( w_j-\hat{w}_j)_i]\right)}\\ & : = a_{ij } + b_{ij}.\end{aligned}\ ] ] as stated in that , since the discrete harmonic extensions from to and are equivalent in the corresponding . here we refer to lemma 4.19 of with the two dimensional case , and have since is a convex combination of the values of at the end points of , we can employ the generalized discrete sobolev inequality , c.f .lemma 4.5 of , and obtain with the same argument as , we get let be the projection on , the restriction of to with on . using the inverse inequality , and the stability of the projection we have \|^2_{l^2(e_{ij } ) } + \|q_i(w_j-\hat{w}_j)_j\|^2_{l^2(e_{ij})}+\\ & \quad+\|(w_j-\hat{w}_j)_j\|^2_{l^2(e_{ij})}+\|(\hat{w}_j)_i - ( \hat{w}_j)_j\|^2_{l^2(e_{ij})}\}\\ & \leq c\{\|(w_j)_j - ( w_j)_i\|^2_{l^2(e_{ij})}+\|(w_j-\hat{w}_j)_j\|^2_{l^2(e_{ij})}+\|(\hat{w}_j)_i - ( \hat{w}_j)_j\|^2_{l^2(e_{ij})}\}\\ & \leq c\{\|(w_j)_j - ( w_j)_i\|^2_{l^2(e_{ij } ) } + \frac{h_j^2}{h_j}|\accentset{\circ}{\h}_j ( w_j)_j|_{h^1(\om_j^h)}^2 + h_j \max_{\partial e_{ij}}((w_j)_i - ( w_j)_j)^2\ } , \end { aligned}\ ] ] since and are linear on and , respectively .let be the average of on . 
by ( 4.42 ) in obtain where we have used ( 4.10 ) in , and the stability of the projection .substituting into , we have where we have used . putting the above inequality into ,we obtain where we used the fact that for all using the continuity of the nodal interpolation operator , and proceeding the same lines of , we have and since . combining the inequalities , , and, we have substituting and into , we get by the summation of the above inequality for all and noting that the number of edges of each subdomain can be bounded independently of , we finally obtain with satisfying .next we consider the special case when the coefficient in the subdomains satisfies for all . where it is well - known that ; c.f . , since for all . using ( 4.44 ) in , we have \|_{l^2(e_{ji})}^2\right)}\\ & \leq c(1+\log\frac{h}{h})^2\max_{i=1}^n \frac{\overline{\al}_i}{\underline{\al}_i}\sum_{j\in\e_i}d_j(\h_j w_j , \h_j w_j ) , \end { aligned}\ ] ] where we have used , and the fact that is practically chosen such that . proceeding with the same lines of ,we can obtain since at the end points of . using the inverse inequality , and the stability of the projection we have \|^2_{l^2(e_{ij } ) } + \|q_i(w_j-\hat{w}_j)_j\|^2_{l^2(e_{ij})}+\\ & \quad+\|(w_j-\hat{w}_j)_j\|^2_{l^2(e_{ij})}+\|(\hat{w}_j)_i - ( \hat{w}_j)_j\|^2_{l^2(e_{ij})}\}\\ & \leq c\{\|(w_j)_j - ( w_j)_i\|^2_{l^2(e_{ij})}+\|(w_j-\hat{w}_j)_j\|^2_{l^2(e_{ij})}+\|(\hat{w}_j)_i - ( \hat{w}_j)_j\|^2_{l^2(e_{ij})}\}\\ & \leq c\{\|(w_j)_j - ( w_j)_i\|^2_{l^2(e_{ij } ) } + h_i |(w_j-\hat{w}_j)_j|^2_{h^{1/2}(e_{ij } ) } + h_i \max_{\partial e_{ij}}((w_j)_i - ( w_j)_j)^2\}\\ & \leq c\{\|(w_j)_j - ( w_j)_i\|^2_{l^2(e_{ij } ) } + h_i ( 1+\log \frac{h_j}{h_j})^2\|\nabla\h_j w_j\|^2_{l^2(\om_j)}+\\ & \quad+\frac{h_i}{h_i}\|(w_j)_j - ( w_j)_i\|^2_{l^2(e_{ij } ) } + h_i(1+\log\frac{h}{h})\|\nabla\h_j w_j\|^2_{l^2(\om_j)}\ } , \end { aligned}\ ] ] where we have used ( 4.43 ) in .hence , \|^2_{l^2(e_{ij } ) } + \frac{h_i}{h_i}(1+\log\frac{h}{h})^2\frac{\overline{\al}_j}{\underline{\al}_j}\|\na\h_j w_j\|^2_{l^2(\om_j)}\}. \end { aligned}\ ] ] this immediately gives with the same arguments as in and , we finally obtain with satisfying .let the domain be a unit square . for the experiments , we partition the domain into square subdomains .the distribution of coefficients in each example is presented by figures .we use the proposed feti - dp method for the discontinuous galerkin formulation ( section [ feti - dp for dg ] ) of the problem , and iterate with the preconditioned conjugate gradient ( pcg ) method .the iteration in each test stops whenever the norm of the residual is reduced by a factor of .the penalty parameter is chosen to be in all the experiments .[ example1 ] in our first example , c.f .left picture of fig .[ figex1 ] , the coefficient denotes a binary medium with on a square shaped inclusion ( shaded region ) lying inside one subdomain at a distance of from both the horizontal and the vertical edges of , and in the rest of the domain .we study the behavior of the preconditioner as and varies , respectively .it follows from tab .[ tabex1 ] that the condition numbers are independent of the values of since the coefficient contrast in the boundary layer is exactly equal to .this is consistent with our theoretical results . adopting different fine mesh sizes , we obtain the log - log plot of the condition numbers in terms of for . 
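the pcg iteration used in these experiments can be sketched in a few lines ; both the operator and the preconditioner are applied only through their action , as in the actual feti - dp solver , and the iteration stops when the residual norm is reduced by a prescribed factor . the jacobi stand - in preconditioner , the tolerance , and the matrix sizes below are illustrative only ; in the experiments the preconditioner is the dirichlet - type operator defined above .

```python
import numpy as np

def pcg(apply_F, apply_Minv, d, rel_tol=1e-6, maxit=500):
    """Preconditioned conjugate gradients for F lam = d; stops when ||r|| <= rel_tol*||r0||."""
    lam = np.zeros_like(d)
    r = d - apply_F(lam)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    r0 = np.linalg.norm(r)
    for k in range(1, maxit + 1):
        Fp = apply_F(p)
        a = rz / (p @ Fp)
        lam += a * p
        r -= a * Fp
        if np.linalg.norm(r) <= rel_tol * r0:
            return lam, k
        z = apply_Minv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return lam, maxit

# toy usage with explicit SPD matrices standing in for F and for the preconditioner
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 40))
F = A @ A.T + 40.0 * np.eye(40)
Minv = np.diag(1.0 / np.diag(F))                    # simple Jacobi stand-in
d = rng.standard_normal(40)
lam, its = pcg(lambda v: F @ v, lambda v: Minv @ v, d, rel_tol=1e-8)
print(its, np.linalg.norm(F @ lam - d) / np.linalg.norm(d))
```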
the left plot of fig . [ figlog ] shows a dependence worse than linear growth , which is expected to become more pronounced as the mesh goes finer . this confirms the estimate of that contains a logarithmic factor besides the linear dependence . [ example2 ] the distribution of coefficient is shown in the right picture of fig . [ figex1 ] , with inclusions in two neighbouring subdomains with coefficient values both larger and smaller than in the boundary layers . similarly to the above example , we investigate the dependence of the condition numbers on the mesh ratio . the right plot of fig . [ figlog ] demonstrates the robustness of the quadratic dependence in the estimate of . c. farhat , m. lesoinne , and k. pierson , _ a scalable dual - primal domain decomposition method _ , numer . linear algebra appl . * 7 * ( 2000 ) , 687 - 714 . preconditioning techniques for large sparse matrix problems in industrial applications , minneapolis , mn , 1999 . c. farhat , m. lesoinne , p. le tallec , k. pierson , and d. rixen , _ feti - dp : a dual - primal unified feti method i : a faster alternative to the two - level feti method _ , int . j. numer . methods eng . * 50 * ( 2001 ) , 1523 - 1544 . b. rivière , _ discontinuous galerkin methods for solving elliptic and parabolic equations : theory and implementations _ , vol . 35 of frontiers in applied mathematics , siam , philadelphia , pa , 2008 .
|
in this paper we consider the second order elliptic partial differential equations with highly varying ( heterogeneous ) coefficients on a two - dimensional region . the problems are discretized by a composite finite element ( fe ) and discontinuous galerkin ( dg ) method . the fine grids are in general nonmatching across the subdomain boundaries , and the subdomain partitioning does not need to resolve the jumps in the coefficient . a feti - dp preconditioner is proposed and analyzed to solve the resulting linear system . numerical results are presented to support our theory .
|
calculating or estimating the lyapunov exponents provides a powerful tool for exploring most fundamental properties of dynamical systems in various physical and mechanical applications .the main advantage of this tool is that it allows one to separate chaos from order . if close trajectories in the bounded phase space diverge exponentially , then the motion is chaotic . the maximum rate of this exponential divergence is characterized by the maximum lyapunov exponent .the quantity is the so - called lyapunov time , representing the characteristic time of predictable dynamics .knowledge of the lyapunov time allows one to judge on the possibility for predicting the motion in chaotic domains of phase space . due to the exponential divergence of chaotic orbits ,the trajectory of any dynamical system can not be accurately predicted on timescales much greater than system s lyapunov time ; this determines the importance of methods for estimating the lyapunov exponents and times in physical and mechanical applications . in this article, we consider the problem of estimating the maximum lyapunov exponent of the motion in a multiplet of interacting resonances for the case when the resonances have comparable strength . for describing nonlinear resonances, we use the perturbed pendulum model ( it was introduced in as a `` universal '' one ) . considering the case of interacting resonances of comparable strengthis inspired by the fact that when one applies the perturbed pendulum model of nonlinear resonance in various applications , one usually finds out that the perturbations are not at all weak ; see examples in .for the model of perturbed nonlinear resonance , we take the following paradigmatic hamiltonian : the first two terms in eq .( [ h ] ) represent the hamiltonian of the unperturbed pendulum , where is the pendulum angle ( the resonance phase angle ) , and is the momentum .the periodic perturbations are given by the last two terms ; is the phase angle of perturbation : , where is the perturbation frequency , and is the initial phase of the perturbation .the quantities , , , are constants .the frequency of the pendulum small - amplitude oscillations is given by an important `` adiabaticity parameter '' , measuring the relative frequency of perturbation , is in the well - known phase portrait `` '' of the non - perturbed pendulum , a single domain ( `` cell '' ) of librations , bounded by the non - perturbed separatrix , is present .if the perturbations are `` switched on '' ( i.e. , ) , a section of the phase space of motion can be constructed .let us construct it at , taking the parameters values as follows : , , , .the resulting section is shown in fig .[ fig_p8 ] ; now not one but three domains of librations , i.e. , three resonances , are present .if the perturbation frequency is relatively large ( as in fig . [ fig_p8 ] , where ) , the separation of resonances in the momentum is large and they almost do not interact . on reducing the frequency of perturbation , the resonances approach each other and appreciable chaotic layers emerge in the vicinity of the separatrices ( see fig. [ fig_p5 ] , where ; the value of is as in the previous section ) . as it is well visible in fig .[ fig_p5 ] , the motion in the vicinity of the separatrices is irregular .on reducing further the frequency of perturbation , the layers merge into a single chaotic layer , due to strong overlap of the resonances ( see fig. [ fig_p2 ] , where ) . 
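the sections shown in figs . [ fig_p8 ] , [ fig_p5 ] , and [ fig_p2 ] can be reproduced with a short numerical sketch : integrate the equations of motion that follow from the perturbed - pendulum hamiltonian ( [ h ] ) and record ( phi mod 2*pi , p ) once per perturbation period . the normalization ( unit `` mass '' , so that the square of the small - oscillation frequency multiplies sin phi directly ) and the parameter values below are illustrative assumptions , not the values used for the figures .

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameters (not those of the figures); lambda = Omega / omega0
omega0, a, b, Omega = 1.0, 0.05, 0.05, 5.0

def rhs(t, y):
    """Equations of motion assumed from H = p^2/2 - omega0^2 cos(phi) + a cos(phi - tau) + b cos(phi + tau)."""
    phi, p = y
    tau = Omega * t
    dp = -omega0**2 * np.sin(phi) + a * np.sin(phi - tau) + b * np.sin(phi + tau)
    return [p, dp]

def stroboscopic_section(phi0, p0, n_periods=300):
    """Sample (phi mod 2*pi, p) once per perturbation period, i.e. at tau = 0 (mod 2*pi)."""
    T = 2.0 * np.pi / Omega
    ts = np.arange(1, n_periods + 1) * T
    sol = solve_ivp(rhs, (0.0, ts[-1]), [phi0, p0], t_eval=ts, rtol=1e-10, atol=1e-12)
    return np.mod(sol.y[0], 2.0 * np.pi), sol.y[1]

phis, ps = stroboscopic_section(3.0, 0.0)          # an orbit started near the separatrix
print(phis[:5], ps[:5])
```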
the chaotic layer theory has applications in various areas of physics , mechanics and , in particular , in celestial mechanics . the key role in this theory is played by the separatrix maps . they represent the motion of a system close to separatrices in a discrete way ( `` stroboscopically '' ) : system s state , set by the `` time '' and `` energy '' variables , is mapped discretely at the moments of passage of the positions of equilibrium by the pendulum describing the resonance . the motion near the separatrices of the perturbed pendulum ( [ h ] ) with asymmetric perturbation ( ) is described by the so - called separatrix algorithmic map : where is given by eq . ( [ lambda ] ) , and , . the melnikov arnold integral ( `` ma - integral '' ) is given by the formula see . the quantity denotes the relative ( with respect to the separatrix value ) pendulum energy : . the variable is the phase angle of perturbation . one iteration of map ( [ sam ] ) corresponds to one half - period of pendulum s libration or one period of its rotation . if ( the symmetric case ) , the separatrix algorithmic map reduces to the well - known ordinary separatrix map first written in this form in ; the expression for is formula ( [ w ] ) differs from that given in by the term , which is small for . however , its contribution is significant when is small , i.e. , in the case of adiabatic chaos . an equivalent form of eqs . ( [ sm ] ) , used , e.g. , in , is where , ; and in , the theory of separatrix maps was shown to be legitimate for using to describe the motion near separatrices of perturbed nonlinear resonances in the full range of , including its low values . the half - width of the main chaotic layer of the separatrix map ( [ sm1 ] ) in the case of the least perturbed border of the layer was computed as a function of in . the observed dependence follows a piecewise linear law with a transition point at . this transition takes place not only in what concerns the width of the layer , but also in other characteristics of the motion , in particular , in the maximum lyapunov exponent . the clear - cut sharp transition at this point manifests a qualitative distinction between two types of dynamics , `` adiabatic '' ( `` slow '' ) and `` non - adiabatic '' ( `` fast '' ) chaos . the parameter measures the distance between the perturbing and guiding resonances in the units of one quarter of the width of the guiding resonance . therefore , can be regarded as a kind of the resonance overlap parameter . it is important to note that the border between the cases of adiabatic chaos and non - adiabatic chaos does not separate the cases of resonance overlap and resonance non - overlap : the border between the latter cases lies much higher in ; e.g. , in the phase space of the standard map the integer resonances start to overlap , on decreasing , at , i.e. , already at . let us consider a nonlinear resonance in the perturbed pendulum model with several harmonic perturbations ( i.e.
, in comparison with hamiltonian ( [ h ] ) , the number of equally - spaced perturbing harmonics may be arbitrary ) : thus the number of resonances in the multiplet is equal to .let us build a separatrix map for hamiltonian ( [ hmulti ] ) with the symmetric perturbations ( ) .if the perturbations are asymmetric , the problem is more complicated , because the separatrix map becomes algorithmic , as in the triplet case .setting and calculating the increment of the energy variable ( analogously to the triplet case , considered in ) gives the result , whereas the increment of the time variable remains the same as in the triplet case .thus the separatrix map ( [ sm ] ) is generalized to a `` multiplet separatrix map '' , given by where where and .the domain of validity of map ( [ msm ] ) ( in describing the near - separatrix motion ) is expected to be usually much smaller than that of map ( [ sm ] ) , because the natural condition of validity generalizes here to the condition .thus , if there is a lot of perturbing harmonics , the maximum allowed amplitudes in the multiplet case must be usually much smaller than the maximum allowed amplitude in the triplet case , at any given value of . also note that in the case of non - adiabatic perturbation ( ) the multiplet map ( [ msm ] ) can be usually replaced by the classical map ( [ sm ] ) for the `` central '' triplet ( with ) , because at high values of the coefficients at are exponentially small with , with respect to .the maximum lyapunov exponent is defined by the formula where is the distance ( in the phase space of motion ) between two nearby initial conditions for two trajectories at the initial instant of time , and is the distance between the evolved initial conditions at time ( e.g. , ) .the art of calculation of the lyapunov exponents ( and , in particular , the maximum lyapunov exponent ) on computers has more than a thirty - year history and during this time it has become an extensive part of applied mathematics ; see reviews in .modern numerical methods for computation of the lyapunov exponents are effective and precise .approaches for analytical estimating the lyapunov exponents were started to be developed relatively recently , beginning with those providing precision by the order of magnitude , and later on providing precision comparable to the numerical methods , though in limited applications .morbidelli and froeschl and nesvorn and morbidelli suggested to estimate the lyapunov time by taking it equal , by the order of magnitude , to the libration / circulation period of the resonant angle , or , in practice , to the period of small - amplitude oscillations on resonance ( i.e. , ) .this estimate has a rather limited domain of validity ; in fact , as we shall see in section [ sec_tvne ] , solely at , and the ratio in both limits and ; besides , rather strongly depends on other parameters , such as the perturbation amplitude .a different approach , based on derivation of a discrete map for a triplet , was proposed by holman and murray .this is also a one - parameter approach , but using an effective resonance overlap parameter instead of or .the strength of perturbation is ignored , only frequencies are taken into account . 
in some way is analogous to the stochasticity parameter of the standard map ( whose theory is given in ) , though was introduced for a triplet .holman and murray derived heuristic formulas for estimating the maximum lyapunov exponent in the case of moderate resonance overlap , when , and in the case of strong overlap ( the adiabatic case ) , when . in the first case ,the maximum lyapunov exponent was estimated in as ( the frequency of small oscillations on the resonance ) , and in the second case as ( the frequency of external perturbation ) .murray and holman refined somewhat the formula in the case of strong overlap ( ) by introducing a logarithmic dependence on , namely , the function essentially the same as for the standard map ( see subsection [ lemsm ] ) , though derived for a triplet .for the whole range of resonance overlap , , they proposed the following interpolating formula : let ( this value belongs to the case of moderate overlap ) , then , taking , one has . for the standard map , the actual value of at and is ( as we shall see in subsection [ lemsm ] , eq .( [ lstmla1 ] ) ) , and for the triplet it is smaller .thus eq .( [ mh97 ] ) can be used for estimates solely by the order of magnitude , because the perturbation strength , asymmetry of perturbation , and number of resonances in multiplets are ignored in it .it can not be used in the case of weak interaction of resonances , when they do not overlap . in ,an approach for estimating the maximum lyapunov exponent of the chaotic motion in the vicinity of separatrices of a perturbed nonlinear resonance was proposed in the framework of the separatrix map theory .we follow the approach , representing the maximum lyapunov exponent of the motion in the main chaotic layer of system ( [ h ] ) as the ratio of the maximum lyapunov exponent of its separatrix map and the average period of rotation ( or , equivalently , the average half - period of libration ) of the resonance phase inside the layer . for convenience ,we introduce the non - dimensional quantity .then the general expression for is the quantity , by definition , is the lyapunov time . in ,the following four generic kinds of interacting resonances were considered : fast - chaotic resonance triplet , fast - chaotic resonance doublet , slow - chaotic resonance triplet , and slow - chaotic resonance doublet . 
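before detailing these four cases , note that the ratio recipe itself is easy to try numerically . the sketch below iterates the separatrix map in its classical chirikov form w' = w + W sin(tau) , tau' = tau + lambda ln(32/|w'|) ( mod 2*pi ) , which is taken here as an assumption since it may differ in sign and detail from the variants ( [ sm ] ) and ( [ sm1 ] ) written above . it accumulates both the tangent - map lyapunov exponent per iteration and the mean increment of the time variable ; their ratio is the exponent per unit of perturbation phase , which the formulas presented next convert into a lyapunov time using the perturbation period .

```python
import numpy as np

def separatrix_map_lyapunov(lam, W, n_iter=100000, w0=1e-3, tau0=1.0):
    """Maximum Lyapunov exponent (per iteration) of the separatrix map
    w' = w + W*sin(tau), tau' = tau + lam*log(32/|w'|), computed from the tangent map,
    together with the mean per-iteration increment of the time variable tau."""
    w, tau = w0, tau0
    v = np.array([1.0, 0.0])                     # tangent vector
    lyap_sum, dtau_sum = 0.0, 0.0
    for _ in range(n_iter):
        w_new = w + W * np.sin(tau)
        dtau = lam * np.log(32.0 / abs(w_new))
        # Jacobian of the map with respect to (w, tau)
        J = np.array([[1.0, W * np.cos(tau)],
                      [-lam / w_new, 1.0 - lam * W * np.cos(tau) / w_new]])
        v = J @ v
        nv = np.linalg.norm(v)
        lyap_sum += np.log(nv)
        v /= nv
        dtau_sum += dtau
        w, tau = w_new, (tau + dtau) % (2.0 * np.pi)
    return lyap_sum / n_iter, dtau_sum / n_iter

lam_map, mean_dtau = separatrix_map_lyapunov(lam=0.5, W=0.1)
print(lam_map, lam_map / mean_dtau)              # per-iteration exponent and per-phase ratio
```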
herewe present formulas for the lyapunov time for these four cases , and then proceed to considering a fifth generic kind , that of infinitely many interacting resonances .assume that and in eq .( [ h ] ) .then one has a symmetric triplet of interacting resonances , and chaos is non - adiabatic .following , we take the dependence of the maximum lyapunov exponent of the separatrix map ( [ sm1 ] ) in the form where is chirikov s constant .the average increment of ( proportional to the average libration half - period , or rotation period ) in the chaotic layer is : is the base of natural logarithms .then , the lyapunov time for the fast - chaotic resonance triplet is given by where is the period of perturbation .in the completely asymmetric case , when or , the maximum lyapunov exponent can be found by averaging the contributions of all separate components of the chaotic layer .the averaged ( over the whole layer ) value of the maximum lyapunov exponent is the sum of weighted contributions of the layer components corresponding to librations , direct rotations and reverse rotations of the model pendulum .the weights are directly proportional to the times that the trajectory spends in the components , and , due to the supposed approximate ergodicity , to the relative measures of the components in the phase space .then , the formula for the lyapunov time for the fast - chaotic resonance doublet is given by where , and , , and are given by formulas ( [ w ] ) , ( [ lsx ] ) , and ( [ tsx ] ) . if , the diffusion across the chaotic layer is slow , and on a short time interval the trajectory of the separatrix map ( [ sm1 ] ) follows close to some current `` guiding '' curve , and this allows one to estimate characteristics of the chaotic layer in a straightforward manner ; in particular , the lyapunov time for this resonance type is given by where ( eq . ( [ c ] ) ) .this formula has specific limits of applicability , namely , the parameter ( approximately equal to in the adiabatic case ) should not be close to . at has , hence the approximate formula for the lyapunov time is in this case , the separatrix algorithmic map ( [ sam ] ) degenerates to the ordinary separatrix map ( [ sm ] ) with , i.e. , mathematically this case is equivalent to the case of slow - chaotic resonance triplet , but with a different ( halved ) value of .the lyapunov time is then given by provided that the parameter is not close to .let us assume that the number of resonances in a resonance multiplet is greater than 3 . in applications , this number can be very large ; then , the multiplet is called a `` supermultiplet '' . if chaos is non - adiabatic ( ) , then one can apply , as an approximation , the formulas given in subsections [ fcrt ] and [ fcrd ] for the triplet and doublet ( depending on the perturbation asymmetry ) cases , because the influence of the `` far away '' resonances is exponentially small with .however , if chaos is adiabatic ( ) , the triplet or doublet approximations do not work and one has to develop a different approach .let us consider a limiting case , namely , the case of infinitely many interacting equally - sized equally - spaced resonances . the standard map describes the motion in an infinite multiplet of equally - sized equally - spaced resonances , as it is clear from its hamiltonian : where .the variables , of map ( [ stm2 ] ) correspond to the variables , of the continuous system ( [ h_stm2 ] ) taken stroboscopically at time moduli ( see , e.g. 
, ) .for the standard map at , according to .the lower curve shows the function .the middle curve shows the glued functions ( [ lstmla ] ) , and the upper curve is given by eq .( [ lstmla1 ] ) , where .,title="fig:",scaledwidth=70.0% ] + the asymptotic formula for the maximum lyapunov exponent of the standard map at was derived in : .rather precise fitting formulas were obtained in for the dependence at and : where .the functions ( [ lstmla ] ) are depicted in fig .[ msm_fig2 ] . in this plot , they are glued at ; this trick apparently results in underestimating the actual values of in the interval .arranging a better fit for at this interval , one arrives at the formulas which describe the behavior of at ( corresponding to ) much more accurately .thus the lyapunov time in the `` infinitet '' case is given by where dependence ( dots ) for the standard map at .the lower solid curve is given by the separatrix map theory without any correction to the ma - integral ; the upper solid curve is given by the separatrix map theory with the chirikov zero - order correction to the ma - integral ; the middle solid curve is given by the separatrix map theory with the chirikov gelfreich correction to the ma - integral.,title="fig:",scaledwidth=70.0% ] + a well - known important constant of the standard map dynamics is the critical value of the parameter , namely , ; see , e. g. , .it is obvious from figs .[ msm_fig2 ] and [ msm_fig1 ] , that at , i.e. , at below its approximate critical value , the dependence , if smoothed , is close to linear .this is explainable in the framework of the separatrix map theory .indeed , one can find the maximum lyapunov exponent here using formula ( [ tlft ] ) for the fast - chaotic resonance triplet , because at one has and therefore the perturbing resonances non - neighboring the guiding one can be ignored in the first approximation ( their contribution is considered below ) .thus ( eq . ( [ lceh ] ) ) , where is given by eq .( [ lsx ] ) and is given by eq .( [ tsx ] ) . as follows from eq .( [ lsx ] ) , is practically constant at . on the other hand , is directly proportional to at small enough values of , as follows from eq .( [ tsx ] ) ( or see eq .( 6.18 ) in ) . therefore , at small enough values of .however , this linear asymptotic behavior has a slope somewhat less than the average one adopted in approximation ( [ lstmla1 ] ) , where . indeed , a careful inspection of fig .[ msm_fig1 ] indicates that the slope of the smoothed dependence decreases with .let us derive a formula for at .this will be a formula for the upper envelope of the observed `` ragged '' dependence ( which has sharp local minima due to marginal resonances at the borders of the chaotic layer ) , because our theory ( described in section [ fcrt ] ) is valid in the absence of marginal resonances .( the role of marginal resonance in defining the width of the chaotic layer is described in . )we proceed from the basic relation ( [ lceh ] ) , where , is given by eq .( [ lsx ] ) , and is given by eq .( [ tsx ] ) . 
however , we modify the expression for , which enters in eq .( [ tsx ] ) , changing to , where is a correction factor , introduced by chirikov to account for specific properties of the standard map .thus the formula for attains the form expressing through , one arrives at a formula , derived in for the average half - period of librations ( or , the average period of rotations ) in the chaotic layer of the integer resonance of the standard map ; this formula is as follows : the introduction of the correction factor is necessary for the separatrix - map correct description of the chaotic layer of the integer resonance of the standard map .chirikov s numerical - experimental estimate of the correction factor gave .later on , this factor was found out to be expressed through the so - called lazutkin splitting constant : , where the lazutkin constant . at non - zero ,the stable and unstable separatrices of the integer resonance of the standard map intersect transversally ; lazutkin obtained an asymptotic ( at ) formula for the separatrix splitting angle .the splitting angle at the first intersection of the separatrices with the line is given by where and the first three coefficients are given by the formulas where .taking into account the asymptotic expansion ( [ alpha ] ) , one arrives at where . combining eqs .( [ lceh ] ) , ( [ lsx ] ) , ( [ tsxst ] ) , and ( [ rsxst ] ) , we build a theoretical curve ; it is the middle solid one in fig .[ msm_fig1 ] . for comparison , the lower solid curve in this figureis given by the separatrix map theory without any correction to the ma - integral ( i.e. , ) , and the upper solid curve is given by the separatrix map theory with the chirikov zero - order ( in ) correction to the ma - integral ( i.e. , ) .one can see that the middle curve , built on the basis of the most refined theory , provides the best approximation for the upper envelope of the numerical - experimental relationship , as expected .in this section we verify our theoretical results versus numerical simulations . for computing the maximum lyapunov exponent ( and , generally , the lyapunov spectra ) we use the algorithms and software developed in on the basis of the hqrb numerical method by von bremen et al . for calculation of the lyapunov spectra .the hqrb method is based on the qr decomposition of the tangent map matrix using the householder transformation . for computing the trajectories we use the integrator by hairer et al . 
, realizing an explicit 8th order runge kutta method ( with the step size control ) due to dormand and prince .let us consider first of all a small perturbation amplitude , namely , we set in eq .( [ hmulti ] ) .the corresponding dependences of the maximum lyapunov exponent , normalized by , are shown in fig .[ msm_fig3a ] for the triplet case ( in eq .( [ hmulti ] ) ) and for the septet case ( in eq .( [ hmulti ] ) ) .the dots and triangles denote the numerical - experimental data obtained for the triplet and septet , respectively .the thin curves show the numerical - experimental data obtained by iterations of the multiplet separatrix map ( [ msm ] ) , solid and dashed for the triplet and septet , respectively .the thick solid curve represents the separatrix map theory ( given by eqs .( [ tlft ] ) and ( [ tlst ] ) ) for the triplet .one can see that the theory is impressively good for the triplet .no theory is yet available for the septet ; however , the multiplet separatrix map data and the results of direct numerical integrations are in obviously good agreement . at ,i.e. , in the domain of non - adiabatic chaos , the theory for the fast - chaotic triplet works good for both triplet and septet , because the perturbing role of the harmonics farther than the neighbors of the guiding resonance is negligible .dependences of the maximum lyapunov exponent , normalized by , in the triplet and septet cases ; . the dots and trianglesshow the numerical - experimental data obtained for the triplet and septet , respectively , by means of numerical integrations of the equations of motion .the thin solid and dashed curves show the numerical - experimental data obtained by iterations of the multiplet separatrix map for the triplet and septet , respectively .the thick solid curve represents the separatrix map theory ( given by eqs .( [ tlft ] ) and ( [ tlst ] ) ) for the triplet . , title="fig:",scaledwidth=70.0% ]+ now let us consider the ultimately large perturbation amplitude , namely , ; in other words , let us consider equally - sized equally - spaced multiplets .we call the amplitude ultimately large , because the case of can be reduced to the case of by changing the choice of the guiding resonance . the standard map theory , given by formulas ( [ lstmla1 ] ) and ( [ tlinf ] ) , can be presumably applied for estimating the maximum lyapunov exponents in multiplets of equally - sized equally - spaced resonances , when the number of resonances is large , assuming that the limiting case describes the situation at . dependences of the maximum lyapunov exponent , normalized by , for multiplets of equally - sized equally - spaced resonances .the dots show numerical - experimental data , and the curves show theoretical functions . the upper solid curve , given by eqs . ( [ lstmla1 ] ) and ( [ kla ] ) , represents the standard map theory for the infinitet ; and the lower solid curve , given by eqs . ( [ tlfd ] ) and ( [ tlsd ] ), represents the separatrix map theory for the doublet.,title="fig:",scaledwidth=70.0% ] + the dependences , both theoretical and numerical - experimental , of the maximum lyapunov exponent ( normalized by ) for several multiplets of equally - sized equally - spaced resonances are shown in fig .[ msm_fig3 ] .one can see that the dependence for the septet occupies an intermediate ( in the vertical axis ) position between the dependence for the doublet and the dependence for the `` infinitet '' , i.e. 
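the tangent - map / qr procedure just described is simple to sketch for a two - dimensional map ; the few lines below apply it to the chirikov standard map and compare the largest exponent with the asymptotic estimate ln(K/2) . this uses a plain qr re - orthonormalization ( numpy s qr rather than the householder - based hqrb code referred to in the text ) , and the iteration counts and initial conditions are illustrative .

```python
import numpy as np

def standard_map_lyapunov_spectrum(K, n_iter=50000, x0=0.5, y0=0.5):
    """Lyapunov spectrum of the standard map y' = y + K*sin(x), x' = x + y' (mod 2*pi),
    via repeated QR re-orthonormalization of the accumulated tangent map."""
    x, y = x0, y0
    Q = np.eye(2)
    sums = np.zeros(2)
    for _ in range(n_iter):
        c = K * np.cos(x)
        J = np.array([[1.0 + c, 1.0],          # d(x', y')/d(x, y) at the current point
                      [c,       1.0]])
        y = y + K * np.sin(x)
        x = (x + y) % (2.0 * np.pi)
        Q, R = np.linalg.qr(J @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_iter

for K in (2.0, 5.0, 10.0):
    spec = standard_map_lyapunov_spectrum(K)
    print(K, spec, np.log(K / 2.0))            # the two exponents sum to ~0 (area preservation);
                                               # the largest approaches ln(K/2) only for large K
```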
, for the standard map .the numerical data for the doublet agrees well with the separatrix map theory presented in subsections [ fcrd ] and [ scrd ] , notwithstanding the large perturbation amplitude . comparing the heights of the curves maxima in fig .[ msm_fig3a ] ( where ) and fig .[ msm_fig3 ] ( where ) , one can see that depends strongly on the perturbation amplitude , the difference being obvious ( about three times ) .this emphasizes the fact that taking into account solely the frequencies and is insufficient for analytical estimates of : the perturbation strength must be also taken into account whenever more or less precise estimates of are sought for . inspecting the plots in fig .[ msm_fig3 ] allows one to qualitatively estimate the relative range of the lyapunov exponent values between the doublet , triplet and infinitet cases .let us designate the lyapunov exponents for these three cases as , , and , respectively .one can see that at , i.e. , where the values of are maximal , the ratios and are of the order of 2 . at the maxima of the curves , they are equal to 2.2 and 1.7 , respectively .it is also of interest how do the ratios and behave in the limits and , though tends to zero in the both limits .consider first the limit . from eqs .( [ tlfd ] ) and ( [ tlft ] ) the following asymptotic relations are easily derived for the fast - chaotic doublet and triplet cases , respectively : and thus asymptotically . as pointed out in subsection [ lemsm ] , it is expected that at ; therefore , as well .note that the asymptotic behavior of , given by eq .( [ t3tp ] ) , is somewhat different from the average behavior of ( on the interval ) , expressed in eq .( [ tlinf ] ) . indeed , according to eq .( [ tlinf ] ) , at one has ; i.e. , the coefficient at is 1.65 times less .the difference is explained by the fact that the linear - looking smoothed dependence for the standard map at actually has the slope that weakly decreases with , as also pointed out in subsection [ lemsm ] .thus , as followed from eq .( [ l2l3 ] ) , at one expects ; in other words , the relative range of the lyapunov exponent values , if is large , is rather narrow : the lyapunov exponent in the infinitet is only about 70% greater than that in the doublet .the range does not seem to be so narrow at all in the opposite ( adiabatic ) limit .indeed , from eqs .( [ tlsd ] ) , ( [ tlst ] ) and ( [ tlinf ] ) one finds in this limit that ( whereas ) .however note that this fact is not of much importance for applications , because at . concluding this section ,let us discuss the effect of the perturbation strength in more detail .for the perturbation amplitudes and above the standard poincar melnikov method for calculating the effects associated with the separatrix splitting generally requires corrections .what if the perturbation is ultimately large , i.e. , ? in the doublet case ,the perturbation is completely asymmetric ( ) and for this reason , according to , the correction is zero . for the triplet of arbitrary asymmetry , the correction factor to the separatrix map parameter for system ( [ h ] ) , according to the sim hypothetical formula , is , where .( the value of may be either real or imaginary , depending on the signs of and .the value of is corrected by means of multiplying it by ; i.e. , the product is used instead of . 
) in the symmetric triplet case , and the correction factor is .thus the correction factor in the case of three equally - sized equally - spaced resonances is significantly smaller than that in the case of infinitely many equally - sized equally - spaced resonances , where ( see subsection [ lemsm ] ) .dependences of the maximum lyapunov exponent , normalized by , for the cases of equally - sized doublet and equally - sized equally - spaced triplet .the dots show numerical - experimental data , and the curves show theoretical functions .the lower solid curve represents the separatrix map theory ( given by eqs .( [ tlfd ] ) and ( [ tlsd ] ) ) for the doublet .the upper solid curve represents the separatrix map theory for the triplet ( given by eqs .( [ tlft ] ) and ( [ tlst ] ) ) with the sim correction ; the middle thin dashed curve is the same but without the sim correction . ,title="fig:",scaledwidth=70.0% ] + fig .[ msm_fig4 ] shows the dependences of the maximum lyapunov exponent , normalized by , for the cases of equally - sized doublet and equally - sized equally - spaced triplet .the dots show numerical - experimental data , and the solid curves show theoretical functions .the lower solid curve represents the separatrix map theory ( given by eqs .( [ tlfd ] ) and ( [ tlsd ] ) ) for the doublet .the upper solid curve represents the separatrix map theory for the triplet ( given by eqs .( [ tlft ] ) and ( [ tlst ] ) ) with the sim correction ; the middle thin dashed curve is the same but without the sim correction .one can see that taking into account the sim correction provides a much better fit to the numerical data , as expected .note that the resonances in the infinitet ( the case of the standard map ) start to overlap , on decreasing , at , i.e. , at ( see section [ sxmap ] ) .therefore , the ranges in in figs .[ msm_fig3a][msm_fig4 ] almost completely correspond to the overlap condition , except at , i.e. , at .the basic conclusion following from our numerical experiments , described in this section , is that at any given value of the adiabaticity parameter ( which controls the degree of interaction / overlap of resonances in the resonance multiplet ) the value of the maximum lyapunov exponent in the multiplet of equally - spaced equally - sized resonances is minimal in the doublet case and maximal in the infinitet case .this is consistent with the separatrix map and standard map theories : as it is clear from fig .[ msm_fig3 ] , the theoretical curves for the doublet and infinitet serve as the lower and upper bounds for all our numerical data on the lyapunov exponents in the multiplets .resonances with planets are ubiquitous in the motion of asteroids ( see , e.g. , ) ; of particular interest are the so - called mean motion resonances with jupiter , i.e. , the resonances between orbital periods of an asteroid and jupiter ( note that jupiter is the largest planet in the solar system and is closest , among the giant planets , to the main asteroid belt ; `` mean motion '' is the mean orbital frequency ) .the hamiltonian of the motion of an asteroid with negligible mass in the gravitational field of the sun and jupiter , in the plane of jupiter s orbit , in the vicinity of a high - order mean motion resonance with jupiter , can be approximated in the perturbed pendulum model as where , , .the leading resonant angle , where and are the mean longitudes of an asteroid and jupiter .( definitions of the orbital elements see , e.g. , in . 
)the action - like variable is canonically conjugated to .the quantity is assumed to be a constant parameter ; and are asteroid s semimajor axis and eccentricity ; , i.e. , is minus the longitude of asteroid s perihelion ; its time derivative is assumed to be constant .the units are chosen in such a way that the gravitational constant , the total mass ( sun plus jupiter ) , and jupiter s semimajor axis are all equal to 1 ; jupiter s mass in the total mass units is ; .jupiter s mean motion ; i.e. , the adopted time unit is equal to part of jupiter s orbital period .the integer non - negative numbers and define the resonance : the ratio is equal to the ratio of mean motions of an asteroid and jupiter in the exact resonance ; is the resonance order . according to eq .( [ hm ] ) , the mean motion resonance splits in a cluster of subresonances . for the coefficients of the resonant termsone has where , .jupiter s current eccentricity is .the frequency of small - amplitude oscillations on subresonance is and the perturbation frequency is cf . . as an example we take asteroid 522 helga , which is famous to exhibit `` stable chaos '' : i.e. , its computed lyapunov time is rather small ( yr ) , but numerical experiments do not reveal any gross changes of its orbit on cosmogonic time scales .helga is known to be in the 12/7 mean motion resonance with jupiter .we take necessary data on , , and the perihelion frequency for this asteroid in the `` numb.syn '' catalogue of the astdys web service .the value of is defined by the value of ; thus one finds yr . to apply the separatrix map theory, one should identify the guiding subresonance in the multiplet .as such , it is natural to choose the subresonance that has the maximum amplitude ( i.e. , the maximum value of ) .we find that the guiding subresonance in the sextet is the third one ( ) , consequently the perturbing neighbors have numbers and 3 .thus we find the separatrix map parameters : , .therefore , we model the multiplet by a fast - chaotic triplet .the relative strength of perturbation is rather strong : . applying eq .( [ tlft ] ) , one has yr .on the other hand , the standard map theory gives an estimate for the lyapunov time from below .according to eq .( [ kla ] ) , ; thus one has for 522 helga : , and , as follows from eq .( [ tlinf ] ) , yr .values of the lyapunov time , computed in integrations in the full ( accounting for perturbations from all major planets ) problem are 6900 yr and 6860 yr ( astdys ) .obviously , the standard map theory is closer to these `` actual '' values .this is because the number of resonances in the multiplet is large and the relative strength of perturbation is not far from 1 , i.e. , to the value characteristic for the standard map hamiltonian .in this article , the problem of estimating the maximum lyapunov exponents of the motion in a multiplet of interacting resonances has been considered for the case when the resonances have comparable strength .the corresponding theoretical approaches have been considered for the multiplets of two , three , and infinitely many interacting resonances ( i.e. , doublets , triplets , and `` infinitets '' ) .the analysis has been based on the theory of separatrix and standard maps .we have introduced a `` multiplet separatrix map '' , valid for description of the motion in the resonance multiplet under certain conditions .the separatrix map approach is suitable for the multiplet of any number of resonances , when their interaction is weak or moderate ( i.e. 
, the separation of resonances with respect to their sizes is large enough ) , as well as for the multiplet of two or three resonances ( doublet or triplet ) , when the degree of interaction is arbitrary , including the case of strong overlap .the standard map approach is suitable for the multiplet of a large number of equally - sized equally - spaced resonances with arbitrary degree of interaction / overlap .we have presented explicit analytical formulas for the lyapunov times for the following five generic resonance multiplet types : fast - chaotic resonance triplet , fast - chaotic resonance doublet , slow - chaotic resonance triplet , slow - chaotic resonance doublet , and , for both cases of fast and slow chaos , infinitet of equally - sized equally - spaced resonances .good performance of the presented analytical formulas in the domains of their validity has been demonstrated by means of comparison with direct numerical integrations of the original hamiltonian systems . in numerical experimentswe have shown that , at any given value of the adiabaticity parameter , the value of the maximum lyapunov exponent in the multiplet of equally - spaced equally - sized resonances is minimal in the doublet case and maximal in the infinitet case .this is consistent with the developed theory .an example of application of the developed theory has been given , concerning asteroidal dynamics in high - order mean motion resonances with jupiter .the author is thankful to the referee for useful remarks .this work was supported in part by the programmes of fundamental research of the russian academy of sciences `` fundamental problems in nonlinear dynamics '' and `` fundamental problems of the solar system studies and exploration '' .the computations were partially carried out at the st .petersburg branch of the joint supercomputer centre of the russian academy of sciences .shevchenko , in : a.milani , g.b.valsecchi , and d.vokrouhlick ( eds . ) , near earth objects , our celestial neighbors : opportunity and risk ( proc . iau symp .236 ) , cambridge univ . press , cambridge , 2007 , pp .
|
the problem of estimating the maximum lyapunov exponents of the motion in a multiplet of interacting nonlinear resonances is considered for the case when the resonances have comparable strength . the corresponding theoretical approaches are considered for the multiplets of two , three , and infinitely many resonances ( i.e. , doublets , triplets , and `` infinitets '' ) . the analysis is based on the theory of separatrix and standard maps . a `` multiplet separatrix map '' is introduced , valid for description of the motion in the resonance multiplet under certain conditions . in numerical experiments it is shown that , at any given value of the adiabaticity parameter ( which controls the degree of interaction / overlap of resonances in the multiplet ) , the value of the maximum lyapunov exponent in the multiplet of equally - spaced equally - sized resonances is minimal in the doublet case and maximal in the infinitet case . this is consistent with the developed theory . keywords : hamiltonian dynamics ; chaotic dynamics ; resonances ; lyapunov exponents ; separatrix map ; standard map
|
stochastic processes have been used to model a great variety of systems in disciplines as disparate as physics , genomics , finance , climatology and social sciences .one possible classification of stochastic processes takes into account the properties of their conditional probability densities . in this respect ,markov processes play a central role in the modeling of natural phenomena . in the framework of discrete timestochastic processes , a process is said to be a markov process if the conditional probability density depends only on the last value at and not on the previous values at , at , etc . more generally , the transition probability of any markov process fulfills the chapman - kolmogorov equation .it is worth noting that a markov process is fully determined by the knowledge of the probability density function ( pdf ) of the process and the transition probability .when the markovian process is continuous both in space and time , the time evolution of the pdf is described by a fokker - planck ( fp ) equation .such level of simplicity is rather unique among stochastic processes .in fact , a non - markovian process is characterized by an infinite hierarchy of transition probabilities . in this case , the time evolution of the pdf is described by a master equation rather than a simpler fp equation .another classification of stochastic processes considers the nature of correlation of the random variable . under this classification ,random variables are divided in short - range and long - range correlated variables .short - range correlated variables are characterized by a finite mean of time - scales of the process whereas a similar mean time - scale does not exist for long - range correlated variables .an equivalent definition can be given by considering the finiteness or infiniteness of the integral of the autocorrelation function of the random process .in the presence of long - range correlation , the time integral of the process is a superdiffusive stochastic process showing where and is a constant .superdiffusive stochastic processes have been observed in several physical systems .a classical example is richardson s observation that the relative separation of two particles moving in a turbulent fluid at time follows the relation .other examples include anomalous kinetic in chaotic dynamics due to flights and trapping , dynamics of aggregate of amphiphilic molecules , dynamics of a tracer in a two - dimensional rotating flow , non - coding regions of complete genomes and volatility in financial markets .several stationary markovian processes are short - range correlated . in fact , the paradigmatic markovian process is the ornstein uhlembeck ( ou ) one , whose autocorrelation function is the exponential function where is the time - scale of the process .although in the ou process there is one single time - scale , a general markovian stationary process can be multi - scale , i.e. it may admit either a discrete or a continuum set of time - scales .in the last case , when the largest time - scale is removed to infinity the process can even be long - range correlated . 
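as a minimal illustration of the chapman - kolmogorov property recalled above , the sketch below checks it numerically for a discrete - time , discrete - state markov chain , where the multi - step transition probabilities are simply products of the one - step transition matrix ; the three - state matrix used here is arbitrary and serves only as an example , not as one of the processes studied in this paper .

```python
import numpy as np

# Transition matrix of an illustrative three-state Markov chain (rows sum to 1).
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

# Chapman-Kolmogorov in discrete time: the (t+s)-step transition matrix is the
# product of the t-step and s-step transition matrices.
P2_direct = np.linalg.matrix_power(P, 2)
P2_ck = P @ P
assert np.allclose(P2_direct, P2_ck)

# Stationary pdf: left eigenvector of P with eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi /= pi.sum()
print(pi)
```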
the paradigmatic markovian process with power - law autocorrelation function is given by the family of processes considered in ref .these are stationary markovian power - law correlated processes that were introduced in the context of diffusion in optical lattices and semiclassically describe the motion of atoms in a one - dimensional optical lattice formed by two counterpropagating laser beams perpendicularly polarized . for a certain choice of the relevant parameters the processes become long range correlated . the existence of a power - law decaying autocorrelation function in the processes of ref . is intimately related to the existence of power - law tails in the stationary pdf .this is easily understood by considering that the processes of ref . describe particles moving in a confining smoluchowski potential which asymptotically grows like . if one compares such slow growth with the one associated with the ou process , whose smoluchowski potential grows like , it is easy to recognize that in the case of ref . ( i ) a particle can reach positions far away from the center of the potential because it is subject to a relatively weaker force and ( ii ) if a particle reaches a position , then it is not suddenly recalled towards the center of the potential and therefore it can explore for relatively long times the regions around .loosely speaking , the time - series of the processes of ref . can show persistencies and clustering of extreme events .such processes perfectly fit the features of the model proposed in ref . , where long - range dependencies are shown to explain the clustering of extreme events .however , one could have in principle slowly decaying autocorrelation functions without necessarily observing the occurrence of extreme events .one such example is given by the fractional brownian motion ( fbm ) , which is a stochastic process where the autocorrelation function decays like a power - law and the stationary pdf is gaussian .in this paper , in the context of markovian processes , we give explicit examples of power - law correlated stationary processes where the stationary pdf shows tails which are gaussian or exponential .we will introduce such processes starting from appropriate coordinate transformations of an additive process introduced in ref . .the paper is organized as follows . in section [ meto ] we review the eigenfunction methodology used to analyze the correlation properties of a given stochastic process and introduce a specific power - law correlated process with power - law tails . in section [ gauss ] and [ exp ] we will present examples of power - law correlated stochastic processes with gaussian and exponential tails in the stationary pdf , respectively . in section [ concl ] we will draw our conclusions .in this section we will briefly review the family of stochastic processes introduced in ref . and whose ergodicity properties have been investigated in ref .a similar class of such processes has been considered in ref . .let us consider a continuous markovian stochastic process whose pdf is described by the fp equation with constant diffusion coefficient . for the sake of simplicity , in this study we set . in general , the eigenvalue spectrum of the fp equation describing a stationary process consists of a discrete part and a continuous part ]\lambda_c,+\infty[ and in the region , .the time - step was . in the top panel we show the results for the autocorrelation function .the solid ( red ) line shows the theoretical prediction obtained from eq .
, while the open circles show the result of the numerical simulations . by performing a nonlinear fit ( dashed blue line ) , the autocorrelation function shows an asymptotic decay compatible with a power - law , with . in the inset of the top panel we show the numerical simulation ( circles ) relative to the mean square displacement . a nonlinear fit ( solid blue line ) shows that with .the bottom panel shows the stationary pdf of the process . again the solid ( red ) line shows the theoretical prediction of eq . , while the open circles show the result of the numerical simulations . those shown in fig . are time average numerical simulations performed according to the relation : where is the length of the simulated time - series and is one realization of the process . indeed , in order to improve the statistical reliability of our numerical simulations , in the region we have also averaged over a number m of different realizations of the process : the data shown in the figure are the mean and the standard deviations of the autocorrelation values computed in each iteration for each time lag .the values of are in the region ] and in the region , .the time - step was . in the top panel we show the results for the autocorrelation function .the solid ( red ) line shows the theoretical prediction obtained from eq . , while the open circles show the result of the numerical simulations . by performing a nonlinear fit ( dashed blue line ) , the autocorrelation function shows an asymptotic decay compatible with a power - law , with . in the inset of the top panel we show the numerical simulation ( circles ) relative to the mean square displacement . a nonlinear fit ( solid blue line ) shows that with .the bottom panel shows the stationary pdf of the process . again the solid ( red ) line shows the theoretical prediction of eq . , while the open circles show the result of the numerical simulations . those shown in fig . are time average numerical simulations performed according to eq . . differently from the previous case , when simulating the process we directly consider the langevin equation of eq .again , in order to improve the statistical reliability of our numerical simulations , in the region we have also averaged over a number m of different realizations of the process , according to eq . .the data shown in the figure are the mean and the standard deviations of the autocorrelation values computed in each iteration for each time lag .the values of are in the region ] .this would result in a different shape of the stationary pdf in that region . when doing that , the asymptotic power - law behaviour of the autocorrelation function is not modified . in this paper we preferred to consider a linear transformation and in the region $ ] only because this allows us to analytically obtain the eigenfunctions on the whole real axis and to obtain a numerical theoretical prediction for the autocorrelation function of the stochastic processes considered . starting from the process of eq . , stationary pdfs with tails different from exponential or gaussian ones can be obtained by introducing appropriate coordinate transformations . in all cases the autocorrelation functions can be obtained , at least numerically , by using the same approach illustrated in this paper .
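a rough sketch of the kind of numerical procedure described above , i.e. direct euler - maruyama integration of a langevin equation followed by a time - averaged estimate of the autocorrelation function , is given below . the drift used here is a simple linear restoring force chosen only as a placeholder ; it is not the drift of the processes studied in this paper , and the time step and trajectory length are likewise illustrative .

```python
import numpy as np

def simulate_langevin(drift, n_steps, dt, x0=0.0, rng=None):
    """Euler-Maruyama integration of  dx = drift(x) dt + sqrt(2 D) dW  with D = 1."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(n_steps)
    x[0] = x0
    kicks = rng.normal(0.0, np.sqrt(2.0 * dt), size=n_steps - 1)
    for i in range(n_steps - 1):
        x[i + 1] = x[i] + drift(x[i]) * dt + kicks[i]
    return x

def time_average_acf(x, max_lag):
    """Time-averaged autocorrelation  <x(t) x(t + lag)>  for lag = 0 .. max_lag - 1."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.mean(x[:n - lag] * x[lag:]) for lag in range(max_lag)])

# Placeholder drift (a simple restoring force), NOT the potential of the paper;
# averaging over several independent trajectories improves the estimate further.
trajectory = simulate_langevin(lambda y: -y, n_steps=200_000, dt=0.01)
acf = time_average_acf(trajectory, max_lag=500)
```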
to our knowledge , this is the first evidence of power - law correlated stationary markovian processes with gaussian or exponential tails in the stationary pdf .it is worth remarking that the existence of power - law correlated processes with gaussian tails does not contradict the doob theorem , because the doob theorem deals with the case when the process admits a stationary pdf and a 2-point conditional transition probability which are both gaussian on the whole real axis and non singular . in our case we only have gaussian tails in the stationary pdf .our results help in clarifying that even in the context of markovian processes long - range dependencies are not necessarily associated with the occurrence of extreme events .it is worth mentioning that the processes introduced in section [ gauss ] and section [ exp ] are in the basin of attraction of the gumbel distribution , although the one of eq . is in the basin of attraction of the frechet distribution .moreover , our results can be relevant in the modeling of complex systems with long memory .in fact , processes with long - range interactions are often modeled by means of the fractional brownian motion ( fbm ) , multifractal processes , memory kernels and others .here we provide simple processes associated with langevin equations , thus showing that memory effects can still be modeled in the context of continuous time stationary markovian processes , i.e. even assuming the validity of the chapman - kolmogorov equation .r. durbin , s. eddy , a. krogh and g. mitchison _ biological sequence analysis _ , ( cambridge university press , cambridge , 2001 ) .bouchaud and m. potters _ theory of financial risk and derivative pricing : from statistical physics to risk management _ , ( cambridge university press , cambridge , 2003 ) .
|
in this paper we give explicit examples of power - law correlated stationary markovian processes where the stationary pdf shows tails which are gaussian or exponential . these processes are obtained by simply performing a coordinate transformation of a specific power law correlated additive process , already known in the literature , whose pdf shows power - law tails . we give analytical and numerical evidence that although the new processes ( i ) are markovian and ( ii ) have gaussian or exponential tails their autocorrelation function still shows a power - law decay where grows with with a law which is compatible with . when the process , although markovian , is long - range correlated . our results help in clarifying that even in the context of markovian processes long - range dependencies are not necessarily associated to the occurrence of extreme events . moreover , our results can be relevant in the modeling of complex systems with long memory . in fact , we provide simple processes associated to langevin equations thus showing that long memory effects can be modeled in the context of continuous time stationary markovian processes .
|
packet sampling is an integral part of passive network measurement on today s internet .the high traffic volumes on backbone networks and the pressure on routers has resulted in the need to control the consumption of resources in the measurement infrastructure .this has resulted in the definition and use of estimated statistics by routers , generated based on sampling packets in each direction of each port on the routers .the aims of this paper is to analyse the effects of the sampling process as operated by netflow , the dominant standard on today s routers .there are three constraints on a core router which lead to the use packet sampling : the size of the record buffer , the cpu speed and the record look - up time . in , it is noted that in order to manage and analyse the performance of a network , it is enough to look at the basic statistical measures and summary statistics such as average range , variance , and standard deviation .however , in this paper we analyse both analytically and practically the accuracy of the inference of original characteristics from the sampled stream when higher order statistics are used .this paper focuses on the inference of original network traffic characteristics for flows from a sampled set of packets and examines how the sampling process can affect the quality of the results . in this context , a flow is identified specifically , as the tuple of the following five key fields : source ip address , destination ip address , source port number , destination port number , layer 4 protocol type . a router at the core of an internet link is carrying a large number of flows at any given time .this pressure on the router entails the use of strict rules in order to export the statistics and keep the router memory buffer and cpu resources available to deal with changes in traffic patterns by avoiding the handling of large tables of flow records .rules for expiring netflow cache entries include : * flows which have been idle for a specified time are expired and removed from the cache ( 15 seconds is default ) * long lived flows are expired and removed from the cache ( 30 minutes is default ) * as the cache becomes full a number of heuristics are applied to aggressively age groups of flows simultaneously * tcp connections which have reached the end of byte stream ( fin ) or which have been reset ( rst ) will be expired distributions studies have been done extensively in literature . in brief conclusion ,internet traffic is believed to have heavy - tailed distribution , self - similar nature , long range dependence .sampling has the following effects on the flows : * it is easy to miss short flows * mis - ranking on high flows * sparse flow creation _ packet sampling : _ + the inversion methods are of little to no use in practice for low sampling probability , such as ( 1 packet in 100 ) or smaller , and become much worse as becomes smaller still .for example , on the abilene network , 50% sampling was needed to detect the top flow correctly ._ flow sampling : _+ preserves flows intact and the sampling is done on the flow records . in practice , any attempt to gather flow statistics involves classifying individual packets into flows . 
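as a concrete illustration of the packet - sampling pipeline discussed above , the sketch below emulates deterministic 1-in - n selection , classification of the surviving packets into flows keyed on the five - tuple with netflow - like idle ( 15 seconds ) and active ( 30 minutes ) timeouts , and the naive inversion of first - order totals by multiplication with the sampling rate . the record structure and function names are illustrative only and do not describe the actual netflow implementation ( in particular , tcp fin / rst expiry and cache - pressure heuristics are not modelled ) .

```python
from collections import namedtuple

Packet = namedtuple("Packet", "ts src dst sport dport proto size")

def sample_one_in_n(packets, n=1000):
    """Deterministic 1-in-N packet sampling."""
    return [p for i, p in enumerate(packets) if i % n == 0]

def build_flows(packets, idle_timeout=15.0, active_timeout=1800.0):
    """Group packets into flow records keyed on the five-tuple, expiring a
    record when it has been idle for 15 s or has lived for 30 minutes."""
    active, expired = {}, []
    for p in packets:
        key = (p.src, p.dst, p.sport, p.dport, p.proto)
        rec = active.get(key)
        if rec is not None and (p.ts - rec["last"] > idle_timeout
                                or p.ts - rec["first"] > active_timeout):
            expired.append(active.pop(key))
            rec = None
        if rec is None:
            rec = {"key": key, "first": p.ts, "last": p.ts, "pkts": 0, "octets": 0}
            active[key] = rec
        rec["pkts"] += 1
        rec["octets"] += p.size
        rec["last"] = p.ts
    return expired + list(active.values())

def invert_totals(sampled_flows, n=1000):
    """Simple inversion of first-order statistics: multiply sampled totals by N."""
    return {"pkts": n * sum(f["pkts"] for f in sampled_flows),
            "octets": n * sum(f["octets"] for f in sampled_flows)}
```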
all packet meta - data has to be organised into flows before sampling can take place .this involves more cpu load and more memory if one uses the traditional hash table approach with one entry per flow .new flow classification techniques , such as bitmap algorithms , could be applied but there is no practical usage in this manner currently .in this section we look at a more detailed analysis of the effect of sampling as performed by netflow on higher order statistics of the packet and flow size distributions . for the analysis of the packet sampling application used by netflow , we emulated the netflow operation on a 1 hour oc-48 trace , collected from the caida link on of april 2003 .this data set is available from the public repository at caida .the trace comprises 84579462 packets with anonymised source and destination ip addresses . an important factor to remember in this work is the fact that the memory constraint on the router has been relaxed in generating the flows from the sampled stream .this means that there may be more than tens of thousands of flow keys present in the memory at a given time , while in netflow , the export mechanism empties the buffer list regularly , which can have a more severe impact on the resultant distribution of flow rates and statistics . table [ tabledt ] illustrates the data rates per interval of measurement .inverted data rates , by dividing by the sampling probability , are shown as . ( table [ tablept ] : the statistical properties on data rates ; table layout omitted . ) the kurtosis decreases in all of the considered examples .this means that the reconstructed streams are more homogeneous and less prone to outliers when compared with the original traces .thus , more of the variance in the original traces in packet size can be attributed to infrequent , inordinately large packets that were missed in the sampling process , and thus the variance in the reconstructed stream consists more of homogeneous differences and not large outliers . however , both the reconstructed and unsampled streams are leptokurtic and thus tend to have long , heavy tails .the two - sample ks test is one of the most useful and general non - parametric methods for comparing two samples , as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples .a cdf was calculated for the number of packets per flow and the number of octets per flow for each of the 120 sampling intervals of 30 seconds each , both for the sampled / inverted and unsampled streams .then , a two - sample kolmogorov - smirnov test with significance level was performed between the 120 unsampled and the 120 sampled & inverted distributions . in every case the distributions before and after sampling and inversion were found to be significantly different , and thus it is very clear that the sampling and inversion process significantly distorts the actual flow behaviour of the network .the effects of sampling on network traffic statistics can be measured from different perspectives .
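a minimal sketch of the per - interval comparison described above , i.e. a two - sample kolmogorov - smirnov test between the unsampled and the sampled - and - inverted flow - length distributions together with the excess kurtosis of both samples , could look as follows ; the 5% significance level is an assumed default rather than a value quoted by the text , and the data arrays are taken to be supplied by the measurement emulation .

```python
import numpy as np
from scipy.stats import ks_2samp, kurtosis

def compare_interval(unsampled_lengths, inverted_lengths, alpha=0.05):
    """Compare one measurement interval: KS test plus excess kurtosis of both samples."""
    stat, p_value = ks_2samp(unsampled_lengths, inverted_lengths)
    return {
        "ks_statistic": stat,
        "p_value": p_value,
        "significantly_different": p_value < alpha,
        "kurtosis_unsampled": kurtosis(np.asarray(unsampled_lengths)),
        "kurtosis_inverted": kurtosis(np.asarray(inverted_lengths)),
    }
```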
in this sectionwe will cover the theories behind the sampling strategy and use some real data captures from caida in an emulation approach to demonstrate the performance constraints of systematic sampling .the great advantage of sampling is the fact that the first order statistics do not show much variation when the sampling is done at consistent intervals and from a large pool of data .this enables the network monitoring to use the sampled statistics to form a relatively good measure of the aggregate measure of network performance .figure [ compare ] displays the data rates , in number of bytes seen per 30 second interval , on the one hour trace .the inverted data is also shown with diamond notation , showing the statistics gathered after the sampled data is multiplied by the sampling rate .the black dots display the relative error per interval , .figure [ pktscompare ] displays the packet rates , the number of packets per 30 second interval , versus the sampled and inverted packet rates . in this figure, it can be observed that the inversion does a very good job at nearly all times and the relative error is negligible .this is a characteristics of systematic sampling and is due to the central limit theorem .it can be readily seen that the recovery of packet rates by simple inversion is much better than the recovery of data rates .this is because sampling one in a thousand packets deterministically can be trivially inverted by multiplying by the sampling rate ( 1000 ) : we focus on packet level measurement , as opposed to a flow level measurement . if the whole traffic flow is collapsed into a single link , then if we sample one packet out every thousand and then multiply that by the sampling rate , we will get the total number of packets in that time window .we believe that the small differences that we can see in figure [ pktscompare ] are due to the fact that at the end of the window some packets are lost ( because their ` representative ' was not sampled ) or overcounted ( a ` representative ' for 1000 packets was sampled but the time interval finished before they had passed ) .we believe these errors happen between measurement windows in time , i.e. they are window - edge effects .the inversion property described above does not hold for measuring the number of bytes in a sampling interval .simple inversion essentially assumes that all packets in a given flow are the same size , and of course this assumption is incorrect .it is to be expected that the greater the standard deviation of packet size over an individual flow , the more inaccurate the recovery by simple inversion will be regarding the number of bytes per measurement interval .figures [ dataerr ] and [ pkterror ] displays the standard error rate on data rate and packet rate recovery respectively , in different measurement intervals .figure [ pcktscdf ] , displays the cdf of packet size distribution in all the flows formed from the sampled and unsampled streams .the little variation in the packet size distribution conforms to the findings of the previous section where it was discussed that the packet sampling has low impact on the packet size distribution .figure [ flowpckts]:1 shows the effect that the distribution of packet lengths can have on the distribution of flow lengths when periodic packet sampling is applied . as flows reconstructed from a sampled packet stream are predominantly formed by just one packet, their length distribution follows that of single packets ( figure [ pcktscdf ] ) . 
that is the reason for the sharp jump near 1500 octets , as this characteristic originates from the maximum frame size in ethernet networks . ( figure [ flowpckts ] caption , right panels : length in bytes per flow , original vs inverted . ) from figure [ flowpckts]:2 , it can be readily seen that , in the sampled stream , more than 90 percent of flows consist of a single packet , whereas in the unsampled case a much greater diversity in flow lengths exists for small flows .this is due to the fact that simple packet - based deterministic sampling under - represents short flows , and those short flows that are indeed detected by the procedure after sampling usually consist of a single packet .thus , short flows are either lost or recovered as single packet flows , and long flows have their lengths reduced .in this paper we have reviewed the effects of sampling and flow record creation , as done by netflow , on the traffic statistics which are reported by such a process .it is inevitable that systematic sampling can no longer provide a realistic picture of the traffic profile present on internet links .the emergence of applications such as video on - demand , file sharing , streaming applications and even on - line data processing packages prevents the routers from reporting an optimal measure of the traffic traversing them . in the inversion process , it is a mistake to assume that the inversion of statistics by multiplication by the sampling rate is indicative of even the first order statistics such as packet rates .an extension to this work and the inversion problem entails the use of more detailed statistics such as port numbers and tcp flags in order to be able to infer the original characteristics from the probability distribution functions of such variables .this will enable a more detailed recovery of original packet and data rates for different applications .the inference of such probabilities , plus use of methods such as bayesian inference , would enable a forecasting method which would enable the inversion of the sampled stream in near real time . in a related work , we will be looking at alternative flow synthesis schemes , looking at techniques replacing netflow , such as use of hashing techniques using bloom filters .the use of a lightweight flow indexing system will allow for a larger number of flows to be present at the router , possibly increasing the memory constraints and allowing for a higher sampling rate , which will in turn lead to more accurate inversion .there has been a great deal of work done on the analysis of the sampling process and the inversion problem .choi et al . have explored the sampling error and measurement overhead of netflow in , though they have not looked at the inversion process . in ,the authors have compared the netflow reports with those obtained from snmp statistics and packet level traces , but without using the sampling feature of netflow which is perhaps the dominant version in use nowadays .estan et al . have proposed a novel method of adapting the sampling rate at a netflow router in order to keep the memory resources at a constant level .this is done by upgrading the router firmware , which can be compromised by an attacker injecting varying traffic volume in order to take down the router .also this work has not considered the flow length statistics which are the primary focus of our work .hohn et al .
have proposed a flow sampling model which can be used in an offline analysis of flow records formed from an unsampled packet stream . in this model the statistics of the original stream are recovered to a great extent .however the intensive computing and memory resources needed in this process prevent the implementation of such a scheme on high - speed routers .they prove it impossible to accurately recover statistics from a packet sampled stream , but based on the assumption of packets being independent and identically distributed . roughan at has looked at statistical processes of active measurement using poisson and uniform sampling and has compared the theoretical performance of the two methods .papagiannaki et al . at have discussed the effect of sampling on tiny flows when looking at generation of traffic matrices .authors at have been looking at anomaly detection using flow statistics , but without sampling . in and , authors have looked at inferring the numbers and lengths of flows of original traffic that evaded sampling altogether .they have looked at inversion via multiplication .the authors would like to acknowledge caida for providing the trace files .this work is conducted under the masts ( epsrc grant gr / t10503 ) and the 46paq project ( epsrc grant gr / s93707 ) .will leland , murad taqqu , walter willinger , and daniel wilson , on the self - similar nature of ethernet traffic ( extended version ) , ieee / acm transactions on networking , vol .1 , pp . 1 - 15 , february 1994 . sommer , r. and feldmann , a. 2002 .netflow : information loss or win ? . in proceedings of the 2nd acm sigcomm workshop on internet measurement ( marseille , france , november 06 - 08 , 2002 ) .acm press , new york , ny , 173 - 174 .doi= http://doi.acm.org/10.1145/637201.637226 barakat , c. , iannaccone , g. , and diot , c. 2005 .ranking flows from sampled traffic . in proceedings of the 2005 acm conference on emerging network experiment and technology ( toulouse , france , october 24 - 27 , 2005 ) .acm press , new york , ny , 188 - 199 .performance and fault management ( cisco press core series ) by paul l della maggiora ( author ) , james m. thompson ( author ) , robert l. pavone jr .( author ) , kent j. phelps ( author ) , christopher e. elliott ( editor ) , publisher : cisco press ; 1st edition , isbn : 1578701805 brauckhoff , d. , tellenbach , b. , wagner , a. , may , m. , and lakhina , a. 2006 .impact of packet sampling on anomaly detection metrics . in proceedings of the 6th acm sigcomm on internet measurement ( rio de janeiro , brazil , october 25 - 27 , 2006 ) .acm press , new york , ny , 159 - 164 .
|
_ the high volume of packets and packet rates of traffic on some router links makes it exceedingly difficult for routers to examine every packet in order to keep detailed statistics about the traffic which is traversing the router . sampling is commonly applied on routers in order to limit the load incurred by the collection of information that the router has to undertake when evaluating flow information for monitoring purposes . the sampling process in nearly all cases is a deterministic process of choosing 1 in every packets on a per - interface basis , and then forming the flow statistics based on the collected sampled statistics . even though this sampling may not be significant for some statistics , such as packet rate , others can be severely distorted . however , it is important to consider the sampling techniques and their relative accuracy when applied to different traffic patterns . + the main disadvantage of sampling is the loss of accuracy in the collected trace when compared to the original traffic stream . to date there has not been a detailed analysis of the impact of sampling at a router in various traffic profiles and flow criteria . in this paper , we assess the performance of the sampling process as used in netflow in detail , and we discuss some techniques for the compensation of loss of monitoring detail . _
|
it is now well known that many deterministic worst - case robust analysis and synthesis problems are np hard , which means that the exact analysis and synthesis of the corresponding robust control problems may be computationally demanding . on the other hand, the deterministic worst - case robustness measures may be quite conservative due to overbounding of the system uncertainties . as pointed out in by khargonekar and tikku ,the difficulties of deterministic worst - case robust control problems are inherent to the problem formulations and a major change of the paradigm is necessary .an alternative to the deterministic approach is the probabilistic approach which has been studied extensively by stengel and co - workers , see for example , , and references therein . aimed at breaking through the np - hardness barrier and reducing the conservativeness of the deterministic robustness measures , the probabilistic approach has recently received renewed attention in the work by barmish and lagoa , barmish , lagoa , and tempo , barmish and polyak , khargonekar and tikku , bai , tempo , and fu , tempo , bai , and dabbene , yoon and khargonekar , zhu , huang and doyle , chen and zhou and references therein . in addition to its low computational complexity , the advantages of randomized algorithms can be found in the flexibility and adaptiveness in dealing with control analysis or synthesis problems with complicated constraints or in the situation of handling nonlinearities .the robust control analysis and synthesis problems under constraints are , in general , very hard to deal with in the deterministic framework . for example, it is well - known that a multi - objective control problem involving mixed and objectives is very hard to solve even though there are elegant solutions to the pure or problems . in this paper, we first show that most of the robust control problems can be formulated as constrained optimal synthesis or robust analysis problems .since the exact robust analysis or synthesis is , in general , impossible , we seek a ` reliable ' solution by using randomized algorithms . roughly speaking , by ` reliability ' we mean how closely the solution produced by randomized algorithms approaches the exact one . in this paper , we measure the degree of ` reliability ' in terms of accuracy and confidence level . actually , terminologies like ` accuracy ' and ` confidence level ' have been used in and , where accuracy is referred to as an upper bound of the absolute _ volume _ of a subset of parameter space . however , in this paper , we emphasize that the accuracy is an upper bound for the _ ratio _ of the _ volume _ of the _ constrained subset _ with respect to the volume of the parameter space .for example , when estimating the minimum of a quantity over the ratio may be where is an estimate produced by randomized algorithms for quantity .we can see that the ratio of volume is a better indicator of the ` reliability ' than the absolute .based on this measure of ` reliability ' , we propose two different approaches aimed at seeking a solution to the robust analysis or optimal synthesis problem with a certain a priori specified degree of ` reliability ' .one is the * direct approach*. the key issue is to determine the number of samples needed to be generated from the parameter space for a given reliability measure .actually , khargonekar and tikku in have applied a similar approach to the stability margin problem , though the measure of ` reliability ' is in terms of the absolute volume .
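in the randomized setting just described , both approaches ultimately amount to drawing independent samples of the uncertain parameters , evaluating a scalar stability or performance quantity for each sample , and choosing the number of samples so that the resulting estimate is ` reliable ' . a minimal sketch of these two ingredients is given below : a monte carlo estimate of the probability of instability based on the spectral abscissa , and the textbook distribution - free bound n >= ln(1/delta)/ln(1/(1-epsilon)) , which guarantees with confidence 1 - delta that at most a fraction epsilon of the parameter set yields a value above the empirical maximum . the callables a_of_q and sample_q are placeholders to be supplied by the user , and the bound quoted is the standard one rather than the refined bounds derived below , which additionally involve the volume ratio of the constrained subset .

```python
import math
import numpy as np

def spectral_abscissa(A):
    """Largest real part of the eigenvalues; negative means Hurwitz stable."""
    return float(np.max(np.linalg.eigvals(A).real))

def estimated_instability_probability(A_of_q, sample_q, n_samples, rng=None):
    """Monte Carlo estimate of the probability that a random uncertainty sample
    q renders the state matrix A(q) unstable."""
    rng = np.random.default_rng() if rng is None else rng
    hits = sum(spectral_abscissa(A_of_q(sample_q(rng))) >= 0.0
               for _ in range(n_samples))
    return hits / n_samples

def sample_size_for_extremum(eps, delta):
    """Distribution-free sample size: with probability at least 1 - delta, at most
    a fraction eps of parameter space exceeds the empirical maximum (textbook bound)."""
    return math.ceil(math.log(1.0 / delta) / math.log(1.0 / (1.0 - eps)))

def empirical_range(quantity, sample_q, n_samples, rng=None):
    """Monte Carlo estimates of the minimum and maximum of a quantity over
    randomly sampled uncertain parameters."""
    rng = np.random.default_rng() if rng is None else rng
    values = np.array([quantity(sample_q(rng)) for _ in range(n_samples)])
    return values.min(), values.max()

# Example: eps = 0.01, delta = 1e-4 gives N = 917.
print(sample_size_for_extremum(0.01, 1e-4))
```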
in that paper ,a sufficient condition is derived on the sample size required to come up with a ` reliable ' estimate of the robust stability margin ( see theorem 3.3 in ) . in this paper, we also derive the bound of sample size and give the sufficient and necessary condition for the existence of minimum distribution - free samples size .our result shows that , the bound of sample size necessarily involves .thus estimating becomes essential .unfortunately , estimating is time - consuming and the resulted sample size is not accurate . to overcome this difficulty, we propose and strongly advocate another approach the * indirect approach*. the key issue is to determine the _ constrained sample size _ , which is the number of samples needed that fall into the constrained subset .we derive bounds of constrained sample size and give the sufficient and necessary condition for the existence of minimum distribution - free constrained samples size .the bounds do not involve and can be computed exactly .this result makes it possible to obtain a reliable solution without estimating the volume of the constrained parameter subset .this paper is organized as follows .section 2 presents the problem formulation and motivations . in section 3, we derive the exact distribution of related order statistics without the continuity assumption .distribution free tolerance interval and estimation of quantity range is discussed in section 4 .section 5 gives the minimum sample size under various assumptions .let }^{t} ] we can now consider several robustness problems : * robust stability : let and where denotes the -th eigenvalue of .then the system is robustly stable if * stability margin : assume that belongs to the class of allowable perturbations which has a certain block structure . for a given real number , let denote the subset of perturbations in with size at most , i.e. , the _ robustness measure _ is defined as the smallest allowable perturbation that destabilizes the feedback interconnection .let be an upper bound for .define parameter space by and constrained subset by let it follows that the stability margin problem is equivalent to computing * robust performance : suppose is stable for all .define then the robust performance problem is to determine if is satisfied for some prespecified . * performance range : let be a given set of parameters such that is stable for all . define again .then the problem of determining the range of the system s performance level can be formulated as finding .as another example , consider the problem of designing a controller for an uncertain system .suppose that is a vector of controller parameters to be designed and that the controller is connected with in a lower lft setup .let the transfer function of the whole system be denoted as .suppose that has the following state space realization ] .let the observations which belong to be denoted as .define order statistics as the smallest one of the set of observations , i.e. , . obviously , it is reasonable to take as an estimate for and as an estimate for if is sufficiently large . henceforth , we need to know which guarantees and ( or ) .we call _ constrained sample size_. * * direct approach * let be i.i.d samples generated by the same distribution function . define .let be the number of the elements in .then is a random number . if we denote the elements of as .define order statistics as the smallest one of the set of observations , i.e. 
, .in particular , let and .we need to know which guarantees and ( or ) .we call _ global sample size_.define . to compute the probabilitiesinvolved in section , it is important to know the associated distribution of any random variables where is order statistics in the context of the indirect approach .first , we shall established the following lemma .[ main3 ] let u be a random variable with uniform distribution over ] .then the number of samples of u which fall into $ ] is .it is easy to see that the event is equivalent to event .furthermore , the event is equivalent to the event .therefore , [ main1 ] let and , , , .define and .define and . then and the last equality holds if and only if .define and where can be arbitrary small .. we can show that if .in fact , if this is not true , we have . because can be arbitrarily small , we have .notice that , we have . on the other hand , by definition we knowthat and thus , which is a contradiction .notice that is nondecreasing and right - continuous , we have and and that event is equivalent to the event .furthermore , event is equivalent to event which is defined by constraints . for every ,delete constraint if there exists such that .let the remaining constraints be where .since all constraints deleted are actually redundant , it follows that event is equivalent to event .now let be the number of observations which fall into .then the number of observations which fall into is .it is easy to see that the event is equivalent to the event .furthermore , the event is equivalent to event . therefore }^{j_{s}}\;{[1- f_{u}({{\alpha}^{'}_{k^{'}}}^{- } ) ] } ^{n-\sum_{l=1}^{k^{'}}j_{l}}\\ & = & \sum_{(j_{1},\cdots , j_{k^ { ' } } ) \in { \bf i } _ { { i}^{'}_{1 } , \cdots , { i}^{'}_{k^ { ' } } } } g_{j_{1 } , \cdots , j_{k^{'}}}\left(\phi^{'}_{1 } , \cdots,\phi^{'}_{k^{'}}\right).\end{aligned}\ ] ] now consider event . for every , delete constraint if there exists such that .notice that and if , the remaining constraints must be where and .since all constraints deleted are actually redundant , it follows that event is equivalent to event . by theorem in andlemma [ main3 ] therefore , .it follows that notice that .we have by the definitions of and , we know that is the closure of , i.e. , and that their lebesgue measures are equal .it follows that notice that , we have and hence where the equality holds if and only if , i.e. , . for the special case of and that is absolutely continuous , can be obtained by combining _ probability integral transformation theorem _ and theorem in .however , in robust control problem , the continuity of is not necessarily guaranteed .for example , is not continuous when uncertain quantity equals to a constant in an open set of . we can come up with many uncertain systems in which the continuity assumption for the distribution of quantity is not guaranteed .since it is reasonable to assume that is measurable , theorem [ main1 ] can be applied in general to tackle these problems without continuity assumption by a probabilistic approach .in addition , theorem [ main1 ] can be applied to investigate the minimum computational effort to come up with a solution with a certain degree of ` reliability ' for robust analysis or optimal synthesis problems under constraints . from the proof of theorem [ main1 ], we can see that is not related to the knowledge of , thus we have the following corollary . [ main9 ]. 
then .in robust analysis or synthesis , it is desirable to know function because it is actually the distribution function of quantity for .however , the exact computation of function is in general impossible .we shall extract as much as possible the information of from observations .let for .[ cccesu ] with the equality holds if and only if such that .moreover , with the equality holds if and only if such that .let .let the cumulative distribution function of be and define order statistics as the -th smallest one of the set of observations , i.e. , .obviously , for any .it is also clear that , which leads to apply theorem [ main1 ] to the case of , we have where . therefore , the equality holds if and only if such that because such a exists if and only if .it follows that with the equality holds if and only if such that .the second part follows by applying theorem [ main1 ] to the case of .it is important to note that the two conditions in theorem [ cccesu ] are much weaker than the continuity assumption which requires that for any there exists such that .the difference is visualized in figure [ fig ] . in general, it is important to know the probability of a quantity falling between two arbitrary samples . to that end, we have [ general ] let .suppose in any open set of .then since the condition that in any open set of is equivalent to the absolute continuity assumption of ( see the proof of theorem in ) , the proof of corollary [ general ] can be completed by applying theorem [ main1 ] to the case of , , and is continuous .the important issue of the randomized algorithms to robust analysis or optimal synthesis is to determine the minimum computational effort required to come up with a solution with a certain degree of ` reliability ' .first , we consider this issue for the indirect approach .notice that with in the context of the indirect approach . by corollary [ main9 ] ,we know that apply theorem [ cccesu ] to the case of , we have with the equality holds if and only if such that .therefore = 1-(1-\varepsilon\rho)^{n}\ ] ] with the equality holds if and only if such that . finally , notice that if and only if .this completes the proof .it should be noted that sufficiency part of the preceding theorem has been obtained in in the context of estimating robust stability margin . by the similar argument as that of theorem [ mm ] , we have the following result for estimating ( or determine parameter achieving ) . by corollary [ main9 ] and corollary [ ccc ] , we have .\end{aligned}\ ] ] therefore , \right)\\ & = & 1-\sum_{i=0}^{n}{n \choose i } { \rho}^{i}{(1-\rho)}^{n - i}{(1-\varepsilon)}^{i-1}[1+(i-1)\varepsilon]\\ & = & 1-\frac{1}{1-\varepsilon}\sum_{i=0}^{n}{n \choose i } { ( ( 1-\varepsilon)\rho)}^{i}{(1-\rho)}^{n - i}+\frac{\varepsilon}{1-\varepsilon}\sum_{i=0}^{n}{n \choose i } { ( ( 1-\varepsilon)\rho)}^{i}{(1-\rho)}^{n - i}\\ & & -n\varepsilon\rho\sum_{i=1}^{n}{n-1 \choose i-1 } { ( ( 1-\varepsilon)\rho)}^{i-1}{(1-\rho)}^{n-1-(i-1)}\\ & = & 1-\frac{1}{1-\varepsilon}{(1-\varepsilon\rho)}^{n } + \frac{\varepsilon}{1-\varepsilon}{(1-\varepsilon\rho)}^{n } -n\rho\varepsilon{(1-\varepsilon\rho)}^{n-1}\\ & = & 1-{(1-\varepsilon\rho)}^{n-1}[1+(n-1)\varepsilon\rho]\\ & = & 1-\mu(n),\end{aligned}\ ] ] which implies that if and only if .e. w. bai , r. tempo , and m. fu , `` worst - case properties of the uniform distribution and randomized algorithms for robustness analysis , '' _ proc .of american control conference _ , pp .861 - 865 , albuquerque , new mexico , june , 1997 .b. r. 
barmish , c. m. lagoa , and r. tempo , `` radially truncated uniform distributions for probabilistic robustness of control systems , '' _ proc .of american control conference _ , pp .853 - 857 , albuquerque , new mexico , june , 1997 .x. chen and k. zhou , `` on the probabilistic characterization of model uncertainty and robustness '' _ proc .36th ieee conference on decision and control _ , no .3816 - 3821 , san diego , california , 1997 .p. p. khargonekar and a. tikku ,`` randomized algorithms for robust control analysis and synthesis have polynomial complexity '' proceedings of the 35th conference on decision and control , pp .3470 - 3475 , kobe , japan , december 1996 .o. toker and h. zbay ( 1995 ) .`` on the np - hardness of the purely complex computation , analysis / synthesis , and some related problems in multidimensional systems , '' _ proc .american control conference _ ,seattle , washington , pp .447 - 451 .a. yoon and p. p. khargonekar , `` computational experiments in robust stability analysis , '' _ proc . of 36th ieee conference on decision and control_ , pp . 3260 - 3265 , san diego , california , 1997 .
|
in this paper , we consider robust control using randomized algorithms . we extend the existing order statistics distribution theory to the general case in which the distribution of population is not assumed to be continuous and the order statistics is associated with certain constraints . in particular , we derive an inequality on distribution for related order statistics . moreover , we also propose two different approaches in searching reliable solutions to the robust analysis and optimal synthesis problems under constraints . furthermore , minimum computational effort is investigated and bounds for sample size are derived .
|
ultra - fast dynamics in condensed matter in a picosecond ( ps ) to a 100-ps regime on atomic- to meso - scales is still inaccessible for studies using any known experimental probe .a gap remains in experimental capabilities between the low - frequency ( visible and ultraviolet light ) and high - frequency ( x - rays and neutrons ) inelastic scattering techniques .figure [ fig000 ] shows how the time - length space or the relevant energy - momentum space of excitations in condensed matter is accessed by different inelastic scattering probes : neutron ( ins ) , x - ray ( ixs ) , ultraviolet ( iuvs ) , and brillouin ( bls ) ; as well as how the remaining gap could be closed by enhancing inelastic x - ray scattering capabilities .ultra - high - resolution ixs ( uhrixs ) has the potential to enter the unexplored dynamic range of excitations in condensed matter .this would , however , require achieving a very high spectral resolution on the order of 0.1 mev , and momentum transfer resolution around 0.01 nm ( light green area in fig . [ fig000 ] ) . in approaching this goal ,a novel ixs spectrometer has been demonstrated recently ; the spectral resolution improved from 1.5 mev to 0.6 mev , the momentum transfer resolution improved from 1 nm to 0.25 nm ( dark - green and green areas in fig .[ fig000 ] , respectively ) , and the spectral contrast improved by an order of magnitude compared to the traditional ixs spectrometers .the gap became narrower , but did not close . ( caption of fig . [ fig000 ] : time - length and relevant energy - momentum space of excitations in condensed matter and how it is accessed by different inelastic scattering probes : neutron ( ins ) , x - ray ( ixs ) , ultraviolet ( iuvs ) , and brillouin ( bls ) .the ultra - high - resolution ixs spectrometer presented in ref . entered the previously inaccessible region marked in green .the novel capabilities discussed in the present paper will enable ixs experiments with even higher resolution , 0.1-mev and 0.01-nm , in the region marked in light green , and will close completely the existing gap between the high - frequency and low - frequency probes .the energy and the momentum transfers from initial to final photon / neutron states are measured in inelastic scattering experiments , schematically shown in the oval inset . ) the outstanding problems in condensed matter physics , such as the nature of the liquid to glass transitions , have yet to be fully addressed . here we propose an approach to how this problem could be solved , and how uhrixs spectrometers could become efficient imaging optical devices .this approach is a further development of the proposal presented in . in a typical ixs experiment , x - rays incident on a sample are monochromatized to a very small bandwidth corresponding to a desired energy resolution .the spectral analysis of photons scattered from a sample is performed by an x - ray analyzer , featuring the same spectral bandwidth and acting like a spectral slit .monochromatization from approximately a -ev to a -mev bandwidth results in a dramatic reduction of the photon flux generated by undulator sources at synchrotron radiation facilities , typically by more than five orders of magnitude.
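as a rough guide to the time - length and energy - momentum correspondence invoked above , the usual order - of - magnitude mapping t ~ hbar / e and d ~ 2 pi / q can be evaluated directly ; the sketch below is purely illustrative and simply expresses hbar in mev times ps .

```python
import numpy as np

HBAR_MEV_PS = 0.6582119569  # reduced Planck constant in meV * ps

def timescale_ps(energy_mev):
    """Characteristic time scale probed by an energy transfer E: t ~ hbar / E."""
    return HBAR_MEV_PS / energy_mev

def lengthscale_nm(q_inv_nm):
    """Characteristic length scale probed by a momentum transfer Q: d ~ 2*pi / Q."""
    return 2.0 * np.pi / q_inv_nm

# A 0.1-meV energy resolution corresponds to roughly 6.6 ps,
# and a 0.01 nm^-1 momentum-transfer resolution to roughly 630 nm.
print(timescale_ps(0.1), lengthscale_nm(0.01))
```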
the angular acceptance , mrad , of the analyzer is much large than the angular acceptance , , of the monochromator ; however , it is still orders of magnitude smaller than the total solid angle of scattering by the sample .it is also much smaller than the -mev window desired for the spectral analysis .all this results in very small countrates hz in ixs experiments .further improvements to the 0.1-mev resolution using such an approach would only result in yet another substantial reduction of the countrate and time - consuming experiments .a possible solution to this problem would be to create a spectrometer that would not only feature the high spectral resolution , but would also be capable of imaging x - ray spectra in a broad spectral window .we will refer to such optical devices as x - ray spectrographs .czerny - turner type spectrographs are now standard in infrared , visible , and ultraviolet spectroscopies . in its classical arrangement , a spectrographis comprised of four elements [ see fig .[ fig001](a ) ] : ( 1 ) a collimating mirror , m , that collects photons from a radiation source , s , and collimates the photon beam ; ( 2 ) a dispersing element , de , such as a diffraction grating or a prism , which scatters photons of different energies into different directions due to angular dispersion ; ( 3 ) a curved mirror , m , that focuses photons of different energies into different locations due to linear dispersion ; and ( 4 ) a spatially sensitive detector , det , placed in the focal plane to record the whole photon spectrum . the feasibility of hard x - ray angular - dispersive spectrographs of the czerny - turner type has been discussed in . a hard x - ray equivalent of the diffraction grating is a bragg diffracting crystal with diffracting atomic planes at an asymmetry angle to the entrance crystal surface [ see fig . [ fig001](b ) ] .angular dispersion rates attainable in a single bragg reflection are typically small , / mev , and are the main obstacle to realizing hard x - ray spectrographs .the angular dispersion rate can be enhanced dramatically , by almost two orders of magnitude , by successive asymmetric bragg reflections compared to that in a single bragg reflection . an enhanced angular dispersion rate in multi - crystal arrangements is crucial for the feasibility of hard x - ray angular - dispersive spectrographs .an x - ray angular - dispersive spectrograph was demonstrated experimentally in , using the so - called multi - crystal collimation - dispersion - wavelength - selection ( cdw ) optic , achieving spectral resolution of better than with 9.1 kev x - ray photons .however , the spectral window in which the cdw optic permitted imaging x - ray spectra was small , about . increasing the spectral window of ultra - high - resolution x - ray spectrographs , is extremely important . in pursuing this goal , and in seeking solutions to this problem, a theory of hard x - ray spectrographs is developed here . in section [ rtm ] , a ray - transfer matrix technique applied to propagate x - rays through complex optical x - ray systems in the paraxial approximation .the following systems are considered : successive bragg reflections from crystals ( section [ successive - bragg - reflections ] ) , focusing system ( section [ focusing - system ] ) , focusing monochromators ( section [ focusing - monochromators ] ) , and finally czerny - turner - type spectrographs ( section [ spectrograph - section ] ) . 
solutions for broadband hard x - ray imaging spectrographs are considered in section [ broadband - spectrographs ] .several `` diffraction grating '' designs for hard x - ray spectrographs are proposed to ensure a high energy resolution , broad spectral window of imaging , and large angular acceptance .spectrographs with an energy resolution of mev and a spectral window of imaging up to mev are shown to be feasible for ixs applications in section [ cdw - uhrix ] and section [ two - cristal - uhrix ] . in section [ two - cristal - rixs ] , a spectrograph with a 1-mev spectral resolution and 85-mev spectral imaging windowis considered for cu k - edge resonant ixs ( rixs ) applications .the main goal of this article is to develop a theory of czerny - turner - type hard x - ray spectrographs . the conceptual optical scheme of the czerny - turner - type spectrographs is presented in fig .[ fig001](a ) . in the hard x - ray regime , the role of the diffraction grating is played by a single crystal in asymmetric bragg diffraction scattering geometry , as shown in fig .[ fig001](b ) , or by an arrangement of several single crystals .one possible example of multi - crystal arrangements discussed in , although is not the only possibility , is shown in fig .[ fig001](c ) .the purpose of the theory is to calculate the spectral resolution and other performance characteristics of hard x - ray spectrographs , and their dependence on physical parameters of constituent optical elements . in approaching the main goal, we consider optical systems starting with simple ones , such as a focusing element and bragg reflection from a crystal , and proceed to more complex systems , such as successive bragg reflections from multiple crystals , focusing systems , focusing monochromators , and finally spectrographs .we will use a ray - transfer matrix technique to propagate paraxial x - rays through optical structures . in a standard treatment , a paraxial ray in a particular reference plane of an optical system ( the plane perpendicular to the optical axis )is characterized by its distance from the optical axis and by its angle or slope with respect to that axis .the ray is presented by a two - dimensional vector .interactions with optical elements are described by dimensional matrices .the ray vector at an input reference plane ( source plane ) is transformed to at the output reference plane ( image plane ) , where is the `` abcd '' matrix of an element placed between the reference planes .angular dispersion in bragg reflection from asymmetrically cut crystals results in deviation of the beam from the unperturbed optical axis due to a change , , in the photon energy from to .this causes `` misalignment '' of the paraxial optical system , which can be conveniently described by a matrix by adding additional coordinate to vector .table [ tab2 ] presents ray - transfer matrices used in this paper . in the first three rows , 13 ,matrices are given for simple elements of the spectrograph , such as propagation in free space , thin lens or focusing mirror , and bragg reflection from a crystal . 
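as a concrete illustration of the formalism, a minimal numerical sketch of the first three rows of table [ tab2 ] is given below. a paraxial ray is represented by the vector ( y , theta ) augmented with the photon-energy deviation as a third coordinate, so that the "misalignment" produced by angular dispersion appears as an extra matrix column. the bragg element is written with generic parameters ( asymmetry factor b , dispersion rate d , deflection sign s ); the numerical values used in the example, and the exact placement of the sign factor, are illustrative assumptions rather than the entries of table [ tab2 ].

```python
import numpy as np

# ray = (y, theta, dE): transverse offset [m], slope [rad],
# deviation of the photon energy from the nominal energy [meV]

def free_space(L):
    """drift over a distance L [m]"""
    return np.array([[1.0, L, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def thin_lens(f):
    """thin lens or focusing mirror with focal length f [m]"""
    return np.array([[1.0, 0.0, 0.0],
                     [-1.0 / f, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def bragg(b, D, s=1):
    """asymmetric bragg reflection with (signed) asymmetry factor b,
    angular dispersion rate D [rad/meV] and deflection sign s = +-1"""
    return np.array([[1.0 / b, 0.0, 0.0],
                     [0.0, b, s * D],
                     [0.0, 0.0, 1.0]])

# rightmost matrix acts first: drift over f1, collimating lens, then a crystal
f1 = 1.0
system = bragg(b=-0.5, D=10e-6) @ thin_lens(f1) @ free_space(f1)

ray_in = np.array([5e-6, 0.0, 1.0])   # 5-um source offset, on-axis, dE = 1 meV
print(system @ ray_in)                # -> transformed (y, theta, dE)
```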
in thefollowing rows ray - transfer matrices are shown for arrangements composed of several optical elements , such as successive multiple bragg reflections from several crystals , rows 45 ; focusing system , row 6 ; focusing monochromators , rows 78 ; and finally spectrographs , row 9 , on which the paper is focused .the matrices of the multi - element systems are obtained by successive multiplication of the matrices of the constituent optical elements .all the ray - transfer matrices are presented in the right - handed coordinate system with the -axis looking in the direction of the optical axis both before and after each optical element , as illustrated in fig .[ fig011 ] on an example of a bragg reflecting crystal .this absolute reference system is retained through all interactions with all optical elements .we use the convention that positive is the counterclockwise sense of angular variations of the ray slope in the plane . for bragg reflections , is understood as a small angular deviation from a nominal glancing angle of incidence to the reflecting atomic planes of the crystal ; is understood as a small angular deviation from the nominal glancing angle of reflection .the angles and define the optical axis .the angle is determined by bragg s law , while is determined by the relationship equation is a consequence , first , of the conservation of the tangential components with respect to the entrance crystal surface for the momentum , , of the incident x - ray photon and the momentum , , of the photon bragg reflected from the crystal with a diffraction vector [ see fig . [ fig001](b ) ] .it is also a consequence of the conservation of the photon energies .the reflecting atomic planes are at an asymmetry angle , , to the entrance crystal surface .the asymmetry angle , , is defined here to be positive in the geometry shown in figs .[ fig011](a ) and [ fig011](b ) , and negative in the geometry with reversed incident and reflected x - rays ( not shown ) .[ cols= " < , < , < , < " , ] with the -axis always looking in the direction of the optical axis ( dash - dotted line ) both before and after each optical element , a bragg reflecting crystal in this particular case . by definition , positive is the counterclockwise sense of angular variations of ray slopes in the plane . shownare examples of an optical element `` bragg reflection from a crystal '' with ( a ) counterclockwise deflection , for which deflection sign ; and ( b ) clockwise deflection of the reflected beam with . in the input reference system , the incident x - ray beam ( wavy vector line )impinges onto the crystal at a glancing angle of incidence to the bragg reflecting planes and with coordinate .it is reflected at a glancing angle of reflection with coordinate , as seen in the output reference system . both and change signs upon reflection ., scaledwidth=50.0% ] for the bragg - reflection matrix the nonzero elements , , are calculated from eq . as follows : by using the following variations : , , ( see caption to fig .[ fig011 ] ) .the angular dispersion rate in eq . describes how the photon energy variation changes the reflection angle at a fixed incidence angle .the deflection sign factor allows for the appropriate sign , depending on the scattering geometry .it is defined to be if the bragg reflection deflects the ray counterclockwise [ fig .[ fig011](a ) ] .it is if the reflected ray is deflected clockwise [ fig .[ fig011](b ) ] .asymmetry factor in eq. 
describes , in particular , how the beam size and beam divergence change upon bragg reflection .the ray transfer matrix for a bragg reflection from a crystal presented in row 3 of table [ tab2 ] is equivalent to that introduced by matsushita and kaminaga , with the exception for different signs of the elements and the additional deflection sign factor .positive absolute values were used in local coordinate systems in .here we use the absolute coordinate system to correctly describe transformations in multi - element optical arrangements .the choice of the absolute coordinate system is especially important to allow for inversion of the transverse coordinate and inversion of the slope when an optical ray is specularly reflected from a mirror or bragg reflected from a crystal . because of such inversion and as well as and have opposite signs , as shown in figs .[ fig011](a ) and [ fig011](b ) .a negative value of the asymmetry factor , , in the bragg reflection ray transfer matrix reflects this inversion upon each bragg reflection .the bragg diffraction matrix is similar to the ray - transfer matrix of the diffraction grating ( see , e.g. , ) .the similarity is because both the asymmetry factor , , and the angular dispersion rate , , are derived from eq . , which coinsides with the well - know in optics grating equation .the magnification factor , , used in the diffraction grating matrix is equivalent to . the ray - transfer matrix of a thin lens has a focal distance , , as a parameter .compound refractive lenses can be used for focusing and collimation in the hard x - ray regime , and described by such a matrix to a certain approximation .alternatively , ellipsoidal total reflection mirrors could be applied , which transform radiation from a point source at one focal point to a point source located at the second focal point .the ray - transfer matrix of an ellipsoidal mirror has a structure identical to the ray - transfer matrix of a thin lens ; however , , where and are the distances from the center of the section of the ellipsoid employed to the foci of the generating ellipse .the basic ray matrices given in the first three rows of table [ tab2 ] can be combined to represent systems that are more complex .the ray - transfer matrix describing successive bragg reflections from different crystals has a structure identical to that of the single bragg reflection ray - transfer matrix ; however , the asymmetry factor , , and the angular dispersion rate , , are substituted by the appropriate cumulative values , and , respectively , as defined in row 4 of table [ tab2 ] .the cumulative angular dispersion rate , , derived in the present paper coincides with the expression first derived in using an alternative approach .it should be noted that the ray - transfer matrix presented in table [ tab2 ] , row 4 , was derived neglecting propagation through free space between the crystals . with nonzero distances between the crystals and ( ) taken into account , the ray - transfer matrix of successive bragg reflections changes to , as presented in row 5 of table [ tab2 ] .most of the elements of the modified ray - transfer matrix still remain unchanged , except for elements and , which become nonzero .these elements are defined by recurrence relations in the table .nonzero distances between the crystals result in an additional change of the linear size of the source image due to an angular spread , and in an spatial transverse shift of the image ( linear dispersion ) due to a spectral variation . 
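the cumulative quantities of rows 4-5 of table [ tab2 ] follow from multiplying single-reflection matrices, as the sketch below illustrates for a chain of two crystals. the single-reflection parameters are built from the glancing angle theta and the asymmetry angle eta using the standard asymmetric-diffraction relations b = -sin(theta + eta)/sin(theta - eta) and d = -(1 + b) tan(theta)/e, which are assumed here to coincide with the (unreproduced) equations of the paper; the photon energy and the crystal angles are illustrative assumptions, and free space between the crystals is neglected as in row 4.

```python
import numpy as np

E0 = 9.13e6          # nominal photon energy [meV] (~9.13 keV, an assumption)
deg = np.pi / 180.0

def asymmetry_factor(theta, eta):
    # b = -sin(theta + eta) / sin(theta - eta), angles in rad
    return -np.sin(theta + eta) / np.sin(theta - eta)

def dispersion_rate(theta, eta):
    # D = -(1 + b) * tan(theta) / E0   [rad/meV]
    return -(1.0 + asymmetry_factor(theta, eta)) * np.tan(theta) / E0

def bragg_matrix(theta, eta, s=1):
    b = asymmetry_factor(theta, eta)
    D = dispersion_rate(theta, eta)
    return np.array([[1.0 / b, 0.0, 0.0],
                     [0.0, b, s * D],
                     [0.0, 0.0, 1.0]])

# two successive reflections: a near-backscattering "dispersing" crystal
# followed by a steep-incidence crystal with |b| > 1 (illustrative angles)
c1 = (89.0 * deg, 80.0 * deg)   # (theta, eta)
c2 = (20.0 * deg, 15.0 * deg)

M = bragg_matrix(*c2) @ bragg_matrix(*c1)   # x-rays hit c1 first, then c2
b_cum, D_cum = M[1, 1], M[1, 2]

for name, (t, e) in (("crystal 1", c1), ("crystal 2", c2)):
    print(name, "b = %.2f" % asymmetry_factor(t, e),
          "D = %.2f urad/meV" % (dispersion_rate(t, e) * 1e6))
print("cumulative: b_cum = %.2f, D_cum = %.2f urad/meV" % (b_cum, D_cum * 1e6))
```

the printed cumulative rate equals b2*d1 + d2, i.e. the dispersion of the first crystal is magnified by the asymmetry factor of the second crystal, which is the enhancement mechanism referred to above.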
in the focusing system [ see graph in row 6 of table [ tab2 ] ] a source in a reference source plane at a distance downstream a lens or an elliptical mirror is imaged onto the reference image plane at a distance upstream of the lens .the ray - transfer matrix of the focusing system is a product of the ray - transfer matrices of the free space , the thin lens , and another free space matrix .if defined in table [ tab2 ] for the focusing system parameter , the classical lens equation is valid : in this case , the system images the source with inversion and a magnification factor independent of the angular spread of rays in the source plane .rows 78 in table [ tab2 ] present ray - transfer matrices of focusing monochromators , optical systems comprising a lens or an elliptical mirror , and an arrangement of crystals , respectively .we will distinguish between two different types of focusing monochromators .if the lens is placed upstream of the crystal arrangement , we will refer to such optic as a focusing monochromator , i , presented in row 7 of table [ tab2 ] .if the lens is placed downstream , this optic will be referred to as focusing monochromator , ii , presented in row 8 .the focusing monochromator i with a single crystal was introduced in , and its performance was analyzed using the wave theory developed there .the ray - transfer matrix approach used in the present paper leads to similar results , except for diffraction effects being neglected here .we consider here a general case with a multi - crystal arrangement .the ray - transfer matrix presented in table [ tab2 ] was derived neglecting propagation through free space between the crystals .the following expressions are valid for the elements of the ray - transfer matrix of the focusing monochromator i if nonzero distances between the crystals of the monochromator are taken into account : the main difference is that the parameter has to be substituted by .the nonzero distances between the crystals also change the linear dispersion rate from to .if the focusing condition is fulfilled ( assuming the system with zero free space between crystals ) , the following relationship is valid for the focal and other distances involved in the problem : without the crystals , the image plan would be at a distance from the lens , in agreement with eq . .the presence of the crystal changes the position of the image plane to . such behavior for the focusing monochromator - i system was predicted in ; it is related to the ability of asymmetrically cut crystals to change the beam angular divergence and linear size and thus the virtual position of the source .if the focusing condition is fulfilled , eq . is valid and , as a consequence , the focusing monochromator i images a source spot of size into a spot of size for each monochromatic component .if the source is not monochromatic , its image by photons with energy is shifted transversely as a result of linear dispersion , by from the source image position produced by photons of energy .the monochromator spectral resolution can be determined from the condition that the monochromatic source image size [ eq . ] , is equal to the source image shift [ eq . ] : here and in the rest of the paper it is assumed that the source image size can be resolved by the position - sensitive detector . in a particular case of , be approximated by . 
as a result, the expression for the energy resolution can be simplified to a large dispersion cumulative rate , a small cumulative asymmetry factor , a large distance from the source to the lens , and a small source size are advantageous for better spectral resolution .this result is in agreement with the wave theory prediction , generalized to a multi - crystal monochromator system .all these results can be further generalized in a straightforward manner to account for nonzero spaces between the crystals , using eqs . .in the focusing monochromator - ii system the focusing element is placed downstream of the crystals system [ see graph in row 8 of table [ tab2 ] ] . the ray - transfer matrix presented in table [ tab2 ]is derived neglecting propagation through free space between the crystals .the following expressions are valid for the elements of the ray - transfer matrix if nonzero distances between the crystals of the monochromator are taken into account : elements , , , and have the same form as in table [ tab2 ] , but with the distance parameter replaced by .elements obtain additional correction terms .if the focusing condition is fulfilled ( we further assume an idealized case of a system with zero free space between crystals ) , the following relationship is valid for the focal and other distances involved in the problem : without the crystals , the source should be at a distance upstream of the lens to achieve focusing at a distance of downstream the lens , in agreement with eq . .the presence of the crystals changes the virtual position of the source plane , which will now be located at a distance from the lens . from the crystal , using the notations of the present paper . ] therefore , unlike the monochromator - i case , in which the crystals change the virtual image plane position , the crystals in the monochromator - ii system change the virtual source plane position . using a process similar to that used to derive these values for the monochromator - i system , we obtain the following expressions for the image size , the transverse image shift ( linear dispersion ) , and for the spectral resolution of the monochromator - ii system : interestingly, the expression for the energy resolution of the monochromator - ii system [ eq . ] is equivalent to that of the monochromator - i system given by eq . .we recall , however , that eq .was derived for a particular case of , while eq .is valid in general case .we would like to emphasize one particular interesting case . if ( i.e. , the source position coincides with the position of the crystal system ) , then , what results in zero linear dispersion rate .this property can be used to suppress linear dispersion , if it is undesirable .it often happens when a crystal monochromator is combined with a focusing system .this conclusion is strictly valid , provided nonzero distances between the crystals of the monochromator are neglected .the results derived above , can be further generalized to take the nonzero spaces between the crystals into account by applying eqs . . in this sectionwe consider spectrographs in a czerny - turner configuration with the optical scheme shown in fig .[ fig001 ] , or alternatively in the graph in table [ tab2 ] , row 9 . 
in the first step ,the source , , is imaged with the collimating mirror ( lens ) onto an intermediate reference plane at distance from the mirror .the image is calculated using the focusing system ray - transfer matrix with the assumption that the source is placed at the focal distance , , from the collimating mirror . in the second step ,transformations by the crystal optic ( dispersing element of the spectrograph ) are described by the ray - transfer matrix .we assume at this point that the distances between the crystals are negligible . in the third step ,the focusing mirror ( lens ) with a focal length placed at distance from the crystal system produces the source image in the focal plane , as described by the ray - transfer matrix .the final source image is described by a spectrograph matrix that is a product of the tree matrices from table [ tab2 ] .the spectrograph ray - transfer matrix in given in row 9 of table [ tab2 ] .remarkably , element of the spectrograph matrix is zero .this means that for a monochromatic light the spectrograph is working as a focusing system , concentrating all photons from a point source into a point image , independent of the initial angular size of the source . using matrix element , we calculate that the spectrograph projects a monochromatic source with a linear size into an image of linear size if the source is not monochromatic , the source image produced by the photons with energy is shifted transversely due to linear dispersion by from the source image by photons with energy .the spectrograph spectral resolution , , can be determined from the condition that the monochromatic source image size [ eq . ] , is equal to the source image shift [ eq . ] : a large cumulative dispersion rate , a small cumulative asymmetry factor , a large focal distance of the collimating mirror , and a small source size are advantageous for better spectral resolution .comparing eq .with eqs . and , we note that the spectral resolution of the focusing monochromators and of the spectrograph are described by the same expressions , with the only difference being that the source - lens distance is in the case of the spectrograph , and in the case of the monochromators .we therefore reach an interesting conclusion : the spectral resolution of the focusing monochromators and spectrographs can be equivalent . however , their angular acceptance and spectral efficiency , may be substantially different . the ray - transfer matrix theory does not take into account spectral and angular widths of the bragg reflections involved .they are , however , often limited typically to relatively small ev mev spectral and to mrad angular widths .the collimating optic of the spectrograph produces a beam with an angular divergence from a source with a linear size of ( independent of the angular size of the source ) .if is chosen to be smaller than the angular acceptance of the crystal optic , the spectrograph may accept photons from a source with a large angular size .the focusing monochromators , which use only one lens ( mirror ) in their optic , do not have such adaptability to sources with large angular size .focusing monochromators can work efficiently only with sources of small angular size , smaller than the angular acceptance of the crystal optic .therefore , spectrographs are preferable spectral imaging systems to work with sources of large angular size .this is exactly the requirement for the analyzer systems of the ixs instruments . 
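the statements above can be checked with a short numerical sketch. assembling the spectrograph matrix from collimator, dispersing element and focusing mirror confirms that the ( y , theta ) coupling element vanishes, and yields the monochromatic image size, the linear dispersion on the detector and the spectral resolution. the cumulative crystal parameters, the source size and the distances used below are illustrative assumptions of roughly the magnitude discussed later in the paper, not design values.

```python
import numpy as np

def drift(L):
    return np.array([[1.0, L, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

def lens(f):
    return np.array([[1.0, 0.0, 0.0], [-1.0 / f, 1.0, 0.0], [0.0, 0.0, 1.0]])

def crystal_optic(b_cum, D_cum):
    return np.array([[1.0 / b_cum, 0.0, 0.0],
                     [0.0, b_cum, D_cum],
                     [0.0, 0.0, 1.0]])

# assumed parameters (illustrative only)
dx_s  = 5e-6      # source size [m]
f1    = 1.0       # focal length of the collimating mirror [m]
f2    = 2.0       # focal length of the focusing mirror [m]
d     = 0.5       # crystal-optic-to-focusing-mirror distance [m]
b_cum = -0.5      # cumulative asymmetry factor of the dispersing element
D_cum = 25e-6     # cumulative dispersion rate [rad/meV]

M = drift(f2) @ lens(f2) @ drift(d) @ crystal_optic(b_cum, D_cum) @ lens(f1) @ drift(f1)
print("y-theta coupling element (expected 0):", M[0, 1])

image_size = abs(M[0, 0]) * dx_s          # = |b_cum| * (f2/f1) * dx_s
lin_disp   = abs(M[0, 2])                 # = f2 * |D_cum|   [m per meV]
resolution = image_size / lin_disp        # = |b_cum| * dx_s / (f1 * |D_cum|)  [meV]
print("monochromatic image size: %.1f um" % (image_size * 1e6))
print("linear dispersion: %.0f um/meV" % (lin_disp * 1e6))
print("spectral resolution: %.2f meV" % resolution)
```

with these assumed numbers the resolution evaluates to 0.1 mev with a 5-micrometer monochromatic image, illustrating the scaling delta_e = |b_cum| dx_s / ( f1 |d_cum| ) that follows from the matrix product shown.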
in the following sections we will therefore consider only spectrographs in application to ixs . the spectrograph ray - transfer matrix presented in table [ tab2 ] was derived neglecting propagation through free space between the crystals .it turns out that only matrix elements and have to be changed if nonzero distances between the crystals of the spectrograph are taken into account : however , this leaves intact the results of the analysis presented above , because these elements were not used to derive eqs . .a perfect x - ray imaging spectrograph for ixs applications should have a high spectral resolution , ( ) ; a large spectral window of imaging , ; and a large angular acceptance , mrad .czerny - turner - type spectrographs are large - acceptance - angle devices in contrast to focusing monochromators , as discussed in detail in section [ spectrograph - section ] .therefore , in this section we will consider czerny - turner - type spectrographs as spectral imaging systems for ixs spectroscopy .to achieve required spectral resolution , the `` diffraction grating '' parameters , and ; the focal length , , of the collimating optic ; and the source size , , have to be appropriately selected using eq . .we will discuss this in more detail later in this section .the key problem is how to achieve large spectral window of imaging ( i.e. , how to achieve broadband spectrographs ) . in the ray - transfer matrix theorypresented above , infinite reflection bandwidths of the optical elements have been assumed . in reality , bragg reflection bandwidths are narrow .they are determined in the dynamical theory of x - ray diffraction in perfect crystals ( see , e.g. , ) .therefore , we have to join ray - transfer matrix approach with the dynamical theory to tackle the problem of the spectrograph bandwidth . in the following sections, we will consider two types of multi - crystal dispersing elements that may be used as `` diffraction gratings '' of the broadband hard x - ray spectrographs with very high spectral resolution .czerny - turner - type hard x - ray spectrographs using the cdw optic as the dispersing element has been introduced in .three - crystal cdw - optic schematics are shown in figs .[ fig0012](b)-[fig0012](c ) , while fig .[ fig0012](a ) shows its four - crystal modification cddw comprising two d - crystal elements .( 1,0.46)(0,0 ) ( 0.0,0.00 ) the cdw optic in general and cddw optic in particular may feature the cumulative dispersion rates , , greatly enhanced by successive asymmetric bragg reflections .the enhancement is described by the equation from row 4 of table [ tab2 ] : it tells that the dispersion rate of the optic composed of the first crystals can be drastically enhanced , provided successive crystal s asymmetry factor . in the example discussed in ,the cddw optic was considered , for which the cumulative dispersion rate was enhanced almost by two orders of magnitude compared to that of a single bragg reflection . as a consequence , the ability to achieve very high spectral resolution mev was demonstrated .however , the spectral window in which that particular cddw optic permitted the imaging of x - ray spectra was only mev .here we introduce x - ray spectrographs with the dispersing elements using the cdw optic , which feature a more than an order - of - magnitude increase ( compared to the case ) in the spectral window of imaging , and simultaneously a very high spectral resolution mev . 
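to put a number on the enhancement mentioned above, the sketch below evaluates the cumulative asymmetry factor and dispersion rate of a four-reflection cddw-like chain ( collimator c , two d-crystals near backscattering , wavelength selector w ). the glancing and asymmetry angles are chosen close to the first cddw entry of table [ tab1 ] below, the photon energy is taken as roughly 9.13 kev, and the relative deflection senses of the two d-crystals are an assumption; the result should therefore be read as an order-of-magnitude check rather than a reproduction of the table.

```python
import numpy as np

E0 = 9.13e6                     # photon energy [meV] (~9.13 keV, assumed)
deg = np.pi / 180.0

def b_of(theta, eta):
    return -np.sin(theta + eta) / np.sin(theta - eta)

def D_of(theta, eta):
    return -(1.0 + b_of(theta, eta)) * np.tan(theta) / E0   # [rad/meV]

# (theta, eta, deflection sign): c-collimator, d, d, w-wavelength selector
chain = [(19.26 * deg, -17.3 * deg, +1),
         (89.5 * deg,   81.9 * deg, +1),
         (89.5 * deg,   81.9 * deg, -1),   # second d-crystal deflects oppositely (assumed)
         (19.25 * deg,  14.6 * deg, +1)]

b_cum, D_cum = 1.0, 0.0
for theta, eta, s in chain:                 # D_cum(n) = b_n * D_cum(n-1) + s_n * D_n
    b, D = b_of(theta, eta), D_of(theta, eta)
    D_cum = b * D_cum + s * D
    b_cum *= b

print("cumulative b: %.2f" % b_cum)
print("cumulative D: %.1f urad/meV" % (D_cum * 1e6))
print("single-reflection |D| [urad/meV]:",
      ["%.2f" % abs(D_of(t, e) * 1e6) for t, e, _ in chain])
```

with these assumptions the cumulative rate comes out roughly an order of magnitude above the strongest single reflection and about two orders of magnitude above the c- and w-reflections, consistent with the enhancement quoted above.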
a spectrograph with a spectral resolution mev requires a dispersing element ( de in fig .[ fig001 ] ) , featuring the ratio mev/ [ see eq . ] . herewe assume that the source size on the sample m , and focal distance m. small and large are favorable .however , a value that is too small may result in an enlargement by of the transverse size of the beam after the dispersing element , which is a too big , and therefore may require focusing optic with unrealistically large geometrical aperture .in addition , a value that is too small may result in a monochromatic image size that is too small [ see eq . ] , which may be beyond the detector s spatial resolution .because of this , we will keep ; therefore , / mev in the examples considered below . with ,the monochromatic image size is expected to be m , which can be resolved by modern position - sensitive x - ray detectors .it is also important to ensure that the angular acceptance of spectrograph s dispersing element is much larger than the angular size of the source . & & & & & & + element ( e ) & & & & & & & + [ material ] & & deg & deg & mev & & & + + c [ c * ] & ( 1 1 1 ) & -17.3 & 19.26 & 574 & 22 & -0.057 & -0.03 + d [ si ] & ( 8 0 0 ) & 81.9 & 89.5 & 27 & 341 & -1.13 & 1.63 + d [ si ] & ( 8 0 0 ) & 81.9 & 89.5 & 27 & 341 & -1.13 & -1.63 + w [ c * ] & ( 1 1 1 ) & 14.6 & 19.25 & 574 & 22 & -6.88 & 0.22 + & & & & + & mev & & & + & 2.5 & 62 & 0.5 & 24.6 + + c [ c * ] & ( 1 1 1 ) & -17.3 & 19.26 & 574 & 22 & -0.057 & -0.03 + d [ si ] & ( 8 0 0 ) & 86.0 & 89.5 & 27 & 341 & -1.29 & 3.58 + w [ c * ] & ( 1 1 1 ) & 14.55 & 19.25 & 574 & 22 & -6.79 & -0.22 + & 2.4 & -60 & -0.5 & -24.9 + + c [ c * ] & ( 1 1 1 ) & -17.3 & 19.26 & 574 & 22 & -0.057 & -0.03 + d [ si ] & ( 8 0 0 ) & 86.0 & 89.5 & 27 & 341 & -1.29 & 3.58 + w [ ge ] & ( 2 2 0 ) & 15.0 & 19.84 & 1354 & 53 & -6.77 & -0.22 + & 5.8 & -144 & -0.5 & 24.8 + + c [ c * ] & ( 1 1 1 ) & -17.3 & 19.26 & 574 & 22 & -0.057 & -0.03 + d [ si ] & ( 8 0 0 ) & 86.0 & 89.5 & 27 & 341 & -1.29 & 3.58 + w [ ge ] & ( 1 1 1 ) & 9.0 & 12.0 & 3013 & 70 & -6.86 & -0.13 + & 7.5 & -187 & -0.5 & 25 . + based on the dumond diagram analysis , the spectral bandwidth of the cddw optic can be approximated by the following expression : here values represent angular widths of bragg reflections from crystal elements ( e = c , w ) in the symmetric scattering geometry . for the cddw optic , ; ; ; and .therefore , using eq ., assuming typical designs with , and , the largest dispersing rates are achieved by d - crystals , [ see eq . ] , while the dispersion rates and of the c- and w - crystal elements can be neglected in eq .. as a result , the cumulative dispersion rate can be then approximated by and the critical for spectrograph s spectral resolution ratio [ see eq . ] by equation shows that to achieve a broadband spectrograph it is important to use the w - crystal with a large intrinsic angular width and a small asymmetry factor ; however , asymmetry factor should not be too small , in order to keep , as discussed above . favorably, the variation of does not change the spectral resolution , according to eqs . 
and .using the c - crystal with a large intrinsic angular width , and as small as possible asymmetry factor is also advantageous for achieving the large bandwidth .however , these values are optimized first of all with a purpose of achieving a large angular acceptance of the cddw optic .equations are also valid for the three - crystal cdw optic , if factors of are replaces everywhere by factors of , and is replaced by .therefore , similar conclusions regarding the bandwidth are true for the cdw optic , containing one d - crystal element .examples of multi - crystal cddw and cdw `` diffraction gratings '' , ensuring mev ( i.e. , spectral windows of imaging 25 to 75 times broader than the target spectral resolution mev ) are given in table [ tab1 ] .the spectral transmittance functions calculated using the dynamical diffraction theory are shown in fig .[ fig0012 ] .the largest increase in the width of the spectral window of imaging is achieved in those cases in which ge crystals are used for w - crystal elements , the crystals that provide the largest values .low - indexed asymmetric bragg reflections from thin diamond crystals , c * , are proposed to use for the c - crystal elements , to ensure low absorption of the beam propagating to the w - crystal upon bragg back - reflection from the d - crystal , similar to how diamond crystals were used in a hybrid diamond - silicon cddw x - ray monochromator .the above examples are not necessarily best and final .further improvements in the spectral resolution and the spectral window of imaging are still possible through changing crystal parameters and crystal material .the best strategy of increasing the spectral window of imaging is to choose a crystal material with the largest possible angular acceptance , , of the w - crystal [ eq ] .here we suggest ge , but a different material could also work ( e.g. , pbwo ) .the spectral window of imaging can be further increased by decreasing the asymmetry factor , , of the w - crystal while simultaneously keeping , and therefore , at the same low level .this should be possible as long as the transverse size of the beam which has been increased by after the cdw optic can be accepted by the focusing optic , and the monochromatic image size decreased by [ see eq . ]can be still resolved by the detector .in addition , with the bragg angle , , of the d - crystal chosen very close to the cdw optic becomes exact back - scattering . in this case , a littrow - type spectrograph in an autocollimating configuration , using common crystal for c- and w- elements , could be used .this is similar to the czerny - turner - type spectrograph but with a common collimator and focusing mirror . in the asymmetric scattering geometry with angle [ see fig .[ fig001](b ) ] the relative spectral width and angular width of the bragg reflection region become compared to the appropriate values and valid in the symmetric scattering geometry with and ( see , e.g. , ) .the spectral and angular bragg reflection widths increase by a factor compared to symmetric case values , provided bragg reflections with asymmetry parameters ( ) are used .it is therefore clear that to realize spectrographs with broadest possible spectral window of imaging it is advantageous to use asymmetric bragg reflections with asymmetry parameters in the range of . 
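a small numerical illustration of this point is given below. it assumes the usual dynamical-theory scaling in which both the angular and the relative spectral widths of the reflection grow as 1/sqrt(|b|) when reflections with |b| < 1 are used; the symmetric-geometry widths are placeholders, not values from the tables.

```python
import numpy as np

dE_sym = 25.0      # intrinsic spectral width, symmetric geometry [meV] (placeholder)
dth_sym = 20.0     # intrinsic angular width, symmetric geometry [urad] (placeholder)

for b in (1.0, 0.5, 0.25, 0.1):
    factor = 1.0 / np.sqrt(b)          # assumed widening factor for |b| < 1
    print("|b| = %-4s  ->  dE = %5.1f meV,  dtheta = %5.1f urad"
          % (b, dE_sym * factor, dth_sym * factor))
```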
in the current section, we will study a few particular cases .we start , however , with drawbacks of using bragg reflections with asymmetry factors in the range of .first , they have a smaller angular dispersion rates [ see eq . ] than those with ( compare with cases discussed in the previous section ) .second , they enlarge the transverse beam size of x - rays upon reflection by a factor , as a consequence of the phase space conservation ( see matrix elements of the bragg reflection ray - transfer matrices in rows 35 of table [ tab2 ] ) .transverse beam sizes that are too large may unfortunately require unrealistically large geometric aperture of the -focusing mirrors ( lenses ) of the spectrographs .third , bragg reflections with also reduce the monochromatic image size [ see eq . ] and thus may pose stringent requirement on the detector s spatial resolution , which should be better than . as example, we consider a multi - crystal dispersing element of the czerny - turner - type spectrographs composed of identical crystals in the ( + + ... ) scattering geometry ( , , , , ... ) .all crystals are assumed to have the same angular dispersion rate [ see eq .] , and the same asymmetry factors [ see eq . ] . using the equations from row 4 of table [ tab2 ], we obtain for the cumulative dispersion rate , and the asymmetry factor of the multi - crystal ( + + ... ) dispersing element : increasing the number of crystals , , with asymmetry parameters , results in a rapid decrease of ; however , this does not increase as much .therefore , in the following examples we restrict ourselves to considering solely two - crystal ( + )-type dispersing elements , as shown schematically in figure [ fig008 ] .( 1,0.33)(0,0 ) ( 0.0,0.00 ) ) .identical bragg reflections with asymmetry parameter are used , with crystal parameters presented in tables [ tab3](a ) and [ tab3](b ) , and with spectral transmittance functions presented in figs .[ fig0017](a ) and [ fig0017](b ) , respectively ., title="fig:",scaledwidth=50.0% ] for a spectrograph with two - crystal ( + )-type dispersing elements , the expressions for the spectral resolution [ see eqs . and - ] and for the monochromatic image size [ see eqs . and ] on the detector become bragg reflections with close to , small , small source size , and large focal distance , , are the factors that improve the energy resolution of the spectrograph . a large focal distance , , of the spectrograph s focusing mirror helps to mitigate the requirement for the spatial resolution of the position sensitive detector .the spectral window of imaging and the angular acceptance of the spectrograph for each monochromatic spectral component is given by eq .in the first approximation . using bragg reflections with a large relative spectral widths , , anda small is advantageous for achieving a broad spectral window of imaging . in the following , we consider two examples of the spectrographs in the czerny - turner configuration with two - crystal ( + )-type dispersing elements . the first one , which is appropriate for uhrixs applications , is studied in section [ two - cristal - uhrix ] . the second example , relevant to high - resolution cu k - edge rixs applications ,is discussed in section [ two - cristal - rixs ] .( 1,0.28)(0,0 ) ( 0.0,0.00 ) of the ( + )-type two - crystal dispersing elements de , schematically shown in fig . 
[fig008](a ) and ( b ) , respectively .transmittance is calculated using the dynamical theory of bragg diffraction with crystal parameters from tables [ tab3](a ) and [ tab3](b ) .angular spread of incident x - rays is in both cases .black spectral lines with a 0.1-mev width in ( a ) and with a 1-mev width in ( b ) represent the spectral resolution of the spectrographs in the particular configurations highlighted by magenta dots in figs .[ fig009 ] and [ fig009899 ] , respectively.,title="fig:",scaledwidth=50.0% ] ( 1,1.0)(0,0 ) ( 0.0,-0.02 ) ) : ( a ) spectral resolution ; ( b ) image to source size ratio ; and ( c ) the lateral beam size enlargement by crystal optics , shown as a function of bragg s angle and the asymmetry angle .a particular case is presented for m , m , and m. magenta dots highlight the case with the spectrograph resolution mev , , and , attainable with crystal parameters presented in table [ tab3](a ) .the ( 008 ) bragg reflections from si crystals enable a `` diffraction grating '' with a angular acceptance and mev imaging window for kev x - rays ., title="fig:",scaledwidth=50.0% ] & & & & & & + element ( e ) & & & & & & & + [ material ] & & deg & deg & mev & & & + + d [ si ] & ( 8 0 0 ) & 88 . & 89 . & 27 & 169 & -0.34 & -4.2 + d [ si ] & ( 8 0 0 ) & 86 . & 89 . & 27 & 169 & -0.34 & 4.2 + & & & & + & mev & & & + & 47 & 266 & 0.11 & 5.63 + + d [ ge ] & ( 3 3 7 ) & 83.45 & 86.05 & 41.8 & 67 & -0.25 & -1.2 + d [ ge ] & ( 3 7 7 ) & 83.45 & 86.05 & 41.8 & 67 & -0.25 & + 1.2 + & 85 & 134 & -0.06 & 1.5 + here , as in section [ cdw - uhrix ] , we study possible solutions to broadband spectrographs for ixs applications that require an ultra - high spectral resolution of mev , and a momentum transfer resolution nm discussed in section [ intro ] .we use eqs . and to plot two - dimensional ( 2d ) graphs with spectrograph characteristics as a function of bragg s angle and the asymmetry angle : spectral resolution in fig .[ fig009](a ) , image to source size ratio in fig .[ fig009](b ) , and the lateral beam size enlargement by crystal optics in fig .[ fig009](c ) .a particular case is considered with m , m , and m. configurations with equal energy resolution are highlighted by black lines for some selected values .magenta dots highlight a specific case with the spectral resolution mev , , and , achieved by selecting and ( ) .specifically , the ( 008 ) bragg reflection of x - rays with average photon energy kev from si crystals [ see table [ tab3](a ) , figs .[ fig0017](a ) and [ fig008](a ) ] enable a `` diffraction grating '' with a spectral window of imaging mev and angular acceptance for each monochromatic component .the angular spread of x - rays incident on the crystal is , independent of the angular spread of x - rays incident on the collimating optic .this number is much less than the crystal angular acceptance , which makes the optic very efficient .the expected monochromatic image size is m , which can be resolved by the state - of - the - art position sensitive x - ray detectors with single photon sensitivity .a very good energy resolution of mev simultaneously requires a very high momentum transfer resolution -nm to resolve photon - like excitations in disordered systems ( see fig .[ fig000 ] ) .this limits the angular acceptance on the collimating optic to mrad , where nm is the momentum of a photon with energy kev .the geometrical aperture of the collimating optic therefore can be small , mm , assuming m. 
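maps like those in fig. [ fig009 ] can be generated with the short sketch below. for two identical reflections arranged so that their dispersion contributions add, the cumulative values are taken as |b_cum| = b^2 and |d_cum| = |d| ( 1 + |b| ), and the three plotted quantities ( spectral resolution , image-to-source ratio , lateral beam enlargement ) follow from the expressions discussed above. the photon energy, source size and focal lengths are assumptions, and the scattering-geometry sign conventions are reduced to magnitudes, so the numbers are indicative only.

```python
import numpy as np

E0  = 9.13e6        # photon energy [meV] (~9.13 keV, assumed)
dxs = 5e-6          # source size [m] (assumed)
f1, f2 = 1.0, 2.0   # collimating / focusing focal lengths [m] (assumed)
deg = np.pi / 180.0

def two_crystal(theta, eta):
    """grazing-emergence orientation, |b| < 1, two identical (+,+) reflections."""
    b = np.sin(theta - eta) / np.sin(theta + eta)       # |b| < 1
    D = (1.0 - b) * np.tan(theta) / E0                  # single-reflection |D| [rad/meV]
    b_cum = b * b
    D_cum = D * (1.0 + b)                               # contributions assumed to add
    dE    = b_cum * dxs / (f1 * D_cum)                  # spectral resolution [meV]
    ratio = b_cum * f2 / f1                             # image size / source size
    blow  = 1.0 / b_cum                                 # lateral beam enlargement
    return dE, ratio, blow

print(" theta   eta     dE[meV]  image/source  enlargement")
for theta_d, eta_d in [(89.0, 88.0), (89.0, 86.0), (88.0, 85.0), (86.0, 80.0)]:
    dE, ratio, blow = two_crystal(theta_d * deg, eta_d * deg)
    print("%5.1f  %5.1f     %6.2f       %5.2f        %5.1f"
          % (theta_d, eta_d, dE, ratio, blow))
```

the first row of the printout reaches a resolution of about 0.1 mev at the cost of a roughly ninefold lateral beam enlargement, which is the trade-off taken up next.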
the geometrical aperture of the focusing optic should be much larger , because the beam size increases by a factor of to mm .however , optics with such apertures are feasible , in particular if advanced grazing incidence mirrors are used .we note also that the cumulative dispersion rate of the two - crystal dispersing element is / mev .hence , the total angular spread of x - rays after the dispersion element within the imaging window mev is , which can be totally captured by the state - of - the - art mirrors .it should be noted that focusing is required only in one dimension , like for the spectrographs discussed in section [ cdw - uhrix ] .this property can be used to simultaneously image the spectrum of x - rays along the -axis and the momentum transfer distribution along the -axis ( see fig . [ fig011 ] ) , using a 2d position sensitive detector .the spectrograph with the two - crystal ( + )-type dispersing element introduced in the present section has almost an order - of - magnitude broader spectral window of imaging compared to that of the spectrograph with the cdw dispersing element , as discussed in section [ cdw - uhrix ] .however , its realization requires a focusing mirror with larger geometric aperture and larger focal distance .( 1,1.04)(0,0 ) ( 0.0,0.00 ) for a particular case of m , m , and m. the ( 337 ) bragg reflection of the -kev x - rays from ge crystals with crystal parameters presented in table [ tab3](b ) provide a `` diffraction grating '' featuring a 133- angular acceptance and a 85-mev bandwidth . in this case, the spectrograph resolution should be mev , , and ., title="fig:",scaledwidth=50.0% ] having the bragg angle as close as possible to is advantageous , because this allows for better spectral resolution [ see eq . ] and simultaneously smaller beam size enlargement , , by the dispersing element , and not too much reduction of the image to source size ratio .this property was used in the example of the spectrograph intended for ixs applications discussed in the previous section ( see fig .[ fig009 ] ) .rixs , unlike ixs , requires specific photon energies , which are defined by transitions between specific atomic states . as a consequence ,there is usually limited flexibility in the choice of bragg s angle magnitude . here, we show that in such conditions high - resolution hard x - ray spectrographs in the czerny - turner configuration are also feasible , yet , with certain limitations . as an example , we consider a spectrograph for cu k - edge rixs applications , which requires x - rays with photon energies kev .figure [ fig008](b ) shows a schematic of the spectrograph s two - crystal ( + )-type dispersing element ; fig .[ fig009899 ] displays properties of the spectrograph as a function of bragg and asymmetry angles .magenta dots highlight a specific configuration that results in a mev spectral resolution , and a mev spectral window of imaging .table [ tab3](b ) presents crystal parameters in this configuration .the spectral resolution of the selected rixs spectrograph is an order of magnitude inferior to that of the ixs spectrograph discussed in the previous section .the -mev value is first of all a compromise between as small as possible spectral resolution and a beam cross - section that is not overly enlarged by the dispersing element . 
in our case the enlargement is already significant : and will require focusing optic with large geometric aperture .an overly large deviation of the bragg angle from ( imposed by ge crystal properties and fixed photon energy ) does not allow for smaller beam size .second , to ensure the larger angular acceptance of the spectrograph important for rixs applications , the focal distance of the collimating optic , which is critical for better spectral resolution [ see eqs . and ]is chosen , m , much smaller that in the ixs case .the rixs spectrograph introduced here features an order - of - magnitude better spectral resolution compared to the resolution available with the state - of - the - art rixs spectrometers . such high resolution could be useful in studying collective excitations in condensed matter systems in various fields , primarily in high- superconductors .we have developed a theory of hard x - ray czerny - turner - type spectrographs using bragg reflecting crystals in multi - crystal arrangements as dispersing elements . using the ray - transfer matrix technique , spectral resolution and other performance characteristics of spectrographsare calculated as a function of the physical parameters of the constituent optical elements .the dynamical theory of x - ray diffraction in crystals is applied to calculate spectral windows of imaging .several optical designs of hard x - ray spectrographs with broadband spectral windows of imaging are proposed and their performance is analyzed .specifically , spectrographs with an energy resolution of mev are shown to be feasible for ixs spectroscopy applications .dispersing elements based on cdw optic may provide spectral windows of imaging , mev and compact optical design . two - crystal ( + )-type dispersing elements may provide much larger spectral windows of imaging mev. however , this may require focusing optic with a large geometrical aperture , and a large focal length .in another example , a spectrograph with a 1-mev spectral resolution and -mev spectral window of imaging is introduced for cu k - edge rixs applications .ray - transfer matrices derived in the paper for optics comprising focusing , collimating , and multiple bragg - reflecting crystal elements can be used for the analysis of other x - ray optical systems , including synchrotron radiation beamline optics , or x - ray free - electron laser oscillator cavities .
|
a spectrograph is an optical instrument that disperses photons of different energies into distinct directions and space locations , and images photon spectra on a position - sensitive detector . spectrographs consist of collimating , angular dispersive , and focusing optical elements . bragg reflecting crystals arranged in an asymmetric scattering geometry are used as the dispersing elements . a ray - transfer matrix technique is applied to propagate x - rays through the optical elements . several optical designs of hard x - ray spectrographs are proposed and their performance is analyzed . spectrographs with an energy resolution of 0.1 mev and a spectral window of imaging up to a few tens of mevs are shown to be feasible for inelastic x - ray scattering ( ixs ) spectroscopy applications . in another example , a spectrograph with a 1-mev spectral resolution and -mev spectral window of imaging is considered for cu k - edge resonant ixs ( rixs ) .
|
in wireless communication networks , multiple users often share a common channel and contend for access . to resolve the contention problem , many different medium access control ( mac ) protocolshave been devised and used .recently , the selfish behavior of users in mac protocols has been studied using game theory. there have been attempts to understand the existing mac protocols as the local utility maximizing behavior of selfish users by reverse - engineering the current protocols ( e.g. , ) .it has also been investigated whether existing protocols are vulnerable to the existence of selfish users who pursue their self - interest in a non - cooperative manner .non - cooperative behavior often leads to inefficient outcomes .for example , in the 802.11 distributed mac protocol , dcf , and its enhanced version , edcf , competition among selfish users can lead to an inefficient use of the shared channel in nash equilibria .similarly , a prisoner s dilemma phenomenon arises in a non - cooperative game for a generalized version of slotted - aloha protocols .in general , if a game has nash equilibria yielding low payoffs for the players , it will be desirable for them to transform the game to extend the set of equilibria to include better outcomes .the same idea can be applied to the game played by selfish users who compete for access to a common medium . if competition among selfish users brings about a network collapse , then it is beneficial for them to design a device which provides incentives to behave cooperatively .game theory discusses three types of transformation : 1 ) games with contracts , 2 ) games with communication , and 3 ) repeated games .a game is said to be with contracts if the players of the game can communicate and bargain with each other , and enforce the agreement with a binding contract .the main obstacle to apply this approach to wireless networking is the distributed nature of wireless networks . to reach an agreement, users should know the network system and be able to communicate with each other .they should also be able to enforce the agreed plan .a game with communication is the one in which players can communicate with each other through a mediator but they can not write a binding contract . in this case , a correlated equilibrium is predicted to be played . studies correlated equilibria using a coordination mechanism in a slotted aloha - type scenario . unlike the first approach , this does not require that the actions of players be enforceable. however , to apply this approach to the medium access problem , signals need to be conveyed from a mediator to all users , and users need to know the correct meanings of the signals .a repeated game is a dynamic game in which the same game is played repeatedly by the same players over finite or infinite periods. repeated interactions among the same players enable them to sustain cooperation by punishing deviations in subsequent periods .a main challenge of applying the idea of repeated games to wireless networks is that the users should keep track of their past observations and be able to detect deviations and to coordinate their actions in order to punish deviating users .besides the three approaches above , another approach widely applied to communication networks is pricing .a central entity charges prices to users in order to control their utilization of the network .nash equilibria with pricing schemes in an aloha network are analyzed in . 
implementing a pricing schemerequires the central entity to have relevant system information as well as users benefits and costs , which are often their private information . eliciting private informationoften results in an efficiency loss in the presence of the strategic behavior of users as shown in . even in the case where the entity has all the relevant information, prices need to be computed and communicated to the users . in this paper , we propose yet another approach using a stackelberg game .we introduce a network manager as an additional user and make him access the medium according to a certain rule . unlike the stackelberg game of in which the manager ( the leader ) chooses a certain strategy before users ( followers ) make their decisions , in the proposed stackelberg game he sets an intervention rule first and then implements his intervention after users choose their strategies .alternatively , the proposed stackelberg game can be considered as a generalized stackelberg game in which there are multiple leaders ( users ) and a single follower ( the manager ) and the leaders know the response of the follower to their decisions correctly . with appropriate choices of intervention rules ,the manager can shape the incentives of users in such a way that their selfish behavior results in cooperative outcomes . in the context of cognitive radio networks, proposes a related stackelberg game in which the owner of a licensed frequency band ( the leader ) can charge a virtual price for using the frequency band to cognitive radios ( followers ) . the virtual price signals the extent to which cognitive radios can exploit the licensed frequency band .however , since prices are virtual , selfish users may ignore prices when they make decisions if they can gain by doing so . on the contrary , in the stackelberg game of this paper ,the intervention of the manager is not virtual but it results in the reduction of throughput , which selfish users care about for sure .hence , the intervention method provides better grounds for the network manager to deal with the selfish behavior of users . and use game theoretic models to study random access .their approach is to capture the information and implementation constraints using the game theoretic framework and to specify utility functions so that a desired operating point is achieved at a nash equilibrium .if conditions under which a certain type of dynamic adjustment play converges to the nash equilibrium are met , such a strategy update mechanism can be used to derive a distributed algorithm that converges to the desired operating point . however, this control - theoretic approach to game theory assumes that users are obedient . in this paper ,our main concern is about the selfish behavior of users who have innate objectives . because we start from natural utility functions and affect them by devising an intervention scheme , we are in a better position to deal with selfish users .furthermore , the idea of intervention can potentially lead to a distributed algorithm to achieve a desired operating point . by formulating the medium access problem as a non - cooperative game ,we show the following main results : 1 . 
because the nash equilibria of the non - cooperative game are inefficient and/or unfair , we transform the original game into a stackelberg game , in which any feasible outcome with independent transmission probabilities can be achieved as a stackelberg equilibrium .a particular form of a stackelberg intervention strategy , called total relative deviation ( trd)-based intervention , is constructed and used to achieve any feasible outcome with independent transmission probabilities .the additional amount of information flows required for the transformation is relatively moderate , and it can be further reduced without large efficiency losses . the rest of this paper is organized as follows .section 2 introduces the model and formulates it as a non - cooperative game called the contention game .nash equilibria of the contention game are characterized , and it is shown that they typically yield suboptimal performance . in section 3 , we transform the contention game into another related game called the stackelberg contention game by introducing an intervening manager .we show that the manager can implement any transmission probability profile as a stackelberg equilibrium using a class of intervention functions .section 4 discusses natural candidates for the target transmission probability profile selected by the manager . in section 5, we discuss the flows of information required for our results and examine the implications of some relaxations of the requirements on performance .section 6 provides numerical results , and section 7 concludes the paper .we consider a simple contention model in which multiple users share a communication channel as in .a user represents a transmitter - receiver pair .time is divided into slots of the same duration .every user has a packet to transmit and can send the packet or wait .if there is only one transmission , the packet is successfully transmitted within the time slot .if more than one user transmits a packet simultaneously in a slot , a collision occurs and no packet is transmitted .we summarize the assumptions of our contention model . 1 .a fixed set of users interacts over a given period of time ( or a session ) .time is divided into multiple slots , and slots are synchronized .a user always has a packet to transmit in every slot .the transmission of a packet is completed within a slot .a user transmits its packet with the same probability in every slot .there is no adjustment in the transmission probabilities during the session .this excludes coordination among users , for example , using time division multiplexing .there is no cost of transmitting a packet .we formulate the medium access problem as a non - cooperative game to analyze the behavior of selfish users .we denote the set of users by . because we assume that a user uses the same transmission probability over the entire session , the strategy of a user is its transmission probability , and we denote the strategy of user by and the strategy space of user by ] if .suppose that user chooses .then it is playing its best response while other users are also playing their best responses , which establishes the sufficiency part . to prove the necessity part ,suppose that is a nash equilibrium and for all .since , is not a best response to , which is a contradiction . 
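the best-response structure described above is easy to verify numerically. taking the per-slot payoff of user i as its valuation times the probability that it alone transmits ( no transmission cost , a collision destroys all packets ), the sketch below computes best responses on a grid and checks the equilibrium condition; the valuations are arbitrary.

```python
import numpy as np

v = np.array([1.0, 2.0, 1.5])            # valuations (arbitrary)
grid = np.linspace(0.0, 1.0, 101)        # candidate transmission probabilities

def payoff(i, p):
    """expected payoff of user i: v_i * p_i * prod_{j != i} (1 - p_j)."""
    others = np.prod([1.0 - p[j] for j in range(len(p)) if j != i])
    return v[i] * p[i] * others

def best_responses(p):
    br = []
    for i in range(len(p)):
        vals = [payoff(i, np.r_[p[:i], g, p[i + 1:]]) for g in grid]
        br.append(grid[np.argmax(vals)])
    return np.array(br)

def is_nash(p, tol=1e-9):
    return all(payoff(i, p) >= max(payoff(i, np.r_[p[:i], g, p[i + 1:]])
                                   for g in grid) - tol
               for i in range(len(p)))

print(best_responses(np.array([0.3, 0.3, 0.3])))      # -> all 1: transmit for sure
p_all = np.array([1.0, 1.0, 1.0])
print(is_nash(p_all), [payoff(i, p_all) for i in range(3)])
```

the first line shows that, whenever the other users transmit with probability below one, the unique best response is to transmit for sure; the second line shows that the all-transmit profile is a nash equilibrium in which every user receives zero payoff, which is the inefficiency referred to above.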
' '' '' if a nash equilibrium has only one user such that , then and for all where can be as large as .if there are at least two users with the transmission probability equal to 1 , then we have for all .let , u_j = 0 \ ; \forall j \neq i \} ] , which gives his transmission probability when the strategy profile of the users is . can be interpreted as the level of intervention or punishment by the manager when the users choose .note that the level of intervention by the manager is the same for every user .we assume that the manager has a specific `` target '' strategy profile , that his transmission has no value to him ( as well as to others ) , and that he is benevolent .one representation of his objective is the payoff function of the following form : this payoff function means that the manager wants the users to operate at the target strategy profile with the minimum level of intervention .we call the transformed game the _ stackelberg contention game _ because the manager chooses his strategy before the users make their decisions on the transmission probabilities . in this sense, the manager can be thought of as a stackelberg leader and the users as followers .the specific timing of the stackelberg contention game can be outlined as follows : 1 .the network manager determines his intervention function .2 . knowing the intervention function of the manager , the users choose their transmission probabilities simultaneously .observing the strategy profile of the users , the manager determines the level of intervention using his intervention function .the transmission probabilities of the manager and the users determine their payoffs .timing 1 happens before the session starts .timing 2 occurs at the beginning of the session whereas timing 3 occurs when the manager knows the transmission probabilities of all the users .therefore , there is a time lag between the time when the session begins and when the manager starts to intervene .payoffs can be calculated as the probability of successful transmission averaged over the entire session , multiplied by valuation .if the interval between timing 2 and timing 3 is short relative to the duration of the session , the payoff of user can be approximated as the payoff during the intervention using the following payoff function : the transformation of the contention game into the stackelberg contention game is schematically shown in figure 2 .the figure shows that the main role of the manager is to set the intervention rule and to implement it .the users still behave non - cooperatively maximizing their payoffs , and the intervention of the manager affects their selfish behavior even though the manager does neither directly control their behavior nor continuously communicate with the users to convey coordination or price signals . in the stackelberg routing game of ,the strategy spaces of the manager and a user coincide .if that is the case in the stackelberg contention game , i.e. , if the manager chooses a single transmission probability before the users choose theirs , then this intervention only makes the channel lossy but it does not provide incentives for users not to choose the maximum possible transmission probability . 
hence , in order to provide an incentive to choose a smaller transmission probability , the manager needs to vary his transmission probability depending on the transmission probabilities of the users .a stackelberg game is analyzed using a backward induction argument .the leader predicts the nash equilibrium behavior of the followers given his strategy and chooses the best strategy for him .the same argument can be applied to the stackelberg contention game .once the manager decides his strategy and commits to implement his transmission probability according to , the rest of the stackelberg contention game ( timing 24 ) can be viewed as a non - cooperative game played by the users .given the intervention function , the payoff function of user can be written as in essence , the role of the manager is to change the non - cooperative game that the users play from the contention game to a new game , which we call the _ contention game with intervention . understanding the non - cooperative behavior of the users given the intervention function , the manager will choose that maximizes his payoff .we now define an equilibrium concept for the stackelberg contention game .an intervention function of the manager and a profile of the transmission probabilities of the users constitutes a _ stackelberg equilibrium _ if ( i ) is a nash equilibrium of the contention game with intervention and ( ii ) and . combining ( i ) and ( ii ) , an equivalent definition is that ( is a stackelberg equilibrium if is a nash equilibrium of and .condition ( i ) says that once the manager chooses his strategy , the users will play a nash equilibrium strategy profile in the resulting game , and condition ( ii ) says that expecting the nash equilibrium strategy profile of the users , the manager chooses his strategy that achieves his objective .as we have mentioned earlier , the manager can choose only one level of intervention that affects the users equally .a question that arises is which strategy profile the manager can implement as a stackelberg equilibrium with one level of intervention for every user .we answer this question constructively .we propose a specific form of an intervention function with which the manager can attain any strategy profile with for all .the basic idea of this result is that because the strategy of the manager is not a single intervention level but a function whose value depends on the strategies of the users , he can discriminate the users by reacting differently to their transmission probabilities in choosing the level of intervention .therefore , even though the realized level of intervention is the same for every user , the manager can induce the users to choose different transmission probabilities . to construct such an intervention function , we first define the _ total relative deviation ( trd ) of from _ by since determines the transmission probability of the manager , its range should lie in ] is used to obtain the `` trimmed '' value of trd between 0 and 1 .the trd - based intervention can be interpreted in the following way .the manager sets the target at . 
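a minimal sketch of this rule ( in python ) is given below ; it assumes the natural reading of the elided formula , namely that the trd of a profile p from the target is the sum of the relative deviations ( p_i - target_i ) / target_i , and that the trimming simply clips this sum to the unit interval . the target profile and the numerical values are illustrative only :

```python
import numpy as np

def trd(p, target):
    """Total relative deviation of the profile p from the target profile."""
    p, target = np.asarray(p, float), np.asarray(target, float)
    return float(np.sum((p - target) / target))

def g_trd(p, target):
    """TRD-based intervention level, trimmed to the interval [0, 1]."""
    return float(np.clip(trd(p, target), 0.0, 1.0))

def payoff(i, p, target):
    """Payoff of user i in the contention game with TRD-based intervention."""
    p = np.asarray(p, float)
    return p[i] * (1.0 - g_trd(p, target)) * np.prod(1.0 - np.delete(p, i))

target = np.array([0.5, 0.3])
print(g_trd([0.5, 0.3], target))   # 0.0: no intervention at the target profile
print(g_trd([0.9, 0.3], target))   # 0.8: user 0's upward deviation is punished
for x in (0.3, 0.5, 0.7, 1.0):     # user 0's payoff peaks at its target probability
    print(x, payoff(0, [x, 0.3], target))
```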
as long as the users choose small transmission probabilities so that the trd of from does not exceed zero, the manager does not intervene .if it is larger than zero , the manager will respond to a one - unit increase in by increasing by units until the trd reaches 1 .the manager determines the degree of punishment based on the target transmission probability profile .if he wants a user to transmit with a low probability , then his punishment against its deviation is strong .[ prop1]proposition constitutes a stackelberg equilibrium . _* proof * _ : we need to check two things .first , is a nash equilibrium of .second , .it is straightforward to confirm the second . to show the first , the payoff function of user given others strategies is it can be seen from the above expression that is increasing on , reaches a peak at , is decreasing on , and then stays at 0 on .therefore , user s best response to is for all , and thus constitutes a nash equilibrium of the contention game with trd - based intervention , . ' '' '' any feasible payoff profile of the contention game with for all can be achieved by a stackelberg equilibrium .corollary 1 resembles the folk theorem of repeated games in that it claims that any feasible outcome can be attained as an equilibrium .incentives not to deviate from a certain operating point are provided by the manager s intervention in the stackelberg contention game , while in a repeated game players do not deviate since a deviation is followed by punishment from other players . in proposition 2, we have seen that is a nash equilibrium of the contention game with trd - based intervention .however , if other nash equilibria exist , the outcome may be different from the one that the manager intends .in fact , any strategy profile with for at least one is still a nash equilibrium of .the following proposition characterizes the set of nash equilibria of that are different from those of .[ prop1]proposition consider a strategy profile with for all . is a nash equilibrium of the contention game with trd - based intervention if and only if either or _ * proof * _ : see appendix a. ' '' '' transforming to does not eliminate the nash equilibria of the contention game . rather , the set of nash equilibria expands to include two classes of new equilibria .the first nash equilibrium of proposition 3 is the one that the manager intends the users to play .the second class of nash equilibria are those in which the sum of relative deviations of other users is already too large that no matter how small transmission probability user chooses , the level of intervention stays the same at 1 .since is chosen to satisfy for all and satisfies , it follows that for all ., we will use instead of when there is no confusion . ] for the second class of nash equilibria in proposition 3 , for all because .therefore , the payoff profile of the second class of nash equilibria is _ pareto dominated _ by that of the intended nash equilibrium in that the intended nash equilibrium yields a higher payoff for every user compared to the second class of nash equilibria .the same conclusion holds for nash equilibria with more than one user with transmission probability 1 because every user gets zero payoff . finally , the remaining nash equilibria are those with exactly one user with transmission probability 1 .suppose that .then the highest payoff for user is achieved when for all . 
denoting this strategy profile by , the payoff profile of pareto dominated by that of if .we have seen that there are multiple nash equilibria of the contention game with trd - based intervention and that the nash equilibrium in general yields higher payoffs to the users than other nash equilibria .if the users are aware of the welfare properties of different nash equilibria , they will tend to select .suppose that the users play the second class of nash equilibria in proposition 3 for some reason .if the stackelberg contention game is played repeatedly and the users anticipate that the strategy profile of the other users will be the same as that of the last period , then it can be shown that under certain conditions there is a sequence of intervention functions convergent to that the manager can employ to have the users reach the intended nash equilibrium , thus approaching the stackelberg equilibrium .[ prop1]proposition suppose that at the manager chooses the intervention function and that the users play a nash equilibrium of the second class . without loss of generality ,the users are enumerated so that the following holds : suppose further that for each , either or holds . at ; define assume that the manager employs the intervention function ^ 1 ] if there does not exist ^s ] .we assume that each interval contains its right end point . for simplicity, we will consider intervals of the same length .that is , , \left(\frac{1}{m } , \frac{2}{m}\right ] , \ldots , \left(\frac{m-1}{m } , 1 \right ] \right\} ] for all .suppose that the manager only observes which interval in each belongs to .in other words , the manager observes instead of such that . in this case , the level of intervention is calculated based on rather than .it means that given , would be the same for any if and belong to the same .since any is weakly dominated by , the users will choose their transmission probabilities at the right end points of the intervals in .this in turn will affect the choice of a target by the manager .the manager will be restricted to choose such that for all .then the manager can implement with the intervention function , where is set equal to . in summary , the quantized observation on restricts the choice of by the manager from to . .( b ) . ,scaledwidth=70.0% ] figure 3 shows the payoff profiles that can be achieved by the manager with quantized observation .when the number of intervals is moderately large , the manager has many options near or on the pareto efficiency boundary .we modify the stackelberg contention game to analyze the case where the manager observes noisy signals of the transmission probabilities of the users .let ] , independently over .suppose that the manager chooses a target such that ] instead of when the manager observes . if is a nash equilibrium of the contention game with intervention when is perfectly observable to the manager and = g(\mathbf{p}) ] whereas .since is kinked at , the noise in will distort the incentives of the users to choose .the manager can implement his target at the expense of intervention with a positive probability . if the manager adopts the following intervention function where , then is a nash equilibrium of the contention game with intervention , but the average level of intervention at is = g(\tilde{\mathbf{p } } ) = \frac{\epsilon q}{1 + \epsilon q } > 0,\end{aligned}\ ] ] which can be thought of as the efficiency loss due to the noise in observations . .( b ) . 
,scaledwidth=70.0% ] figure 4 illustrates the set of payoff profiles that can be achieved with the intervention function given by ( [ eq : modtrd ] ) .as the size of the noise gets smaller , the set expands to approach the pareto efficiency boundary .we consider the case where the manager can observe only the frequency of the slots that are not accessed by any user .if the users transmit their packets according to , then the manager observes only the aggregate probability . in this scenario ,the intervention function that the manager chooses has to be a function of , and this implies that the manager can not discriminate among the users .the trd - based intervention function allows the manager to use different reactions to each user s deviation . in the effective region where the trd is between 0 and 1 ,one unit increase in results in units increase in . however , this kind of discrimination through the structure of the intervention function is impossible when the manager can not observe individual transmission probabilities .this limitation forces the manager to treat the users equally , and the target has to be chosen such that for all .if the manager uses the following intervention function , ^ 1\end{aligned}\ ] ] then he can implement with as a stackelberg equilibrium .hence , if the manager only observes the aggregate probability , this prevents him from setting the target transmission probabilities differently across users . .( b ) heterogeneous users with and .,scaledwidth=70.0% ] figure 5 shows the payoff profiles achieved with symmetric strategy profiles , which can be implemented by the manager who observes the aggregate probability .we now relax requirement u and assume that user can observe only the aggregate probability . even though the users do not know the exact form of the intervention function of the manager , they are aware of the dependence of on their transmission probabilities and try to model this dependence based on their observations . specifically , user builds a conjecture function \rightarrow [ 0,1] ] such that with at least one inequality strict .first , notice that setting and will violate one of the two inequalities .the inequality for user will not hold if , and the one for user will not hold if . hence ,both and are necessary to have both inequalities satisfied at the same time .we consider four possible cases. * case 1 .* and since , ( [ eq : useri ] ) is violated .* case 2 . * and equation ( [ eq : userj ] ) is violated .* case 3 . * and since , .hence , ( [ eq : useri ] ) and ( [ eq : userj ] ) become we consider the contour curves of and going through in the -plane .the slope of the contour curve of at is and that of is .there is no area of mutual improvement if and only if which is equivalent to .* and since , as long as .hence , ( [ eq : useri ] ) and ( [ eq : userj ] ) become the slope of the contour curve of at is and that of is therefore , there is no that satisfies ( [ eq : useri ] ) and ( [ eq : userj ] ) at the same time .the `` if '' part is trivial because a strategy profile that is coalition - proof with respect to the grand coalition is pareto efficient . to establish the `` only if '' part , we will prove that if for a given strategy profile there exists a coalition that can improve the payoffs of its members then its deviation will not hurt other users outside of the coalition , which shows that the original strategy profile is not pareto efficient . 
consider a strategy profile and a coalition that can improve upon by deviating from to .let the transmission probability of the manager after the deviation by coalition .since choosing instead of yields higher payoffs to the members of , we have for all with at least one inequality strict .we want to show that the members not in the coalition do not get lower payoffs as a result of the deviation by , that is , suppose .we can see that and for all because the right - hand side of ( [ eq : coal ] ) is strictly positive . combining this inequality with ( [ eq : coal ] ) yields for all , which implies .we can write for some for . then . ([ eq : coal ] ) can be rewritten as for all .simplifying this gives for all .summing these inequalities up over , we get where is the number of the members in .this inequality simplifies to , which is a contradiction .99 j .- w .lee , a. tang , j. huang , m. chiang , and a. r. calderbank , `` reverse - engineering mac : a non - cooperative game model , '' _ ieee journal on selected areas in communications _25 , no . 6 , pp . 11351147 , 2007 .g. tan and j. guttag , `` the 802.11 mac protocol leads to inefficient equilibria , '' in _ proceedings of the 24th annual joint conference of the ieee computer and communications societies ( infocom 2005 ) _ , vol .1 , pp . 111 , miami , fl , usa , march 2005 .r. t. ma , v. misra , and d. rubenstein , `` modeling and analysis of generalized slotted - aloha mac protocols in cooperative , competitive and adversarial environments , '' in _ proceedings of the 26th ieee international conference on distributed computing systems ( icdcs 06 ) _ , lisboa , portugal , july 2006 .y. jin and g. kesidis , `` a pricing strategy for an aloha network of heterogeneous users with inelastic bandwidth requirements , '' in _ proceedings of the 39th annual conference on information sciences and systems _ , princeton ,nj , usa , march 2002 .d. wang , c. comaniciu , and u. tureli , `` a fair and efficient pricing strategy for slotted aloha in mpr models , '' in _ proceedings of the 64th ieee vehicular technology conference _ , pp .24742478 , montral , canada , september 2006 .m. bloem , t. alpcan , and t. baar , `` a stackelberg game for power control and channel allocation in cognitive radio networks , '' in _ proceedings of the 1st international workshop on game theory in communication networks ( gamecomm2007 ) _ , nantes , france , october 2007 .l. chen , t. cui , s. h. low , and j. c. doyle , `` a game - theoretic model for medium access control , '' in _ proceedings of the 3rd international wireless internet conference _, austin , tx , usa , october 2007 . l. chen , s. h. low , and j. c. doyle , contention control : a game - theoretic approach , in _ proceedings of the 46th ieee conference on decision and control _ , pp .34283434 , new orleans , la , usa , december 2007 . a. h. mohsenian - rad , j. huang , m. chiang , and v. w. s. wong , `` utility - optimal random access without message passing , '' _ ieee transactions on wireless communications _ , vol . 8 , no .3 , pp . 10731079 , 2009 .m. agalj , s. ganeriwal , i. aad , and j .-hubaux , `` on selfish behavior in csma / ca networks , '' in _ proceedings of the 24th annual joint conference of the ieee computer and communications societies ( infocom 2005 ) _ , vol .4 , pp . 25132524 , miami , fl , usa , march 2005 .j. hu and m. p. 
wellman , `` online learning about other agents in a dynamic multiagent system , '' in _ proceedings of the 2nd international conference on autonomous agents _ , pp .239246 , minneapolis , mn , usa , may 1998 .
|
interactions among selfish users sharing a common transmission channel can be modeled as a non - cooperative game using the game theory framework . when selfish users choose their transmission probabilities independently without any coordination mechanism , nash equilibria usually result in a network collapse . we propose a methodology that transforms the non - cooperative game into a stackelberg game . stackelberg equilibria of the stackelberg game can overcome the deficiency of the nash equilibria of the original game . a particular type of stackelberg intervention is constructed to show that any positive payoff profile feasible with independent transmission probabilities can be achieved as a stackelberg equilibrium payoff profile . we discuss criteria to select an operating point of the network and informational requirements for the stackelberg game . we relax the requirements and examine the effects of relaxation on performance .
|
the statistical theory of out - of - equilibrium systems is one of the most challenging , rapidly evolving , and an interdisciplinary domain of modern research .its fundamental importance is rooted in the fact that the majority of the situations encountered in nature ( in its broadest sense , including physical , chemical , and biological systems ) are nonequilibrium ones .such systems exhibit very complex and often counter - intuitive behaviors , resulting from a generic interplay between a large number of degrees of freedom , nonlinearities , noise " or perturbations of various origins , driving forces and dissipation ( corresponding to various levels of coarse - grained description ) .usually , noise and stochastic equations appear in the modeling when one concentrates the study on a few relevant variables ( hereafter called system under study " ) and approximates the effects of the eliminated degrees of freedom through a random force " or noise " with prescribed statistical properties .in particular , these eliminated degrees of freedom have a characteristic time scale , that is translated into the specific correlation time of the noise that intends to mimic them . in many situations of interest ,the characteristic response time of the system under study is much larger than this specific time scale of the eliminated variables . following the seminal works of einstein , langevin and smoluchowsky ,the noise is then safely modeled as a gaussian white noise ( gwn ) that is -correlated in time . in this casethe model system is generically referred to as a brownian particle " .the brownian motion under the action of the gwn is a wiener process , and a detailed mathematical analysis can be carried on the basis of langevin or fokker - planck equations .the wiener process and the gwn are , of course , stochastic processes of fundamental importance .however , they do not exhaust all the situations one may be called to model .indeed , there are cases when the eliminated degrees of freedom are _ slow _ on the time scale of the studied system . then one has to mimic the result of the coarse - graining over such a set of slow variables through a _ colored noise _ ,i.e. , a noise that has a non - negligible correlation time ( and thus a non - flat power spectrum ) .such noises have been studied in great detail in zero - dimensional systems , and their specific properties are known to have a profound influence on the behavior of these systems .generically , they lead the system out of equilibrium ( they break the detailed balance in the configuration space of the system ) .the effect of the color of the noise on noise - induced transitions and phase transitions continues to be documented , and has been found to be quite important , e.g. , it can alter the type of transition and lead to re - entrance phenomena .other noise - induced effects have also been found to be sensitive to the correlation time of the noise , like the stochastic resonance , the synchronization of several noisy dynamical units , and the directed transport in ratchets .the two most commonly discussed examples of colored noise are the ornstein - uhlenbeck process and the dichotomous markov noise ( dmn ) .although the ornstein - uhlenbeck process is easily invoked in view of doob s theorem , the purpose of this review is to show that _ dmn has particular virtues and interest . 
_ but before proceeding with this argumentation , let us define the dmn and describe its main stochastic properties .the dmn is a very simple _ two - valued stochastic process , with constant transition rates between the two states _ .figure [ figure1 ] illustrates a realization of dmn .+ the constancy of the transition rates corresponds to exponentially distributed waiting times in the two states , ( i.e. , the transitions are driven by poisson renewal processes ) and dmn is _markovian_. it is therefore completely characterized by the initial state and the matrix of the transition probabilities , {i , j=\pm}&=&\left[\mbox{probability}(\xi(t)=ia_i|\xi(0)=ja_j)\right]_{i , j=\pm } \nonumber\\ & & \nonumber\\ & = & \left [ \begin{array}{cc } p_{--}(t ) & \;\;\ ; p_{-+}(t)\\ p_{+-}(t ) & \;\;\ ; p_{++}(t ) \end{array } \right]= \tau_c \left [ \begin{array}{cc } k_++k_-e^{-t/\tau_c } & \;\;\ ; k_+(1-e^{-t/\tau_c})\\ k_-(1-e^{-t/\tau_c } ) & \;\;\ ; k_-+k_+e^{-t/\tau_c } \end{array } \right]\,,\end{aligned}\ ] ] where is the characteristic relaxation time to the stationary state of the dmn . in the foregoingwe shall be exclusively concerned with _ stationary dmn _, for which the stationary probabilities of the two states are with the corresponding mean value moreover , we shall generally consider zero - mean dmn , , in order to avoid any systematic bias introduced by the noise in the dynamics of the driven system .the stationary temporal autocorrelation function of dmn is exponentially - decaying corresponding to a _finite correlation time _ and an amplitude " the power spectrum is thus a lorenzian related to the characteristic time scale , so the dmn is indeed _colored_. a particular case of dmn , that will be widely used in the foregoing , is that of the _ symmetric _ dmn , for which and , so that the correlation time is , and the amplitude " , with , moreover , .let us underline yet another important property of the stationary , zero - mean dmn , namely that in appropriate limits it reduces either to a _ white shot noise _( wsn ) , or to a _ gaussian white noise _ ( gwn ) , see ref . for a detailed discussion .figure [ figure2 ] represents schematically these relationships .+ consider the stationary _ asymmetric _ dmn of zero mean , characterized by _ three independent parameters _( e.g. , and ) , with .then , by taking the limit one recovers the wsn driven by a poisson process ( also called the campbell s process ) ; this one is characterized by _ two parameters _( and , e.g. , ) .considering further on the limit with but , wsn reduces to a gwn ; this latter is characterized by a _single parameter _ , the noise amplitude " .note also that the _ symmetric _ dmn can reduce directly ( i.e. 
, without passing through the stage of a wsn ) to the gwn by taking simultaneously the limits , such that represents the amplitude of the gwn .there is no mapping possible between dmn and the ornstein - uhlenbeck process : indeed , one can not eliminate the non - gaussian character of the dmn while keeping a finite correlation time .the two noises have distinct statistical properties , a point that is also generally reflected in the response properties of the dynamical systems driven by these noises .however , as indicated by the central limit theorem , the superposition of suitably - scaled , independent dmns , converges in the limit to an ornstein - uhlenbeck process .this property may be used to tackle systems driven by an ornstein - uhlenbeck noise ( that are notoriously difficult to study ) by constructing approximate solutions corresponding to the superposition of several dmns .this review is intended to present dmn as a _tool for modeling stochastic processes_. we shall thus try to clarify its applicability , flexibility , and limits , as well as to describe a few prototypical applications .several essential points , briefly resumed below , will be resulting from this review . *as it will become clear , systems driven by dmn are _ encountered in a wide variety of physical and mathematical models_. besides the fashion - effect " ( peaked around the 1970s ) , there are deeper reasons to this and we shall address them .+ * a basic point is that _ dmn mimics the effects of finite correlation time ( colour ) of the noise in a very simple way_. it constitutes thus a good alternative to the widely - spread ornstein - uhlenbeck process , which is often quite untractable analytically .so , the dmn reflects the effect of the eliminated _ slow _ degrees of freedom on the dynamics of the relevant variables under study .refer for example to refs . for detailed discussions on this essential point of the modeling of stochastic systems .+ moreover , the interplay between the _ intrinsic _ time scale of the dmn and other time scales present in the problem ( e.g. , characteristic relaxation time of the system under study , external periodic perturbations , etc . ) may lead to nontrivial effects ( e.g. , multistability and hysteretic behavior , stochastic resonance , synchronization effects , etc . ) .these effects are absent when a white noise is acting ( even multiplicatively ! ) on the system . +* one has to realize that _dmn may be directly a very good representation of a simple and frequently - encountered physical situation _, namely that of the _ thermally activated transitions between two configurations / states _ of a system ( as far as the intra - configuration " motion is unimportant , filtered - out or coarse - grained ) . + a very clear illustrationis offered by the small electronic devices ( e.g. , mosfet , mim , jfet , point - contact resistances , etc . ) , as described in the review article ref .the active volume of such devices is so small that they contain only a reduced number of charge carriers ; then the alternate capture and emission of carriers at individual defect sites generates _ measurable _ discrete jumps in the device resistance .these jumps are naturally modeled as a dmn process .their study provides a powerful mean of investigating the nature of defects at the interfaces , the kinetics of carriers capture and emission ( see , e.g. , ref . 
for a recent experimental example ) , and has demonstrated the defect origin of low - frequency noise in these devices , etc .+ * generically , dmn _ drives the system out - of - equilibrium _( i.e. , it breaks the detailed balance in the configuration space of the system ) .therefore it may lead to novel behaviors that are not accessible in equilibrium systems ( like , for example , directed transport out of microscopic fluctuations ) .in particular , memory effects may become important the driven system is typically non - markovian . + * on one hand , dmn is sufficiently simple " so that it _ often allows for full analytical description _ , and it represents therefore a good research , as well as didactical tool. the emphasis of our presentation will thus be , to the largest possible extent , on _ analytically - solvable _ models and on _ exact results _ , as stated in the title of this review .+ moreover , the simplicity " of dmn allows to dissect the essential mechanisms at work behind various nonequilibrium , often seemingly complex phenomena .one can thus reveal the minimal scenario " and ingredients for the appearance of these processes . + * on the other hand , dmn is sufficiently rich " so that it leads to _ highly nontrivial and varied nonequilibrium behaviors in nonlinear systems_. this illustrates once more the fundamental fact that stochasticity may affect the dynamics more strongly than just a simple perturbation around the deterministic evolution : it may induce _ qualitatively _ different behaviors . + * as it was mentioned above , dmn _ reduces , in appropriate limits , to wsn and to gwn _ , and thus offers an alternative way of solving problems connected with these noises ( in particular , master equation for dmn is often simpler to solve than the fokker - planck equation for gwn ) .+ let us also note that the relationship of dmn with the gwn played a role in the famous ( and unfounded ! ) _ it versus stratonovich dilemma _ , see refs . and for pertinent comments on this point .+ * from the point of view of numerical simulations , dmn has the advantage of _ being easy to implement as an external noise with finite support_. + * last , but surely not least , systems driven by dmn point to _ interesting practical applications_. despite an impressive literature on the systems driven by dmn , the main lines and subjects of interest are easy to infer .the type of systems that are generically studied are _ zero - dimensional dichotomous flows _ , corresponding to a single stochastic variable driven by a dmn that may be additive or multiplicative , as described in sec .. we shall be first addressing in sec .iii the problem of the transient ( time - dependent ) characteristics of such flows , that might be important for some practical applications and/or finite observation times .we shall classify the solvable " cases and discuss in detail the seminal example of the dichotomous diffusion on a line .we shall turn afterwards to the more productive " study of the asymptotic , long - time behavior of the dichotomous flows . as it will become clear , both the physics and the mathematical approach are very different if the asymptotic dynamics exhibits or not _ unstable critical points _ this represents a main point of this review .the standard old theories " were limited to the absence of unstable critical points , as described in sec .most of the results referred to the celebrated noise - induced transitions and phase transitions that we shall briefly present . 
in sec .v we turn to the situations when the asymptotic dynamics presents unstable critical points .we show how calculations have to be modified in order to deal with such situations and illustrate the method on three prototypical examples , namely the hypersensitive response , the rocking ratchet , and the stochastic stokes drift .section vi will be devoted to a brief presentation of escape statistics ( mean first - passage time problems , resonant activation over a fluctuating barrier , etc . ) . in sec .vii we shall discuss the stochastic resonance phenomenon and its dmn - induced enhancement . section viii is devoted to some comments on spatial patterns induced by dmn forcing , while sec .ix describes briefly random maps with dmn .a few conclusions and perspectives are relegated to sec .x. a last warning refers to the fact that this review is non - exhaustive ( and we do apologize for the omissions ) , pedagogical to a large extent , and rather non - technical .we emphasized subjects that count amongst the most recent in this domain and in which we were directly involved .but , of course , all important earlier results are also described in detail , for the sake of a correct perspective on the field .a zero - dimensional dichotomous flow corresponds to the temporal evolution of some characteristic scalar variable of the system under study , whose velocity switches at random between two dynamics : the + " dynamics , , and the " dynamics , ( the dot designates the time - derivative ) .this process can be described by the following stochastic differential equation : where is a realization of the dmn taking the values with transition rates between these values , and . if is a constant , the dmn acts _ additively _ ;otherwise , for constant , the noise is _multiplicative_. we shall consider throughout the paper only _ constant _ transition rates , although some recent stochastic nonequilibrium models of protein motors use -dependent transition rates , see ref .most of our results can be easily generalized to cover these situations too .moreover , if not stated explicitely otherwise , we shall be working with a _ symmetric dmn _ , and . for the simplicity of the presentation, we shall often refer to as the position of an overdamped particle " ; but one should keep in mind that its actual nature depends on the system under study ( i.e. , that can be a spatial coordinate , a current , a concentration , a reaction coordinate , etc . ) .it is a stochastic process , and for most of the practical purposes its properties can be essentially described through the properties and evolution equation of the probability distribution function . indeed , one is in general interested by the the _ mean over the realizations of the dmn _ of an arbitrary function of the stochastic variable , a type of quantity that we shall denote throughout the text by : in the nonstationary regime of the dichotomous flow this is an explicit function of time , as discussed below .one is , of course , tempted to address first the question of the non - stationary , transient stochastic behavior of the variable , i.e. , that of the temporal evolution of its probability density from a given initial state ( a given initial probability distribution function ) to the ( presumably existent ) asymptotic stationary state with the corresponding stationary probability distribution function . 
the master equation for the probability density of the compound ( or vectorial ) stochastic process ] is markovian .it is in this very point that lies the importance of knowing whether a dichotomous flow is solvable : if one can reconstruct a markovian property at some level of differentiation of , then the knowledge of a finite - number of initial conditions for the probability density and its time - derivatives are enough in order to determine entirely the subsequent evolution of . as discussed in detail in refs . and , the _ condition of solvability _ is related to the behavior of the differential operators \;,\quad { { \cal b}}=-\partial_x[g(x)\, ... ]\ ] ] and the hierarchy of their comutators , \quad ( n\geqslant 1)\;,\quad \mbox{with}\ ; \quad { { \cal c}_0}={{\cal b}}\,.\ ] ] if this hierarchy closes , then satisfies a finite - order differential equation in time , etc . ) .see ref . for the details of their derivation . ] . in more detail ,if with some constants , then satisfies a partial differential equation of order in time . if the linear combination involves as well , i.e. , if then the order of the equation is . despite the seeming simplicity of these criteria , one should realize that they are quite restrictive and thus the classes of solvable cases are rather reduced . in particular, the following cases are _ not solvable _ :\(a ) the cases with an additive dmn ( constant ) and a nonlinear drift term ( nonlinear in ) .\(b ) when either or is a polynomial of order , does not satisfy _ in general _ a finite - order equation ( although it may do so in special cases of specific relationships between the coefficients of the polynomials ) .a few examples of _ solvable cases _ : \(a ) when , i.e. , , the process can be mapped , through a nonlinear transformation , onto a pure _ dichotomous diffusion _ for which the time - dependent solution is well - known ( see the subsection below ) .( a1 ) a subcase of interest is that of , so that .this corresponds to the so - called delayed evolution , in which the deterministic dynamics governed by the flow is interrupted at random instants and remains frozen at its current value , till the noise switches back and the " dynamics is continued .a closely - related type of flow , called interrupted evolution , is characterized by the fact that in the quiescent " state ( ) the stochastic variable is reset to a random value drawn from a fixed distribution .see ref . for more details and applications of these two types of flows .( a2 ) when , i.e. , , the problem reduces again to the dichotomous diffusion .this case is of interest , e.g. , in problems involving the exchange of stability between two critical points of the alternate dynamics , see ref . .\(b ) another case presented in the literature , see ref . , is that of , i.e. , , when obeys a second - order differential equation . the early - day hongler s model , see ref . , with and fall in this class . an important warning , however : one should be aware of the fact that solvable " in the sense indicated here above does not imply , in general , that one can express in a simple , closed algebraic form of some standard " functions . indeed , as explained in refs and , this is merely an exceptional situation , and probably the only case that was completely explored till now is that of the dichotomous diffusion ( and processes that reduce to it through some change of variables ) .it describes the stochastic position of a particle whose velocity is a dmn , i.e. 
, it is represented by the following stochastic differential equation : the corresponding master equation for the probability densities is ( for the case of a symmetric dmn with transition rate ) .this simple process is an example of so - called _ persistent diffusion on a line _ as detailed in refs .indeed , it can be obtained as the continuum limit of a persistent " random walk on a lattice , i.e. , a random walk for which the transition probabilities left or right at a given stage depend on the _ direction _ of the preceding jump .this means that the jump probabilities have a memory " of the previous state of the system , and therefore is no longer markovian .however , according to the general discussion in sec .iii.a , the vectorial process ] of the two configurations , one can obtain the ratio of the transition rates , since at equilibrium = k_- [ a^{(2)}] ] ; ( ii ) or come from and are driven by the flow ; this takes a time and happens with probability \right\} ] ( integer ) .thus , as far as the right - left symmetry is not broken , there is no current flowing through the system , see the upper panel of the figure .suppose now that we apply a small external force that breaks right - left symmetry but can not induce escape " from a finite interval in neither of the separate dynamics ( corresponding to the effective potentials ) .however , switching between these dynamics allows crossing " of the unstable fixed points , and thus the appearance of running solutions with a finite average velocity .see the lower panel of fig .[ figure15 ] .+ + from the point of view of the mathematics , the presence of unstable fixed points induces some serious difficulties in the calculation of the asymptotic probability distribution function .this is the main reason why , with a few exceptions , see refs . ( and also refs . for the related problem of the mean - first passage time ) , the problem of dichotomous flows with unstable critical points was generally not approached .we have recently made progress towards identifying the source of spurious divergences that arise in the usual analytical treatment of the problem , see refs . , and we are now in the position to consider this situation as well .consider thus a periodic dichotomous flow , with an asymptotic state characterized by a nonzero stationary flow through the system .one notices ( see ref . for further details ) that the master equation for the stationary probability distribution function : {st}'(x ) + \left[{(f_+f_-)'}-{(f_+f_- ) } ( \mbox{ln}|f_+-f_-|)'+k(f_++f_-)\right]{p_{st}(x)}\nonumber\\ & & \nonumber\\ & & \hspace{3cm}=\frac{\langle \dot{x } \rangle}{l } \left[2k+\frac{f_+-f_-}{2}\;\left(\frac{f_++f_-}{f_+-f_-}\right)'\;\right ] \label{master1}\end{aligned}\ ] ] becomes _ singular at the critical points of . the crucial point of the problem resides in finding the _ correct _ solution to this equation , i.e. 
, the one that has _ acceptable mathematical and physical properties _ ( see below , and also ref .a blind application of the method of variation of parameters to this differential equation leads to a solution of the form : , \label{case13}\ ] ] where is a constant of integration that arises from the general solution to the homogeneous part of eq .( [ master1 ] ) , the second contribution is the particular solution of the full inhomogeneous equation , is an arbitrary point in , and we have defined the functions : \right\ } , \nonumber\\ k(u , v;w)&=&\int_v^udz\ ; \left[\displaystyle \frac{2k}{f_{+}(z)-f_{-}(z)}+ \left(\frac{f_{+}(z)+f_{-}(z)}{2\left[f_{+}(z)-f_{-}(z)\right]}\right ) ' \right ] \nonumber\\ & & \times \mbox{sgn } \left[\frac{f_{+}(z)f_{-}(z)}{f_{+}(z)-f_{-}(z)}\right ] \;g(w , z ) \label{case14}\end{aligned}\ ] ] ( sgn is the signum function ) .the point is that using a _unique _ integration constant over the whole period ] * the fact that the probability density is periodic : with the eventual exception of some singularity points ( see the simple example below ) .this condition is determined , of course , by the supposed periodicity of the dichotomous flow . *the usual normalization condition one imposes to a probability density ( in this case , restricted to a spatial period ) : it can be shown that , under rather general conditions are continuous over the spatial period .under special circumstances , even this condition can be relaxed , as it will becom clear from the examples we shall be considering in the next subsections . ] these requirements ensure the _ existence and uniquenness _ of a well - defined stationary periodic probability density . in order to understand the mathematical mechanism involved in the elimination of the unphysical divergences appearing at the unstable critical points of the dynamics ,let us consider the simplest possible situation , namely when both are continuous over the spatial period , and only one of the two alternate flows has two critical points .for example , see fig .[ figure16 ] , has no fixed points , while has a stable and an unstable critical points ( to fix ideas , suppose ) .+ consider the expression of in eq .( [ case13 ] ) , for which we shall choose _ different integration constants _ in each of the separate intervals , , and ] , but ] and symmetric about , .then , which allows us to concentrate on a half - period . in this case , the "dynamics has an unstable fixed point , and the + " dynamics has a stable critical point ( with ) in ] ( that can be afterwards extended by periodicity to the whole real axis ) : '(z ) \right .\nonumber\\ & & \nonumber\\ & & \hspace{3cm}\times\ , \left.\exp\left[-\displaystyle\int_z^x dw\frac{2kf}{f^2-v^2(w)}\right]\right\}\,.\label{hyper6}\end{aligned}\ ] ] at the unstable fixed point , the probability density is continuous , at the stable fixed point , depending on the transition rate , the probability density is either continuous , ( for ) , or divergent but integrable ( for ) .it is the presence of these fixed points , and in particular the divergences of that cause a highly nonlinear conductivity of the system , as discussed below .finally , from the normalization of , the average asymptotic velocity is obtained as '(z)\nonumber\\ & & \nonumber\\ & & \hspace{5cm}\times\,\left .\exp\left[-\displaystyle\int_z^xdw\frac{2kf}{f^2-v^2(w ) } \right]\right\}^{-1}\ , . 
\label{hyper11}\end{aligned}\ ] ] these results for and the mean current are general and exact .however , they still involve triple integrals , and are too complicated to offer a picture of what is going on in the system .we shall thus consider further the particular case of a _ piecewise profile of _ , as represented in fig .[ figure17 ] .+ = 12.cm in this case , the integrals can be evaluated explicitely , and it is found that the behavior of the system is extremely rich .the response to the external perturbation is highly - dependent , in a non - monotonic way , on the transition rate of the dmn , more precisely on the control parameter ( for the significance of and refer to fig .[ figure17 ] ) .one notices the existence of four different regimes for the response , as illustrated in fig .[ figure18 ] .+ + = 12.cm \(a ) a first trivial _ linear response _ regime corresponds to very high applied forces , when the details of the substrate potential are forgotten " , and .all curves in fig . [ figure18 ]approach this limit with increasing .+ ( b ) a second _ linear response _regime appears when , i.e. , for very high transition rate of the dmn , such that the effects of the fluctuating forces are smeared out .this regime is visible on fig .[ figure18 ] for .+ in none of these regime is the response dependent on the characteristics of the dmn .the following two regimes are , on the contrary , far from trivial and appear only in the presence of critical points of the alternate dynamics .+ ( c ) the _ adiabatic regime _ of constant response appears for very slow switching rates , more precisely when in - between two flips of the potenatial ( refer to fig .[ figure15 ] ) the particles have enough time to move between two succesive extrema of the potential , and eventually to wait in a minimum of the potential till the next flip , that will put them in motion again .therefore , the condition for this regime is that the average time between switches is much longer than the typical escape time from a region close to a maximum of the potential , ; therefore : .then the mean velocity is simply half of the spatial period of the substrate potential divided by the mean switching time , ( i.e. , it is independent of the applied force and is directly proportional to ) : this regime is well seen on fig .[ figure18 ] for .+ ( d ) finally , the _ hyper - nonlinear _ regime which is realized for small forcing , . in this case , the particles manage to advance to the next minimum of the potential only in the exponentially rare cases when the dmn persists in the same state for sufficiently long time , much longer than both and ( see again fig . [ figure15 ] and see above for the meaning of ) .the mean velocity then falls rapidly to zero with decreasing : i.e. , it is inversely proportional to , and the corresponding diverging susceptibility indicates the highly - nonlinear and sensitive character of the response in this region .this regime appears for all the values of and sufficiently small , as seen on the inset of fig .[ figure18 ] .we are therefore again in a situation that is contrary to the intuition and to what is usually encountered in equilibrium systems , namely _ a strongly nonlinear response for small forcing , and a linear response for large forcing_. one of the paradigms of out - of - equilibrium system ( with an overwhelming literature these last years ) is the _ ratchet effect_. 
as it was already briefly mentioned above , a _ ratchet _ is , roughly speaking , a device that allows to get work ( i.e. , directed transport ) out of fluctuations .although one can think of macroscopic ratchets ( e.g. , self - winding wrist - watches , wind - mills ) , more interesting from the conceptual point of view are the _ microscopic rectifiers _ , for which microscopic thermal fluctuations are relevant . indeed ,while the second law of thermodynamics rules out directed transport ( apart from transients ) in a spatially - periodic system in contact with a single heat bath , there is no such fundamental law that prohibits stationary directed transport in a system driven out - of - equilibrium by a deterministic or stochastic forcing .such a driving forcing can be provided , for example , by a dmn that can act either multiplicatively , or additively . besides the _ breaking of detailed balance _ ,a further indispensable requirement for directed transport is _ breaking of spatial symmetry _ , see ref .there are three main possible mechanisms of symmetry - breaking , namely ( i ) a built - in asymmetry of the system ( in the absence of the driving perturbation ) ; ( ii ) an asymmetry induced by the perturbation ; ( iii ) a dynamically - induced asymmetry , as a collective effect , through an out - of - equilibrium symmetry - breaking phase transition .the case of an out - of - equilibrium driving by a _ multiplicative _dmn corresponds to the so - called _ flashing ratchet _ : generically , an overdamped brownian particle ) jumps dichotomously , at random , between two asymmetric periodic potentials : +\xi_{gwn}(t)\,,\ ] ] where is the dmn with transition rate , is a gaussian white noise , and , of course , . the particular case of , i.e. , a flat potential corresponding to a free diffusion in the " dynamics , is called _ on - off ratchet_. this results in a net flow of the particles , in a direction that is determined by the asymmetry of .various models , aspects , and experimental realizations of flashing or on - off ratchets , including sometimes the effect of inertia , were discussed in refs . , see also ref . .systems driven out of equilibrium by an _ additive _ dmn belong to the class of the so - called _ rocking ratchets _ : an asymmetric basic potential is rocked by a zero - mean additive force ( a dmn in the cases of interest to us ) : this leads generically to an asymmetry in the nonlinear response of the system , and thus to a systematic ( directed ) motion , be it in the presence or in the absence of a thermal noise .inertia of particles was also find to have an important effect on the direction of the drift .the literature on various model - realizations and applications is huge , and we cite here only a few references , , and also ref . for further examples . as an illustration, we propose here a very simple analytically solvable model of a _ rocking ratchet _ , see ref . for a detailed discussion .it is described by the following stochastic differential equation : with an _ additive and asymmetric _dmn , that takes the values with transition rate between these states .in order to fix ideas , we shall suppose . here with , and .this corresponds to an overdamped particle gliding in a rocked sawtooth potential , as depicted qualitatively in fig .[ figure19 ] .+ it is obvious that three distinct situations are possible , depending on the values of the dmn . 
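a direct numerical estimate of the drift is again easy to obtain and helps to visualize the three regimes discussed next ; the sketch below uses an assumed sawtooth of period one and barrier height q , and , for simplicity , a symmetric dmn of amplitude a instead of the asymmetric noise treated analytically in the text ( all parameter values are illustrative ) :

```python
import numpy as np

rng = np.random.default_rng(1)

# asymmetric sawtooth potential of period 1: U rises over [0, L1) and falls over [L1, 1),
# so the deterministic force -U'(x) is piecewise constant
L1, Q = 0.9, 1.0
def force(x):
    z = x % 1.0
    return -Q / L1 if z < L1 else Q / (1.0 - L1)

def mean_velocity(A=2.0, k=1.0, dt=1e-3, n_steps=1_000_000):
    """Drift of the rocked sawtooth  xdot = -U'(x) + xi(t),  xi = +/-A with rate k."""
    x, s = 0.0, 1.0
    for _ in range(n_steps):
        if rng.random() < k * dt:
            s = -s
        x += (force(x) + A * s) * dt
    return x / (n_steps * dt)

# with Q/L1 < A < Q/(1 - L1) only the "-" dynamics has fixed points, so a positive
# current flows even though the noise has zero mean and the potential is unbiased
for k in (0.1, 1.0, 10.0):
    print(k, mean_velocity(A=2.0, k=k))
```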
in all these casesone can obtain closed analytical expressions for the stationary probability density and the mean asymptotic velocity .we refer the reader to ref . for the exact results , while here we shall comment on a few features .one can have : + ( a ) a regime of _ strong forcing _, when there is no critical point in any of the alternate dynamics , and running solutions appear for both tilts . with our conventions , this corresponds to and .there results a non - zero current through the system , determined by the interplay between the characteristics of the noise and those of the basic potential ( in particular , its bias , if any ) .one can also consider the gwn limit , for which already known results are recovered in a very simple way . in particular , the current through the system is strictly detemined by the bias of , such that , when the deterministic potential is unbiased , one recovers the equilibrium state with a boltzmann distribution and no current .+ ( b ) the regime of _ intermediate forcing _, when there are critical points in only one of the alternate dynamics . in our case, this happens when and , and only the + " dynamics has critical points .it is obvious that the sign of the flow is determined by that of the dynamics without fixed points ( " in our case ) ; sometimes the current may thus be opposite to the bias of the basic potential .one notices the possibility of current reversal when varying the amplitude of the of the noise ( at fixed ) , i.e. , when passing from regime ( a ) to regime ( b ) .as a limiting case of both ( a ) and ( b ) regimes , one can consider the white shot - noise limit ( see sec .i.a ) , and recovers easily the results of ref . .+ ( c ) finally , the regime of _ weak forcing _, when both alternate dynamics have critical points .there is not too much interest in it , since there is no flow through the system .additional thermal noise is needed to generate rectified motion , and this problem has been addressed ( mainly in the case of adiabatically slow forcing ) in refs . . _stokes drift _ refers to the systematic motion that a tracer acquires in a viscous fluid under the action of a longitudinal wave traveling through this fluid , see the original reference .the _ deterministic effect _ ( that does not account for the fluctuations or perturbations in the system ) has a simple intuitive explanation , as illustrated through the example of fig .[ figure20 ] .+ = 10.cm consider a longitudinal square - like wave of wavelength , propagating with velocity , and an overdamped tracer particle , that is entrained with a force while in the crest part of the wave , and while in the trough part ( with the situation of physical relevance ) .the suspended particle spends a longer time in the regions of the wave train where it is driven in the direction of propagation of the wave , namely ] .therefore , the particle is driven on the average in the direction of the wave propagation , with , the deterministic value of the stokes drift .this effect has been studied in various practical contexts , ranging from the motion of tracers in meteorology and oceanography , to the doping impurities in crystal growth , see the citations in refs . .recent studies , see refs . 
and references therein , show the importance the _ stochastic effects _ may have on stokes drift .the thermal diffusion of the dragged particles , as well as the application of a coloured external perturbation modify markedly both the _ direction _ and the _ magnitude _ of the drift velocity .we introduced a very simple , analytically tractable model for a stochastic stokes drift , described by a dichotomous flow as : here corresponds to the block - wave represented in fig .[ figure20 ] , and is a symmetric dmn of values and transition rate .one can perform a simple transformation of variables ( which corresponds to going to the wave co - moving frame ) , , through which the model can be mapped onto a _ rocking ratchet _ problem ( as described in the previous section , with an asymmetric basic sawtooth potential and an additive dmn ) : with for and for . as discussed previously , the behavior of the system , in particular the solution of the associated master equation for the stationary probability density and the asymptotic drift velocity , depend on whether or not there are critical points in the dichotomous dynamics .there are two important effects that appear due to the stochastic dmn forcing : ( i ) _ the enhancement of the drift as compared to its deterministic value _ ; ( ii ) the possibility of _ drift reversal _ when modifying the amplitude of he noise .refer to for the detailed calculations .also , noise induces a nonlinear dependence of the drift on the amplitude of the wave .therefore , if several waves are present , their contributions _ are not additive _ , which is a generic feature of stochastic stokes drift , contrary to its deterministic counterpart . in particular , as illustrated qualitatively in fig .[ figure21 ] , if two orthogonally - propagating but otherwise identical waves are present , one can induce a significant change in the direction and magnitude of the resulting drift simply by changing the transition rate of the dmn .this is an effect that may have important practical applications , e.g. , in directing doping impurities in crystal growth .+ = 11.cmthe mean first - passage time ( mfpt ) represents the mean value of the time at which a stochastic variable , starting at a given initial value , reaches a pre - assigned threshold value for the first time .it is a concept with many applications in physics , chemistry , engineering ranging from the decaying of metastable and unstable states , nucleation processes , chemical reaction rates , neuron dynamics , self - organized criticality , dynamics of spin systems , diffusion - limited aggregation , general stochastic systems with absorbing states , etc ., as discussed in ref . .mfpt expressions were obtained in closed analytical form for some particular markovian and non - markovian stochastic processes , including one - dimensional fokker - planck processes , continuous - time random walks or persistent random walks with nearest and next - to - nearest neighbour jumps , and birth - and - death processes , see refs . for some examples . for general non - markovian processesthe problem of the mfpt is delicate and intricate , as spelled out first in ref . . however , for general dichotomous flows one can obtain exact results , using various approaches and techniques ( backward equations , stochastic trajectory counting and analysis ) , see refs . .consider an overdamped particle that starts at in inside some interval ] , provided that the initial value of the dmn was , respectively . 
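before writing down the backward equations , it is useful to note that these two quantities can be estimated by a direct monte carlo average of the exit times , which provides an independent check on the boundary conditions discussed below . a minimal sketch ( the choice of flow , interval and noise parameters is purely illustrative ) :

```python
import numpy as np

rng = np.random.default_rng(2)

def mfpt(x0, s0, f, g, a=1.0, k=1.0, left=-1.0, right=1.0,
         dt=1e-3, n_traj=2000, t_max=1e4):
    """Monte Carlo estimate of the mean first-passage time out of (left, right) for
    xdot = f(x) + g(x)*xi(t), starting at x0 with the DMN in the state s0 = +1 or -1."""
    times = np.empty(n_traj)
    for m in range(n_traj):
        x, s, t = x0, float(s0), 0.0
        while left < x < right and t < t_max:
            if rng.random() < k * dt:
                s = -s
            x += (f(x) + g(x) * a * s) * dt
            t += dt
        times[m] = t
    return times.mean(), times.std() / np.sqrt(n_traj)

# illustrative example: free dichotomous diffusion xdot = xi(t) escaping from (-1, 1)
print(mfpt(0.0, +1, f=lambda x: 0.0, g=lambda x: 1.0))
print(mfpt(0.0, -1, f=lambda x: 0.0, g=lambda x: 1.0))   # equal by symmetry from the midpoint
```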
the coupled equations for t_+(x_0) and t_-(x_0) are readily found to be : \[\begin{aligned } & & \left[f(x_0)+ag(x_0)\right]\,\frac{dt_+(x_0)}{dx_0}-k(t_+-t_-)=-1\,,\\ & & \left[f(x_0)-ag(x_0)\right]\,\frac{dt_-(x_0)}{dx_0}-k(t_--t_+)=-1\,.\end{aligned}\] the true difficulty here consists in assigning the correct _ boundary conditions _ , corresponding either to absorbing or to ( instantaneously ) reflecting boundaries , and also in treating the critical points of the alternate " + " and " - " dynamics . for further details see the above - cited references . the particular case of the so - called _ bistability driven by dmn _ , when the deterministic force derives from a bistable potential , has received a lot of attention , see refs . . in these systems the escape over the potential barrier out of the attraction domain of one of the two minima of the bistable potential is driven by an external additive or multiplicative dmn . besides the exit out of a given interval ( including , as a limiting case , the free diffusion on a line ) , we also mention the recent results of refs . in which the problem of the escape time of a particle from a metastable state with a fluctuating barrier , in the presence of both thermal noise and dichotomous noise , was solved in full analytical detail . the related problem of steady - state distributions and the exact results for the diffusion spectra were addressed in refs . . finally , general equations for computing the effective diffusion coefficient in randomly switching potentials were derived for arbitrary mean rate of the potential switching and arbitrary intensity of the gaussian white noise , see refs . . generally speaking , the _ stochastic resonance _ ( sr ) phenomenon refers to the " enhanced sensitivity " of a ( nonlinear ) system to a small deterministic periodic forcing in the presence of an " optimal amount " of noise ; see ref . for a ( by now incomplete ) review of various model - systems with sr and their possible practical applications . such a definition is very broad , and till now there is no agreement about the precise signature of sr , the necessary conditions of its occurrence , or the " right " quantifiers , see refs . for further comments on this point . there is therefore a huge and varied literature on sr and its numerous realizations ; in particular , for systems with dmn driving see refs . for a few examples . a canonical model for sr is an overdamped particle in a symmetric double - well potential , driven simultaneously by an additive gwn and an additive , weak periodic signal , see fig . [ figure24 ] for an illustration . a hand - waving argument for the appearance of sr would be the following . on one side , for too low gwn intensities , the thermally - activated jumps between the two wells are too rare and the particles do not benefit from the alternate decrease of the potential barrier ( on one side or the other ) due to the external signal ; on the other side , for too large gwn intensities , the jumps between the wells are very frequent ( a large number take place during one period of the external signal ) and thus , again , the response of the system is not synchronized with , and does not benefit from , the external signal . however , for intermediate noise intensities , the thermally - activated transition rates are comparable with the rocking rate of the potential , and the particles take advantage of the alternate decrease of the potential barrier , resulting in an enhanced response of the system to the applied external perturbation . as shown , for example , in ref .
, both the spectral power amplification ( spa ) ( that represent the weight of the signal part in the output power spectrum ) , and the signal - to - noise ratio ( snr ) ( the spa rescaled by the input power ) represent good measures of the sr . indeed , both of them show a _ nonmonotonous behavior with a maximum as a function of the gwn intensity_. as it was recently shown , see refs . , the addition of a dmn has very important effects on the behavior of the system : ( i ) dmn can synchronize the switching time between the two stable states of the double - well ( i.e. , for a certain interval of the gwn intensity , the mean switching rate of the system is locked to the switching rate of the dmn ) , a phenomenon corresponding , in the limit of a weak external perturbation , to the _ resonant activation _ described in sec .vi.b ; ( ii ) moreover , the _ sr is greatly enhanced by the dmn _( i.e. , the spa and/or the snr can reach larger maximal values as compared to the case when no dmn is present ) . following refs . , let us illustrate these results on a simplified model .we shall neglect the intrawell motion , and describe only the belonging of the particle to one or the other wells of through the two - valued stochastic variable .the thermally - induced transition rate between the two - wells is given by , where is the height of the potential barrier , and is the properly - scaled gwn intensity .an additive dmn of low switching rate modifies these transition rates , considering the four - state stochastic process one can write down the following master equation : \ , .\label{msre}\ ] ] therefore , the mean switching rate ( msr ) of the output is found to be : with ] .if then , according to the general considerations above , the system becomes marginally unstable with respect to the onset of patterns of wavevector .note that may become bistable only for _ nonlinear _ systems ( i.e. , for potentials that are at least quadratic ) . of course, by continuity arguments one may expect the onset of the spatial patterns for finite switching rates , too , but the marginal stability condition eq .( [ effective ] ) will become dependent on . when the global switching is not random , but periodic in time , see ref . , besides the stationary spatial patterns one will also obtain periodic spatio - temporal patterns .the case of a potential that has a spatial , quenched dichotomous disorder was also shown to lead to spatial patterns , see ref . . moreover , as pointed out in refs . , these studies are relevant for other situations that lead to pattern formation , such as turing instabilities in reaction - diffusion systems .the influence of noise on _ discrete - time _ dynamical systems ( maps ) is much less - documented than for the continuos - time ones .aspects like the shift , broadening , or even suppression of bifurcations , the behavior of the invariant densities and of the lyapunov exponents near the onset of chaotic behavior , and the destabilization of locally stable states by noise have been documented , in general , for weak gwn , see refs . and references therein for examples . with few exceptions ,see refs . , the effects of a finite correlation time of the noise , such as a dmn , have not been addressed in general .for example , ref . 
considers the logistic map with a _ dichotomously fluctuating parameter _ : here the fluctuating parameter is a dichotomous noise with a probability of repeating the same value at the next iteration ( corresponds to the white - noise limit , while and correspond to two deterministic limits ) . such a system may derive from a continuous - time one which is driven by a random sequence of pulses of constant duration ( this duration thus constitutes the time step of the map ) . the influence of the " correlation time " of the noise on the dynamics is found to be quite dramatic : by simply varying it one can obtain all the transitions , from chaos to regular motion . however , this field definitely calls for further investigation . the main goal of this review was to present the dmn as a flexible and very efficient tool in modeling out - of - equilibrium situations , allowing us to emphasize the richness and variety of the encountered phenomena . we hope that the reader will find in it a useful ingredient in his / her modeling of various experimental situations , of which only a small part was reported in this paper . in this respect , we are thinking of the cutting - edge techniques that allow the direct study and visualization of microscopic and nanoscopic systems ( including biological , living systems ) : we expect coloured noise ( and , in particular , dmn ) to play an important role in modeling the interaction of such small systems with their surroundings . we also emphasize a few other open conceptual problems . a first one would be the effect of inertia on dichotomous flows and , more generally , the statistical properties of several coupled dichotomous flows . the problem of fluctuation - like theorems for dmn - driven systems is to be addressed in further detail . also , as mentioned above , the study of maps driven by dmn is still an open field . we are grateful to profs . christian van den broeck , michel droz , katja lindenberg , ryoichi kawai , venkataraman balakrishnan , avadh saxena , mamad malek mansour , florence baras , and max - olivier hongler for over - the - years discussions on dmn - related subjects . we also thank françois coppex for precious help in preparing the manuscript and acknowledge partial support from the swiss national science foundation . f. jülicher , force and motion generation of molecular motors : a generic description , in _ transport and structure : their competitive roles in biophysics and chemistry _ , eds . müller , j. parisi and w. zimmermann , lecture notes in physics ( springer - verlag , berlin , 1999 ) . v. balakrishnan , solvability of dichotomous flows , dichotomous diffusion and generalizations , in _ noise in complex systems and stochastic dynamics _ , _ proc . spie _ * vol . 5114 * , eds . l. schimansky - geier , derek abbott , a. neiman , and c. van den broeck ( 2003 ) . c. van den broeck , from stratonovich calculus to noise - induced phase transitions , in _ lecture notes in physics _ 484 , eds . l. schimansky - geier and t. pöschel ( springer - verlag , berlin , 1997 ) , pp . 7 - 14 . c. van den broeck and i. bena , parametric resonance revisited , in _ stochastic processes in physics , chemistry , and biology _ , eds . j. a. freund and t. pöschel , lecture notes in physics vol . 557 ( springer - verlag , berlin , 2000 ) . results for the case of white shot noise , that can be obtained as a limit of dmn , were discussed in : t. czernik and j.
luczka , rectified steady flow induced by white shot noise : diffusive and non - diffusive regimes , _ ann .( leipzig ) _ * 9 * , 721 ( 2000 ) . o. v. gerashchenko , s. l. ginzburg , and m. a. pustovoit , multiplicative - noise - induced amplification of weak signals in a system with on - off intermittency , _ j. explett . _ * 67 * , 997 ( 1998 ) .o. v. gerashchenko , s. l. ginzburg , and m. a. pustovoit , experimental observation of noise - induced sensitivity to small signals in a system with on - off intermittency , _ eur . j. phys .b _ * 15 * , 335 ( 2000 ) .j. olarrea , j. m. r. parrondo , and f. j. de la rubia , escape statistics for systems driven by dichotomous noise .the imperfect pitchfork bifurcation as a case study , _phys . _ * 79 * , 683 ( 1995 ) .p. reimann and p. hnggi , surmounting fluctuating barriers : basic concepts and results , in _ stochastic dynamics _ ,lecture notes in physics , vol .484 , eds .l. schimansky - geier and t. pschel ( springer - verlag , berlin , 1997 ) .p. reimann , r. bartussek , and p. hnggi , reaction rates when barrier fluctuate : a singular perturbation approach , _ chem .phys . _ * 235 * , 11 ( 1998 ) .n. v. agudov , a. a. dubkov and b. spagnolo , escape from a metastable state with fluctuating barrier , _ phys .a _ * 325 * , 144 ( 2003 ) . j. m. r. parrondo _ et al _ , paradoxical games , ratchets , and related phenomena , in _ advances in condensed matter and statistical mechanics _ , eds .e. korutcheva and r. cuerno ( nova science publishers , 2003 ) .h. s. wio and katja lindenberg , noise induced phenomena , in _ modern challenges in statistical mechanics : patterns , noise , and the interplay of nonlinearity and complexity _ , aip conf .658 , eds .v. m. krenke and k. lindenberg ( aip , new york , 2003 ) .
|
nonequilibrium systems driven by additive or multiplicative dichotomous markov noise appear in a wide variety of physical and mathematical models . we review here some prototypical examples , with an emphasis on _ analytically - solvable _ situations . in particular , it has escaped attention until recently that the standard results for the long - time properties of such systems cannot be applied when unstable fixed points are crossed in the asymptotic regime . we show how the calculations have to be modified to deal with these cases and present a few relevant applications : the hypersensitive transport , the rocking ratchet , and the stochastic stokes drift . these results reinforce the impression that dichotomous noise can be put on a par with gaussian white noise as far as obtaining analytical results is concerned . they convincingly illustrate the interplay between noise and nonlinearity in generating nontrivial behaviors of nonequilibrium systems and point to various practical applications .
|
the study of social networks , representing interactions between humans or groups , is a subject of broad research interest . in recent years , tools from network analysis have been applied to sports . for example , a network approach has been developed to quantify the performance of individual players in soccer , the complex network structure of brazilian soccer players has been studied , and a general model - free approach has been introduced to elucidate the outcome of a soccer match . network analysis tools have also been applied to football ( ; ) , baseball ( ; ) and basketball ( ; ) , and the head - to - head matchups between major league baseball pitchers and batters have been studied as a bipartite network ( ) . the advantage of a network representation of any real system is that it gives a global view of the entire system , with the interactions between individuals reflecting self - emergent phenomena . in this paper we apply tools of social network analysis to cricket . cricket is a popular sport around the world and is played mostly in the erstwhile english colonies . its popularity is highest in the indian subcontinent . despite a series of controversies involving match fixing , spot fixing and ball tampering , the sport has managed to maintain international attention as well as research interest ( , , ) . currently there are ten countries that have been granted test status by the international cricket council ( icc ) : australia ( aus ) , bangladesh ( ban ) , england ( eng ) , india ( ind ) , new zealand ( nz ) , pakistan ( pak ) , south africa ( sa ) , sri lanka ( sl ) , west indies ( wi ) and zimbabwe ( zim ) . the reliance icc rankings are the official guide used to evaluate the performance of teams as well as players . the ranking schemes are based on points that are acquired by a team after a tournament . as has been noted , due to the opacity of the ranking schemes the methods used by the icc are still not easily comprehensible . in cricket , moreover , the captain is responsible for the team . before the game starts , the home captain tosses the coin and the touring captain calls heads or tails . the captain chooses the batting order , sets up fielding positions and shoulders the responsibility of on - field decision - making . thus the outcome of a match depends on the captain's decisions . additionally , the captain is responsible at all times for ensuring that play is conducted within the spirit of the game as well as within the laws . in this sense , the success of a team depends on the captain . however , there currently exists no ranking scheme for cricket captains . in this paper we numerically estimate the success of a team as well as of the captain by analyzing the networks of interaction of competing teams and of their captains . the primary goal of the paper is to elucidate the impact of network structure on the rankings of teams and also of cricket captains . while the number of wins is a natural measure of the success of a team , it does not provide a full picture of the ` quality ' of a win . we are thus motivated to study an alternative method to assess the quality of a win . for example , a win against australia or south africa carries more importance than a win against a lesser team . this is analogous to citation networks , in which the effect of a citation coming from an important paper is greater than that coming from a less popular one . the pagerank algorithm ( ) , a network - diffusion - based algorithm , has emerged as a leading method to rank scientists ( ) and papers ( ) . more recently , the pagerank algorithm has been applied to rank tennis players .
in this paperwe apply the pagerank algorithm to rank cricket teams and also identify the most successful cricket captain .the rest of the paper is organized as follows . in section 2 , we define and characterize the cricket - team network and provide a description of the pagerank algorithm that we employ as a ranking scheme across eras and also in the history of cricket ( ) . in section 3 , we discuss the results and we conclude in section 4 .data were collected from the website of cricinfo ( _ http://www.espncricinfo.com/_ ) .we downloaded the information of results and also the captains who led their respective teams from the score - cards . for a single match, the score - card keeps track of information about the teams , the runs scored by batsmen , wickets taken by bowlers , the names of captains who led their respective teams and the result of a match .we collected the data for test matches ( ) and one day international ( odi ) cricket ( ) . in our analysiswe have excluded the matches with no results and matches which were abandoned .we analyze the network of cricket teams by analyzing the head - to - head encounter of competing teams .a single match is represented by a link between two opponents .thus if team wins against team , a directed link is drawn from to ( figure [ fig1 ] ) .a weighted representation of the directed network is obtained by assigning a weight to the link , where is equal to the fraction of times team wins against team .we quantify the relevance of matches with the use of a complex network approach equivalent to the one used for the computation of the pagerank score .mathematically , the process is described by the following set of equations where is the weight of a link and = is the out - strength of a link . is the pagerank score assigned to team and represents the fraction of the overall `` influence '' sitting in the steady state of the diffusion process on vertex ( ) . in eqs .( [ eq : pg ] ) , $ ] is a control parameter that accounts for the importance of the various terms contributing to the score of the nodes and is the total number of teams in the network . the term represents the portion of the score received by node in the diffusion process according to the hypothesis that vertices redistribute their entire credit to neighboring nodes . stands for a uniform redistribution of credit among all nodes .the term serves as a correction in the case of the existence of dangling nodes ( i.e. , nodes with null out - degree ) , which otherwise would behave as sinks in the diffusion process .width=528 ] traditionally , the choice of is set at ( ) .hence , we set and run the ranking scheme on networks of cricket teams and also on their captains . in table[ table1 ] , we report the results obtained from analysis of network of cricket teams for test cricket . we identify _ australia _ as the most successful team in history of test cricket . even though _ south africa _ was banned from playing international cricket from , it emerges as the second best team followed by _, _ west indies _ , _pakistan _ , _ india _ , _ sri lanka _ , _ new zealand _ , _zimbabwe _ and _ bangladesh_. table [ table2 ] shows the ranking of teams in history of odi cricket ( ) .australia _ emerges as the best odi team ever followed by _south africa _, _ west indies _ , _ england _ , _ pakistan _ , _ india _ , _ new zealand _ , _ sri lanka _ , _ zimbabwe _ and _ bangladesh_. 
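the ranking scheme of eqs . ( [ eq : pg ] ) is straightforward to implement numerically . the following minimal sketch applies the same diffusion process to a small invented head - to - head record ; the teams , match counts and the damping value q = 0.15 ( the conventional pagerank choice ) are illustrative assumptions and are not the actual data used in this paper .

```python
import numpy as np

# Minimal sketch of the ranking of Eqs. (pg): PageRank-style diffusion on a
# directed, weighted win/loss network.  A link j -> i carries weight w[i, j]
# equal to the fraction of matches that team i won against team j.  The
# head-to-head record below is invented purely for illustration.

teams = ["AUS", "SA", "ENG", "IND"]
# wins[i][j] = number of matches team i won against team j (hypothetical data)
wins = np.array([[0, 6, 7, 8],
                 [4, 0, 6, 5],
                 [3, 4, 0, 6],
                 [2, 5, 4, 0]], dtype=float)
played = wins + wins.T
w = np.divide(wins, played, out=np.zeros_like(wins), where=played > 0)

q = 0.15                                  # conventional damping parameter
n = len(teams)
out_strength = w.sum(axis=0)              # total credit each team redistributes

p = np.full(n, 1.0 / n)
for _ in range(200):                      # power iteration towards the steady state
    redistributed = np.zeros(n)
    for j in range(n):
        if out_strength[j] > 0:
            redistributed += (1 - q) * p[j] * w[:, j] / out_strength[j]
        else:                             # dangling node: spread its credit uniformly
            redistributed += (1 - q) * p[j] / n
    p = q / n + redistributed

for team, score in sorted(zip(teams, p), key=lambda pair: -pair[1]):
    print(f"{team}: {score:.3f}")
```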
the success of australia could be justified by the dominance of _ australia _ in international cricket for a long period of time ._ australia _ won test series in all the countries and also won four icc world cups in , , and . ) ., width=528 ] we also report the results obtained from the analysis of the network of competing captains(see table [ table3 ] ) ._ steve waugh _ heads the top list of most successful captains in test cricket .the success of _ steve waugh _ could be _ posteriori _ justified by the fact that he led _ australia _ in of their world - record successive test victories . over all_ steve waugh _ won of the test matches he captained .it is interesting to note that of the top captains are from _australia_. south africa s _ graeme smith _ emerges as the second best captain with _ricky ponting _ occupying the third position . from the subcontinent only india s _ m. s. dhoni _ and _ sourav ganguly _ finds a place in the top list .we also perform a similar analysis in odi cricket ( see table [ table4 ] ) . this time _ricky ponting _ emerges as the best captain in odi history , followed by _graeme smith _ ( south africa ) in second place and _ imran khan _ ( pakistan ) in the third ._ ricky ponting _s success as a captain in the odi format is marked by two successive world cup wins in and , with a world - record of consecutive undefeated world cup games . under his captaincy _ australia _ also won the champions trophy in and successfully defended the title in .contrary to the list in test cricket , several of the successful captains in the odi format are from the subcontinent .we also perform a different kind of analysis by constructing networks of teams and their captains in different eras . in table[ table5a ] and table [ table5b ] we report the ranking of teams in different era of test cricket respectively .we compare our ranking with reliance icc team ranking .the table of historical ranking of teams , available at icc s website( ) , begins from for test cricket and from for odi cricket .we rank the teams according to the average of the points scored by any team . during the period , _ australia _ emerged as the most successful team . between and _australia _ was the most successful team according to the pagerank algorithm and also icc s ranking scheme .during _ west indies _ was the best team according to icc ranking . even though the early 1960s were poor periods for england , during the late 60 s _ england _ defeated stronger opponents like _ west indies _ and _ australia_. hence judging by the quality of wins , according to pagerank during england was the most successful team .a similar effect is also observed during the era , where _ india _ occupies the second position according to pagerank . during the same period_ india _ defeated stronger opponents like _ west indies _ and _ england_. both ranking schemes show _ west indies _ was the best team between and .their best period was between february 1981 and december 1989 : in tests in that span , they had a - win - loss record , with victories against _ australia _ , _ england _ , _ new zealand _ and _ india_. during the same span , pakistan was victorious against quality opposition like _ australia _ , _ england _ , and _india_. we observe that both ranking schemes predict _ australia _ as the best team since then . 
the dominance of _ australia _ in both decades is also reflected in the fact that between october and november , they played tests , and won of them with - win - loss record .the ranking of other teams according to pagerank does not correspond to those of icc ranking . during _india _ occupies the third position according to pagerank score , instead of _west indies_. similarly , between and , _ india _ occupies the second position according to pagerank , whereas according to the icc ranking _ south africa _ occupies the second spot .we report a similar ranking of teams in odi cricket in different era in table [ table6a ] .we observe that _ west indies _ was the best team throughout the 70 s and 80 s .pagerank score shows that _ south africa _ was the best team in the 90 s and _ australia _ is the best team from . according to icc ranking _ australia _is the most successful team during the 1990s and also from .we observe strong correlation between pagerank score and reliance icc ranking and fraction of victories ( in - strength rank ) .we compare the overall ranking of teams playing test cricket ( ) and odi cricket ( ) .figure [ fig4](a ) shows that between and _ south africa _ is the best team according to pagerank score , where as _ australia _ is the best team according to reliance icc ranking .we observe strong correlation between the ranking schemes for odi cricket ( ) ( as shown in figure [ fig4](b ) ) .according to pagerank score and in - strength the top three positions in test cricket ( ) , are occupied by _australia _ , _ south africa _ and _ england _ respectively ( see figure [ fig4](c ) ) . in odi cricket ( ) , _ australia _ emerges as the best team according to pagerank score as well as in - strength . in figure [ fig5 ]we show the correlation among different ranking schemes as function of time .we provide a ranking of captains in test cricket ( table [ table5c ] ) and odi cricket ( table [ table6b ] ) in different era . between and _bill woodfull _ ( australia ) is the most successful captain with _ sir don bradman _ occupying the second position ._ richie benaud _ ( australia ) leads the list twice during and . during the period _ ian chappell _occupies the top position as captain , with _ clive lloyd _ occupying the second position . from west indieswas the most successful team and _ sir vivian richards _ was the most successful captain .mark taylor ( australia ) is the best captain between and and _ graeme smith _( south africa ) emerge as the best captain during . in odicricket australia s _ greg chappell _ emerge as the most successful captain between and ._ clive lloyd _ occupy the second position during that period .pakistan s _ imran khan _ leads the list during the era .south africa s _ hansie cronje _ was the most successful captain from . during the period _ ricky ponting _is the most successful captain followed by south africa s _ graeme smith _ and india s _ m.s.dhoni_. in figure [ fig6 ] we show the correlation among the two ranking schemes for captains ..*most successful teams in history of test cricket ( * ) .the teams are ranked according to the pagerank score of each team . 
[ cols="<,<",options="header " , ] [ table4 ] ccc * era * & * pagerank * & * reliance icc - ranking * + & & + & australia & + & england & + & west indies & + & south africa & + & new zealand & + & india & + & australia & australia + & england & england + & pakistan & west indies + & west indies & south africa + & south africa & pakistan + & india & india + & new zealand & new zealand + & england & west indies + & west indies & australia + & australia & england + & new zealand & south africa + & south africa & india + & india & pakistan + & pakistan & new zealand + & australia & australia + & india & england + & west indies & pakistan + & england & west indies + & pakistan & india + & new zealand & new zealand + [ table5a ] ccc * era * & * pagerank * & * reliance icc - ranking * + & west indies & west indies + & pakistan & pakistan + & australia & new zealand + & new zealand & australia + & england & india + & india & england + & sri lanka & sri lanka + & zimbabwe & zimbabwe + & australia & australia + & south africa & south africa + & india & west indies + & west indies & pakistan + & pakistan & india + & england & england + & new zealand & sri lanka + & sri lanka & new zealand + & zimbabwe & zimbabwe + & bangladesh & bangladesh + & australia & australia + & india & south africa + & south africa & india + & england & england + & sri lanka & sri lanka + & pakistan & pakistan + & new zealand & new zealand + & west indies & west indies + & zimbabwe & zimbabwe + & bangladesh & bangladesh + [ table5b ] ccc * era * & * pagerank * & * reliance icc - ranking * + & & + & west indies & + & australia & + & england & + & new zealand & + & pakistan & + & india & + & sri lanka & + & west indies & west indies + & australia & australia + & england & england + & pakistan & pakistan + & india & india + & new zealand & new zealand + & sri lanka & sri lanka + & zimbabwe & + & bangladesh & + & south africa & australia + & australia & south africa + & pakistan & pakistan + & england & west indies + & sri lanka & england + & west indies & india + & india & sri lanka + & new zealand & new zealand + & zimbabwe & zimbabwe + & bangladesh & bangladesh + & australia & australia + & south africa & south africa + & india & sri lanka + & sri lanka & pakistan + & pakistan & india + & new zealand & new zealand + & england & england + & west indies & west indies + & bangladesh & zimbabwe + & zimbabwe & bangladesh + [ table6a ] ) ; ( kendall , spearman correlation ) . * ( b ) * scatter plot between the rank positions obtained according to reliance icc ranking and those obtained with pagerank for odi cricket ( ) ; ( , ) . * ( c ) * scatter plot between the rank positions obtained according to in - strength and those obtained with pagerank for test cricket ( ) ; ( , ) . *( d ) * scatter plot between the rank positions obtained according to in - strength and those obtained with pagerank for odi cricket ( ) ; ( , ) ., width=528 ] ( blue ) , between the ranking based on pagerank and the one based on the reliance icc ranking , as function of time , for test matches ( ) . *( b ) * the correlation coefficients are calculated between the ranking based on pagerank and the one reliance icc ranking for odi matches ( ) . * ( c ) * the correlation coefficients are calculated between the ranking based on pagerank and in - strength for test matches ( ) . 
*( d ) * the correlation coefficients are calculated between the ranking based on pagerank and in - strength for odi matches ( ).,width=528 ] ccc * era * & * top five captains * & * country * + & bill woodfull & australia + & sir donald bradman & australia + & john goddard & west indies + & sir gubby allen & england + & normal yardley & england + & richie benaud & australia + & gulabrai ramchand & india + & peter may & england + & abdul kardar & pakistan + & lindsay hassett & australia + & richie benaud & australia + & sir frank worrell & west indies + & bob simpson & australia + & ted dexter & england + & sir garry sobers & west indies + & ian chappel & australia + & clive lloyd & west indies + & greg chappell & australia + & ray illingworth & england + & mike denness & england + & sir vivian richards & west indies + & allan border & australia + & greg chappell & australia + & clive lloyd & west indies + & geoff howarth & new zealand + & mark taylor & australia + & hansie cronje & south africa + & allan border & australia + & mike atherton & england + & steve waugh & australia + & graeme smith & south africa + & ricky ponting & australia + & steve waugh & australia + & m. s. dhoni & india + & sourav ganguly & india + [ table5c ] ccc * era * & * top five captains * & * country * + & greg chappell & australia + & clive lloyd & west indies + & geoff howarth & new zealand + & mike brearley & england + & sunil gavaskar & india + & imran khan & pakistan + & sir vivian richards & west indies + & kapil dev & india + & allan border & australia + & javded miandad & pakistan + & hansie cronje & south africa + & arjuna ranatunga & sri lanka + & mohammad azharuddin & india + & wasim akram & pakistan + & richie richardson & west indies + & ricky ponting & australia + & graeme smith & south africa + & m. s. dhoni & india + & stephen fleming & new zealand + & mahela jayawardene & sri lanka + [ table6b ] ) ; ( kendall , spearman correlation ) . * ( b ) * scatter plot between the rank positions obtained according to in - strength and those obtained with pagerank for odi cricket ( ) ; ( , ) . ,our work demonstrates the strength of social network analysis methods in quantifying the success of cricket teams and their captains .here we have created a directed and weighted network of contacts ( i.e , teams and captains ) .the correct assessment of a team s success ( or captain s success ) needs the consideration of the entire network of interaction .the pagerank algorithm takes into account the quality of matches won .for example , a win against a strong team is more important than a win against a weak team .also a captain is as good as the team . 
in this sense ,a win against _ clive lloyd _ , _ steve waugh _ or _ graeme smith _ is more relevant than a win against a lesser captain .our analysis shows that pagerank algorithm is effective in finding the most successful team and captain in the history of cricket .it should be noted that success of a team or a captain depends on various factors like home advantage , success of batsmen and bowlers .for example , australia s dominance in both forms of the game is a manifestation of the fact that they are able to adjust in all kinds of pitches around the world , whereas subcontinent teams always played well under subcontinent conditions but were not able to repeat their performance abroad on a consistent basis .our analysis does not require these ` external ' factors which are usually taken into account in icc rankings .however , we would like to mention that our method does not aim to replace the icc ranking .it suggests a novel approach to refine the existing ranking scheme .we would like to state that cricket is a team game .success or failure of a team depends on the collective performance of all team members .simple statistics like runs scored by batsmen , wickets taken by bowlers or exceptional fielding does not provide a reliable measure of a player s contribution to the team s cause . quantifying the impact of player s individual performance in sportshas been a topic of interest in soccer ( ) and baseball ( ) .however , in cricket the rules of the game are different and therefore it would be interesting to apply tools of network analysis on interaction between players .for example , a contact network of batsman bowler could give an estimate of the greatest batsman ( bowler ) ever .potentially , a quantitative approach to a player s performance could be used to estimate the man of the match ( series ) award after a tournament .we thank the cricinfo website for the public availability of information on cricket matches .we also gratefully acknowledge helpful discussions with rufaro mukogo , david mertens and xiaohan zeng .borooah , v. k. and j. e. mangan ( 2010 ) : `` the `` bradman class '' : an exploration of some issues in the evaluation of batsmen for test matches , 1877 - 2006 , '' _ journal of quantitative analysis in sports _ , 6 .
|
we consider all test matches played between and and all one day international ( odi ) matches played between and . we form directed and weighted networks of teams and also of their captains . the success of a team ( or captain ) is determined by the ` quality ' of its wins and not by the number of wins alone . we apply the diffusion - based pagerank algorithm to these networks to assess the importance of wins and thus rank the teams and the captains , respectively . our analysis identifies _ australia _ as the best team in both forms of cricket , test and odi . _ steve waugh _ is identified as the best captain in test cricket and _ ricky ponting _ as the best captain in the odi format . we also compare our ranking scheme with existing ranking schemes , including the reliance icc ranking . our method does not depend on ` external ' criteria in the ranking of teams ( captains ) . the purpose of this paper is to introduce a revised ranking of cricket teams and to quantify the success of the captains .
|
numerical path integral methods have proved to be highly useful tools in the analysis of finite temperature , many - body quantum systems. a central theme in such studies is the conscious use of dimensionality , both in the reformulation of the original problem and in the subsequent numerical simulations .as the scale of the problems under study continues to grow , it becomes increasingly important that the formal properties of the numerical methods that are utilized be properly characterized .recently , predescu and co - workers have presented a number of results concerning the convergence properties of random series - based path integral techniques .important in their own right , these formal properties have also led to the development of a new class of path integral methods , the so - called reweighted techniques. reweighted approaches accelerate the convergence of `` primitive '' series methods by including the effects of `` higher - order '' path variables in a simple , approximate fashion .reweighted methods achieve the convergence rate of related partial averaging approaches without requiring the construction of the gaussian transform of the underlying potential energy function .previous work on the reweighted method has focused principally on the construction of the quantum - mechanical density matrix. in the present work , we wish to examine estimators for various coordinate - diagonal and off - diagonal properties .while the present discussion is focused principally on reweighted methods , the results obtained are broadly applicable to more general random series approaches . in section ii of the present article, we examine the thermodynamic ( t - method ) and direct ( h - method ) estimators for the total energy . in order to avoid any confusion with earlier estimators, we mention that in the present article by t - method and h - method estimators we understand the respective energy estimators introduced by predescu and doll in ref . .thus , the t - method estimator we employ does not have the variance difficulties associated with the barker estimator for large numbers of path variables. as the low - temperature simulation presented in the second part of the article demonstrates , the present t - method estimator does not exhibit any of the difficulties sometimes associated with the virial estimator for low - temperature systems or for strongly correlated monte carlo sampling techniques. the t - method estimator is closely related and similar in form to the centroid virial estimator. we expect the two estimators to have similar behavior with the nature of the quantum system , the temperature , and the monte carlo sampling method. however , an important difference between the two estimators is the fact that the t - method estimator is a veritable thermodynamic estimator , in the sense that it is obtained by temperature differentiation of the quantum partition function .this observation is important because the temperature differentiation can be implemented numerically by a finite - difference scheme and , in principle , may lead to numerically stable algorithms that do not require derivatives of the potential . for large dimensional systems or systems described by complicated potentials , we expect such algorithms to be significantly faster than those based on explicit analytical formulas .the relative merits of such algorithms will be examined in future work . 
in section iii, we examine the application of the reweighted methods to a model problem , that of simulating the thermodynamic properties of the ( h molecular cluster .in section iv , we summarize our present findings and clarify a number of issues raised in previous studies of this molecular hydrogen system. this section , we consider a one - dimensional quantum canonical system characterized by inverse temperature and set forward the task of computing its average energy by monte carlo integration methods developed around several reweighted techniques. the physical system is made up of a particle of mass moving in the potential .we discuss the numerical implementation and the computational merits of both the t - method and h - method estimators .any time the multidimensional extension is not obvious , we present the explicit formulas of the respective estimators . we begin by presenting the general form of the path integral methods we employ in this paper .we remind the reader that in terms of a standard brownian motion , the feynman - ka formula has the expression \\ & & \times \mathbb{e}\left[e^{-\beta \int_0 ^ 1 v(\sigma b_u){d}u } | \sigma b_1 = x ' , \sigma b_0 = x \right],\end{aligned}\ ] ] where . in this paper , we shall use the symbol to denote the expected value ( average value ) of a certain random variable against the underlying probability measure of the brownian motion .it is straightforward to see that the first factor of the product in eq .( [ eq:1a ] ) ( which represents the conditional probability density that the rescaled brownian motion reaches the point provided that it starts at the point ) is the density matrix of a free particle of mass = \rho_{fp}(x , x';\beta).\ ] ] moreover , rather than using the conditional expectation appearing in the second factor of eq .( [ eq:1a ] ) , one usually employs a stochastic process , called a standard brownian bridge, which is defined as a standard brownian motion conditioned on the end points such that and . in terms of the newly defined process , the feynman - ka formula reads {d}u\right\},\ ] ] where is a straight line connecting the points and and is called the reference path . as discussed in ref ., one of the most general constructions of the standard brownian bridge is given by the ito - nisio theorem. let be a system of functions on the interval ] .let if is the space of infinite sequences and =\prod_{k=1}^{\infty}{d}\mu(a_k)\ ] ] is the probability measure on such that the coordinate maps are independent identically distributed ( i.i.d . ) variables with distribution probability then i.e. , the right - hand side random series is equal in distribution to a standard brownian bridge .the notation in ( [ eq:2a ] ) is then appropriate and allows us to interpret the brownian bridge as a collection of random functions of argument , indexed by .using the ito - nisio representation of the brownian bridge , the feynman - ka formula takes the form \nonumber \exp\bigg\{-\beta \int_{0}^{1}\ ! \ !v\big[x_r(u ) \\ & + & \sigma \sum_{k=1}^{\infty}a_k \lambda_k(u ) \big]{d}u\bigg\}.\end{aligned}\ ] ] for a multidimensional system , the feynman - ka formula is obtained by employing an independent random series for each additional degree of freedom . 
a reweighted method constructed from the random series is any sequence of approximations to the density matrix of the form {d}u\bigg\},\qquad\end{aligned}\ ] ] where and are some fixed integers , where and where in eq .( [ eq:5 ] ) , indexes the sequence of reweighted approximations , sequence that converges to the density matrix in the limit .remark that the approximation of index actually utilizes variables for path parameterization . in the construction of a certain path ,the first functions coincide with the ones for the corresponding series representation , as shown by eq .( [ eq:6 ] ) .a number of additional functions are constructed so that to maximize the order of convergence of the reweighted approximation .notice that if the resulting approximation has a convergence of order as measured against , then it has the same order of convergence when measured against the total number of variables , though the convergence constant is times larger .this explains why the number of additional functions is chosen to scale linearly with . for additional information ,the reader is advised to consult ref . .it is convenient to introduce the additional quantities and , which are defined by the expressions {d}u\bigg\}\qquad\end{aligned}\ ] ] and {d}u\bigg\},\qquad\end{aligned}\ ] ] respectively .with the new notation , eq .( [ eq:5 ] ) becomes (x , x',\bar{a};\beta),\end{aligned}\ ] ] while the feynman - ka formula reads \infty(x , x',\bar{a};\beta).\end{aligned}\ ] ] the analytical expressions of the functions depend on the nature of the reweighted techniques and are generally chosen to maximize the asymptotic convergence of the respective reweighted techniques. to a large extent , the specific form of these functions is not important for the present development , but the reader is advised to consult refs . and for quadrature techniques and additional clarifications .the remainder of the present section is split into two parts .first , we discuss the problem of computing the ensemble averages of operators diagonal in coordinate representation .in particular , this resolves the problem of computing the average potential energy .second , we consider the problem of evaluating the total energies ( hence , also the kinetic energies ) by means of the t - method and h - method estimators . by definition ,the ensemble average of an operator diagonal in the coordinate representation is the quantity is the diagonal density matrix . by convention, we drop the second variable of the pair any time .for instance , we use instead of . by means of eq .( [ eq:11 ] ) , the average above can be recast as \infty(x,\bar{a};\beta)o(x)}{\int_{\mathbb{r } } { d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta)}.\ ] ] this average can be recovered as the limit of the sequence (x,\bar{a};\beta)o(x)}{\int_{\mathbb{r } } { d}x \int_{\omega}{d}p[\bar{a}]x_n(x,\bar{a};\beta)},\ ] ] the terms of which are to be evaluated by monte carlo integration .the estimating function appearing in the above formula is called the point estimating function of the operator .an alternative to the point estimating function is the so - called path estimating function , the derivation of which is presented shortly .as demonstrated in appendix a , the function appearing in eq .( [ eq:13 ] ) can be replaced by ] . 
in the limit , the variance of the point estimating function converges to \infty(x,\bar{a};\beta)o(x)^2}{\int_{\mathbb{r } } { d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta)}-\left\langle o \right\rangle_{\beta}^2 \\ & & = \frac{\int_{\mathbb{r}}{d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta)\int_0 ^ 1 o[x+ \sigma b_{u}^0(\bar{a})]^2 { d}u}{\int_{\mathbb{r } } { d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta ) } \\ & & \ \ \ \\ \ \ \ \ \ -\left\langle o \right\rangle_{\beta}^2,\end{aligned}\ ] ] while the variance of the path estimating function converges to \infty(x,\bar{a};\beta)\left\{\int_0 ^ 1 o[x+ \sigma { b}_{u}^0(\bar{a } ) ] { d}u\right\}^2}{\int_{\mathbb{r } } { d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta ) } -\left\langle o \right\rangle_{\beta}^2.\ ] ] the cauchy - schwartz inequality implies { d}u\right\}^2 \leq \int_0 ^ 1 o[x+ \sigma { b}_{u}^0(\bar{a})]^2 { d}u\ ] ] and shows that the variance of the path estimating function is always smaller than that of the point estimating function .the actual decrease in the variance is not always significant because the points for different are strongly correlated .depending on the nature of the function , the variance decrease may not compensate the effort required to compute the average { d}u ] , which also enters the expression of , is computed anyway . to summarize the findings of the present subsection, the point estimator provides a more accurate value but has a larger variance than the path estimator .we next ask if there are any methods for which one may construct an estimator providing the same values as the point estimator but having the variance of the path estimator .more precisely , we seek methods for which there is a division such that the mesh converges to zero as and such that the points have the same marginal distribution as . for such methods , the expected value of the estimating function (u_{i+1}-u_{i})\ ] ] under the probability distribution given by eq .( [ eq:18 ] ) is an estimator satisfying the criteria outlined in this paragraph .there are two methods we employ in the present paper for which such an estimator exists .the first one , is the trapezoidal trotter discrete path integral method ( tt - dpi ) obtained by the trotter composition of the short - time approximation .\ ] ] it has been shown that for , the tt - dpi method admits the following implementation + 1}(u_i)a_{l,[2^{l-1 } u_i]+1}\right]\right\},\end{aligned}\ ] ] where for and the functions are the so - called schauder functions, the definitions of which are presented in the cited references .we leave it for the reader to use eq .( [ eq:20 ] ) and show that if , then all the points have identical marginal distribution given by the formula in this case , the point and the path estimators produce identical results for the ensemble average of a diagonal operator at least for the ensemble average of the potential energy , one should always use the path estimator , which has smaller variance . 
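the variance ordering implied by the cauchy - schwartz inequality is easy to observe in a toy simulation . the sketch below samples the physical coordinate and the path variables for a one - dimensional harmonic oscillator with a plain metropolis walk and compares the point and path estimating functions for the potential energy ; the basis , quadrature and step sizes are illustrative assumptions , no attempt is made to reproduce the reweighted techniques discussed next , and for this primitive truncation both estimating functions still carry a systematic error .

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy comparison of the point and path estimating functions for <V>, with
# V(x) = x^2/2, m = hbar = omega = 1 and beta = 1.  Paths are parameterized by
# a truncated random series (sine basis, as in the previous sketch); (x, a) is
# sampled by Metropolis with weight exp(-|a|^2/2 - beta * int_0^1 V du).

beta, sigma = 1.0, 1.0                   # sigma = sqrt(hbar^2 * beta / m)
n_terms, n_quad = 16, 32
u = (np.arange(n_quad) + 0.5) / n_quad   # midpoint quadrature on [0,1]
k = np.arange(1, n_terms + 1)
basis = np.sqrt(2.0) * np.sin(np.outer(u, k * np.pi)) / (k * np.pi)

def V(x):
    return 0.5 * x * x

def log_weight(x, a):
    path = x + sigma * (basis @ a)
    return -0.5 * a @ a - beta * np.mean(V(path)), path

x, a = 0.0, np.zeros(n_terms)
logw, path = log_weight(x, a)
point_samples, path_samples = [], []

for step in range(200_000):
    x_new = x + 0.5 * rng.uniform(-1.0, 1.0)
    a_new = a + 0.25 * rng.uniform(-1.0, 1.0, n_terms)
    logw_new, path_new = log_weight(x_new, a_new)
    if np.log(rng.random()) < logw_new - logw:    # Metropolis acceptance test
        x, a, logw, path = x_new, a_new, logw_new, path_new
    if step >= 20_000 and step % 10 == 0:         # crude burn-in and thinning
        point_samples.append(V(x))                # point estimating function
        path_samples.append(np.mean(V(path)))     # path estimating function

for name, s in (("point", np.asarray(point_samples)), ("path ", np.asarray(path_samples))):
    print(f"{name} estimator: mean = {s.mean():.3f}, variance = {s.var():.3f}")
```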
a second method for which there is an estimator giving the same values as the point estimator but having ( asymptotically , as ) the variance of the path estimator is the so - called lvy - ciesielski reweighted technique ( rw - lcpi ) defined by the formula + 1 } \;\tilde{f}^{(n)}_{l,[2^{l-1 } u]+1}(u)\right]{d}u\right\},\end{aligned}\ ] ] where ] ) followed by a separate attempt to move the rest of the path variables associated with the particle .both the physical coordinates and the path variables are moved in a cube centered about the old coordinates : and where the three components of are independent uniformly distributed random numbers on the interval ] .different streams are identified by different identification numbers .the streams have been initialized once at the beginning of the simulation with different seeds .given the streams of independent random numbers , the monte carlo simulation proceeds as follows .for each stream , one performs an independent simulation consisting of blocks .these blocks are preceded by equilibration blocks , which are needed to bring the system into probable configurations but do not contribute to the averages of the estimating functions .for the wiener - fourier reweighted method with , we use independent streams of blocks each , for a total of million passes .the equilibration phase consists of blocks for each stream .ideally , the length of the individual streams should be chosen to be sufficiently large , that the averages of the computed property for different streams are independent and normally distributed , as dictated by the central limit theorem .this requirement is satisfied by all simulations we have performed .we have collected individual averages for all blocks and streams and performed several statistical tests verifying the applicability of the central limit theorem as well as the independence between the block averages of same or different streams .let denote the block - averages of the property for stream and block ( the rw - wfpi simulation for has been analyzed in a similar fashion ) .under the assumption that the size of the blocks is large enough so that the correlation between different block - averages is negligible and under the assumption that the block - averages for different streams are highly uncorrelated , the values should have a gaussian distribution centered around the average value with variance the validity of this assumption can be verified with the help of the shapiro - wilks normality test. if the collection of samples does not pass the test , it does not necessarily follow that the samples are not independent , as their distribution is normal only if the size of the blocks is sufficiently large . at a significance level of 5%, we do not reject the gaussian distribution hypothesis for all computed average properties . 
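the first of these statistical tests is easily automated ; the sketch below applies the shapiro - wilk test to a matrix of block averages and also compares the spread of the stream averages with the value expected from the central limit theorem . the stream and block counts and the numerical values are synthetic stand - ins , not the actual simulation output .

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Sketch of the first test: pool the block averages X_{s,b} (stream s, block b)
# and test them for normality with the Shapiro-Wilk test at the 5% level.
# Synthetic Gaussian data are used here in place of the real block averages.

n_streams, n_blocks = 40, 250                          # illustrative sizes
block_averages = -17.7 + 0.8 * rng.standard_normal((n_streams, n_blocks))

samples = block_averages.ravel()
# Shapiro-Wilk is intended for moderate sample sizes; test a random subsample.
stat, p_value = stats.shapiro(rng.choice(samples, size=2000, replace=False))
print(f"Shapiro-Wilk: W = {stat:.4f}, p = {p_value:.3f}")
print("reject normality at 5%" if p_value < 0.05 else "do not reject normality at 5%")

# the stream (row) averages should be Gaussian around the grand mean, with a
# standard deviation reduced by sqrt(number of blocks per stream)
stream_means = block_averages.mean(axis=1)
print(f"grand mean = {samples.mean():.3f}, "
      f"std of stream means = {stream_means.std(ddof=1):.3f}, "
      f"CLT expectation ~ {samples.std(ddof=1) / np.sqrt(n_blocks):.3f}")
```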
to within the statistical significance of our calculations , the samples can be assumed to be independent and to have a gaussian distribution . a second set of tests consists in verifying that the row and column averages of have gaussian distributions centered around with variances and , respectively . the validity of this distribution follows from the central limit theorem and the assumption that the samples are independent and have a gaussian distribution characterized by the average and the variance . it is important to emphasize that the row averages must pass this test . as previously discussed , the number of blocks in a stream should be sufficiently large so that the row averages have the required distribution even if the independent samples do not have a gaussian distribution . again , under the assumption of independence only , the row averages should have a gaussian distribution centered around and have variance for a sufficiently large number of blocks . we have employed the kolmogorov - smirnov test to compare the distributions of the row and column averages with the theoretical gaussian distributions . for all computed average properties , we find that the respective distributions are identical at a statistical significance level of 5% . the agreement for the distribution of the row averages is evidence that the streams generated by the dynamic creator package are sufficiently independent , whereas the agreement for the distribution of the column averages is evidence that the block averages of the same streams are independent . for the third set of tests , we have considered two time - series and obtained by concatenating the rows of the matrix and the columns , respectively . we have then studied the autocorrelation of the two time series for a maximum lag of . the correlation coefficients for a lag are computed with the formula where if . under the independence hypothesis of the samples , the statistics of the correlation coefficients is normal with average zero and standard deviation . moreover , the correlation coefficients can be regarded as independent samples of this normal distribution . by means of the binomial distribution , one computes the probability that at most a given number of correlation coefficients lie outside the interval ] , and so , at a level of significance of 5% , the hypothesis that the are independent samples of a normally distributed variable of mean zero and standard deviation should be rejected if or more correlation coefficients lie outside the interval ] . the observed numbers of such coefficients for the two series are and , respectively ( of course , the points are not counted ) , which is below this threshold . consequently , the simulation passes our third statistical test . in fact , all the simulations performed have passed this statistical test for all computed properties . we conclude that the correlation between the block averages of the same or different streams is negligible .
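the third test can be sketched as follows ; synthetic , uncorrelated data are used in place of the actual block averages , and the rejection threshold quoted in the comments is our own rough binomial estimate rather than the value used in the text .

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of the third test: lag-k autocorrelation coefficients of the
# concatenated block-average series, compared against the +/- 2/sqrt(n) band
# expected for independent samples.  Synthetic data stand in for the real
# series obtained by concatenating the rows (or columns) of the block matrix.

series = rng.standard_normal(10000)
n, max_lag = len(series), 100
mean = series.mean()
var = np.sum((series - mean) ** 2)

def autocorr(k):
    return np.sum((series[:n - k] - mean) * (series[k:] - mean)) / var

r = np.array([autocorr(k) for k in range(1, max_lag + 1)])
band = 2.0 / np.sqrt(n)
n_outside = int(np.sum(np.abs(r) > band))

# Under independence each r_k is ~ N(0, 1/n), so P(|r_k| > 2/sqrt(n)) ~ 4.6%;
# a rough binomial estimate then puts the 5% rejection threshold near nine or
# more excursions out of 100 lags (our estimate, not the value from the text).
print(f"{n_outside} of {max_lag} coefficients outside +/- {band:.4f}")
```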
by the central limit theorem , the statistical error in the determination of the average of the property is where is defined by eq .( [ eq:3.3 ] ) .( for the statistical error , we employ the value , corresponding to an interval of 95% confidence .the 5% probability that the results lie outside the confidence interval is chosen to agree with the level of significance of the statistical tests ) .the analysis performed in the present subsection demonstrates that the streams generated by the dynamic creator algorithm have negligible correlation at least for our purposes .a separate advantage in the use of independent streams is to overcome the phenomenon of quasiergodicity, which might appear in monte carlo simulations whenever the distribution that is sampled has several well defined minima that are separated by walls of high energy . in this case, the random walker may be trapped in one of the wells and never sample the others , or sample them with the wrong frequency .the monte carlo simulation may pass all the aforementioned statistical tests but still produce the wrong results .for our system , the probability that such a situation may occur is quite low because the system is highly quantum mechanical with strong barrier tunneling .moreover , the independent streams have been initialized randomly in configuration space .this makes it unlikely that all the streams are trapped precisely into the same local minimum or group of local minima .evidence for quasiergodicity may be captured in the form of a few outlying averages among the stream averages .such outlying averages have not been observed .ccccccc & & & & & & + + 4 & -57.66 0.05 & -16.63 0.18 & -82.14 0.07 & -61.72 0.12 & 24.48 0.02 & 45.09 0.15 + + 8 & -37.61 0.05 & -17.77 0.16 & -64.74 0.06 & -53.07 0.11 & 27.13 0.02 & 35.29 0.13 + + 16 & -25.68 0.04 & -18.28 0.13 & -54.27 0.06 & -49.33 0.10 & 28.60 0.03 & 31.06 0.11 + + 32 & -20.23 0.04 & -18.00 0.12 & -49.66 0.06 & -48.05 0.10 & 29.42 0.03 & 30.05 0.11 + + 64 & -18.29 0.04 & -17.85 0.11 & -48.19 0.06 & -47.86 0.09 & 29.90 0.03 & 30.01 0.11 + + 128 & -17.75 0.04 & -17.64 0.12 & -47.83 0.06 & -47.81 0.09 & 30.08 0.03 & 30.17 0.11 + + 256 & -17.71 0.04 & -17.70 0.12 & -47.85 0.07 & -47.87 0.10 & 30.14 0.03 & 30.17 0.12 + + ccccccc & & & & & & + + 3 & -70.46 0.06 & 18.24 0.20 & -93.47 0.07 & -69.03 0.09 & 23.01 0.02 & 87.27 0.19 + + 7 & -44.08 0.05 & -10.81 0.15 & -71.03 0.06 & -55.08 0.08 & 26.94 0.02 & 44.28 0.14 + + 15 & -29.84 0.04 & -15.84 0.12 & -58.33 0.06 & -49.10 0.07 & 28.50 0.02 & 33.26 0.12 + + 31 & -22.76 0.04 & -17.40 0.10 & -51.95 0.06 & -47.83 0.06 & 29.19 0.03 & 30.43 0.11 + + 63 & -19.50 0.04 & -17.68 0.10 & -49.15 0.06 & -47.69 0.06 & 29.65 0.03 & 30.01 0.11 + + 127 & -18.25 0.04 & -17.68 0.10 & -48.20 0.06 & -47.80 0.06 & 29.95 0.03 & 30.11 0.11 + + 255 & -17.84 0.04 & -17.65 0.11 & -47.93 0.07 & -47.85 0.07 & 30.09 0.03 & 30.20 0.12 + + ccccccc & & & & & & + + 3 & -68.54 0.05 & 78.08 0.30 & -89.88 0.07 & -89.88 0.07 & 21.34 0.02 & 167.97 0.32 + + 7 & -45.29 0.05 & 7.22 0.19 & -70.88 0.06 & -70.88 0.06 & 25.58 0.02 & 78.10 0.21 + + 15 & -30.61 0.04 & -12.52 0.13 & -58.53 0.06 & -58.53 0.06 & 27.92 0.02 & 46.01 0.15 + + 31 & -22.95 0.04 & -16.86 0.11 & -51.99 0.06 & -51.99 0.06 & 29.04 0.03 & 35.14 0.12 + + 63 & -19.55 0.04 & -17.66 0.10 & -49.19 0.06 & -49.19 0.06 & 29.65 0.03 & 31.53 0.11 + + 127 & -18.29 0.04 & -17.70 0.10 & -48.27 0.06 & -48.27 0.06 & 29.97 0.03 & 30.57 0.11 + + 255 & -17.86 0.04 & -17.71 0.11 & -47.94 0.07 & -47.94 0.07 & 30.07 0.03 & 30.23 
0.12 + + the computed averages for all methods and estimators utilized are presented in tables [ table : rw - wfpi ] , [ table : rw - lcpi ] , and [ table : lc - ttpi ] . for a given number of path variables , the rw - wfpi , rw - lcpi , and tt - dpi methods utilize , , and quadrature points , respectively . [ for a discussion of the minimal number of quadrature points and of the nature of the quadrature schemes that must be employed for the first two methods , the reader should consult ref . . for the rw - wfpi method , we have utilized gauss - legendre quadrature points , though a number of points would have sufficed . ] the observed overall computational times for the three methods have followed the ratios , even though the time necessary to compute the paths is proportional to for the first method and to for the other methods . the computation of the paths takes full advantage of the vector floating point units of modern processors and is dominated by the calls to the potential , except for very large . as discussed in ref . , the asymptotic convergence for the reweighted techniques is expected to be cubic , even for the lennard - jones potential , which is not included in the class of potentials for which cubic convergence has been demonstrated formally . we find that the asymptotic convergence is attained only for very large , as one may see by comparing for example the total , potential , and kinetic energies computed with the help of the t - method estimator for the rw - lcpi and the tt - dpi methods . even if the latter method has only asymptotic convergence , the two methods produce almost equal results . in fact , a numerical analysis of the relationship in which the left - hand side quantity is plotted against for different values of suggests that , while the methods have converged within the statistical error , none of the three methods includes sufficiently large values of to attain the ultimate asymptotic rate of convergence . when comparing the values of the h - method energy estimator and of the related potential and kinetic estimators for the three path integral techniques , one notices that the rw - lcpi technique provides better values than the tt - dpi method . the h - method estimator has a better behavior if used together with a reweighted technique . this behavior is consistent with the analysis we have performed in section ii on the values of the potential point - estimators and the excellent values found with the rw - wfpi method . for the reweighted techniques , the h - method estimator provides better energy values than the t - method estimator . this is also true of the potential and kinetic parts of the estimators . however , the variance of the h - method estimator is significantly larger than the variance of the t - method estimator , and the difference is even more pronounced if one compares the corresponding kinetic estimators . as discussed in section ii.a , the path estimator for the potential energy has a smaller variance than the point estimator . indeed , the results from table [ table : rw - wfpi ] show that the variance of the path estimator is approximately times smaller than the variance of the point estimator .
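as a rough illustration of the convergence analysis mentioned above , the sketch below extracts an effective convergence exponent from energies obtained with an increasing number of path variables . the input values are simply the entries of the first column of table [ table : rw - wfpi ] ( read here as the t - method total energies , with the largest available value used as the reference ) , so the output is illustrative only ; the last estimate is dominated by statistical noise because the corresponding energy already agrees with the reference within its error bar .

```python
import numpy as np

def effective_order(n_values, energies, e_ref):
    """Running estimate of alpha in |E(n) - E_ref| ~ n**(-alpha).

    Each successive pair of points gives one slope on a log-log plot; the
    asymptotic rate is reached only once these estimates stabilize.
    """
    n = np.asarray(n_values, dtype=float)
    err = np.abs(np.asarray(energies, dtype=float) - e_ref)
    return -np.diff(np.log(err)) / np.diff(np.log(n))

# first-column energies of table [table:rw-wfpi]; reference is the 256-variable row
n_vals = [4, 8, 16, 32, 64, 128]
e_vals = [-57.66, -37.61, -25.68, -20.23, -18.29, -17.75]
print(effective_order(n_vals, e_vals, e_ref=-17.71))
```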
in the case of the rw - lcpi and tt - dpi methods , we have employed the estimator given by eq . ( [ eq:24 ] ) and the path estimator , respectively . these were shown to produce values identical to the point estimator but have the variance of the path estimator . for the rw - wfpi and rw - lcpi methods , the point and the path estimators produce different results . due to the very design of the reweighted techniques , we have argued that the point estimator results should be the more accurate ones . this theoretical prediction is well supported by the values presented in tables [ table : rw - wfpi ] and [ table : rw - lcpi ] . while we have argued that the h - method estimator is a better estimator in terms of value ( but not necessarily in terms of variance ) than the t - method estimator for the reweighted methods , it is apparent from table [ table : lc - ttpi ] that the same difference persists for the trapezoidal trotter scheme . as discussed before , for the tt - dpi method , the point and path estimators provide the same value for the average potential . as opposed to the reweighted techniques , the h - method kinetic estimator is less accurate than the t - method kinetic energy estimator . quite interestingly , even if individually the potential and the kinetic parts are more accurate for the t - method estimator , it is the h - method energy estimator that provides a more accurate total energy . clearly , a strong compensation of errors appears in the case of the h - method estimator . such a compensation of errors is generally characteristic of variational methods . in this respect , notice that the tt - dpi density matrices are positive definite because they are obtained by lie - trotter composing a certain symmetrical short - time approximation . by the ritz variational principle , the h - method energy estimator can not have a value smaller than the ground - state energy . thus , the ritz variational principle provides some control on the values of the h - method estimator , but not on the individual components , nor on the t - method estimator . the rw - lcpi density matrices are also positive definite for and indeed , the energy provided by the h - method estimator is still better than what the values of the potential and kinetic parts suggest . while a final resolution awaits further study , it is apparent that this finding is not related to the asymptotic rate of convergence of the path integral technique . among the three methods presented , the rw - wfpi has the fastest convergence for all properties studied . moreover , for and , there is a good agreement ( within statistical noise ) between the t- and the h - method energy estimators , as well as between their potential and kinetic energy components . for , one concludes that the systematic error is smaller than the statistical error for all properties computed . an additional rw - wfpi simulation with in million monte carlo passes has produced results consistent with the findings above . the results are summarized in table [ tab:1 ] and represent the energy values we report . .estimated energies in k / molecule for the cluster computed with the help of the wiener - fourier reweighted technique using path variables and million monte carlo passes .
listed are the average potential , kinetic , and total energies calculated with the help of the t - method ( left column ) and h - method ( right column ) estimators . the reported errors are two standard deviations . in the present work we have considered a number of issues related to the choice of estimators for random series path integral methods . we have illustrated our results by applying them to the problem of computing various thermodynamic properties of a model of the ( h cluster using reweighted path integral techniques . the molecular hydrogen cluster is a strongly quantum mechanical system and is representative of the type of problems one is likely to encounter in many applications . hence , it constitutes a useful benchmark for present and future path integral techniques , and for this reason it is important that its physical properties be determined within advertised statistical error bars . path integral methods capable of dealing with such highly quantum - mechanical systems in an efficient manner are needed , both for reliable determinations of the physical properties of the respective systems as well as for accurate parameterizations of the intermolecular potentials . we wish to make a number of points concerning the present results and the methods we have utilized to obtain them . at a more general level , we would like to emphasize that the reweighted path integral methods discussed here provide a broadly applicable , simple , and formally well characterized set of techniques . as demonstrated by the present results , they are capable of producing high - quality numerical results for problems of appreciable physical complexity . moreover , they do so without the assumption of a particular form for the underlying microscopic forces . furthermore , the estimators described in the present paper are convenient , accurate , and easily implemented for any random series approach . as discussed in section iii , when used together , the t and h - method estimators provide an important consistency check on the quality of the path integral simulations . such consistency checks are a valuable element in judging the reliability of particular simulations . as previously mentioned , the cluster application discussed here provides a convenient test bed for the development of numerical methods . for this reason , we have exercised due diligence with respect to the quality of our final results summarized in table [ tab:1 ] . as discussed in section iii , we have subjected both the parallel random number generator employed and the numerical results obtained to a series of quality - control tests . beyond these statistical checks , it is important to note that there is an internal consistency check on the quality of the present results . specifically , as is evident in tables [ table : rw - wfpi ] , [ table : rw - lcpi ] , and [ table : lc - ttpi ] , the kinetic , potential , and total energies from the three different path integral approaches ( trapezoidal trotter , reweighted lévy - ciesielski , and reweighted wiener - fourier ) all agree . it is also important to note in this context that , while the presently computed total energies agree with those reported by chakravarty _et al._ , the individual kinetic and potential energies do not .
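the t - method / h - method consistency check mentioned above amounts to asking whether the two estimates agree within their combined statistical uncertainties ; a minimal helper is sketched below , using the last row of table [ table : rw - wfpi ] as an example ( the error bars are the two - standard - deviation values quoted in the tables ) .

```python
def consistent(e_t, err_t, e_h, err_h):
    """True if two estimates agree within their combined error bars.

    The errors are taken to be ~2 standard deviations, as in the tables,
    so adding them in quadrature gives a conservative acceptance window.
    """
    return abs(e_t - e_h) <= (err_t ** 2 + err_h ** 2) ** 0.5

# T-method vs H-method total energy, last row of table [table:rw-wfpi]
print(consistent(-17.71, 0.04, -17.70, 0.12))   # True -> the estimators agree
```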
the kinetic energy reported by chakravarty _ _et al.__ is approximately 0.8 k / particle higher than found in the present simulations ( with the potential energy being correspondingly lower ) .the magnitude of this difference is well outside the statistical error bars involved and appears to signal a systematic error .based on the observed consistency between the results produced by three different path integral methods and on the agreement between the t and h - method estimators for each of these path integral formulations , we feel confident of the results we have reported in table [ tab:1 ] ._ note : _ after the present simulations had been completed , we have learned from d. m. ceperley that the off - diagonal pair density used as the starting point in the simulations reported in ref .was truncated at first order in the expansion of off - diagonal displacements instead of second order and that the inclusion of this second order term resolves the kinetic and potential energy difference noted above .the authors acknowledge support from the national science foundation through awards no .che-0095053 and che-0131114 .they also wish to thank the center for advanced scientific computing and visualization ( tcascv ) at brown university , especially dr .james odell , for valuable assistance with respect to the numerical simulations described in the present paper .they would also like to thank mr .cristian diaconu for helpful discussions concerning the present work .finally , the authors would like to express a special thanks to professor david ceperley for continuing discussions concerning the present simulations and for his efforts in tracking down the origin of the pair density issues noted in section iv .the main purpose of this section is to give a compact form for the integral \infty(x , x',\bar{a};\beta)o[x_r(\theta)+ \sigma b_\theta^0(\bar{a})],\end{aligned}\ ] ] where is an arbitrary point in the interval $ ] . 
in terms of a standard brownian motion [ see eq .( [ eq:1a ] ) ] , the above integral can be put into the form \mathbb{e } \left[e^{-\beta \int_0 ^ 1 v(\sigma b_u){d}u } o(\sigma b_\theta ) \big| \sigma b_1 = x ' , \sigma b_0 = x \right ] \\ & & = \int_{\mathbb{r } } { d}y o(y ) p \left[\sigma b_1 = x ' ,\sigma b_\theta = y \big| \sigma b_0 = x \right]\mathbb{e } \left[e^{-\beta \int_0 ^ 1 v(\sigma b_u){d}u } \big| \sigma b_1 = x ' , \sigma b_\theta = y , \sigma b_0 = x \right].\end{aligned}\ ] ] using the markov property of the brownian motion , one readily justifies the equalities \\ & & = p\left[\sigma b_1 = x'\big| \sigma b_\theta = y \right ] p\left [ \sigma b_\theta = y \big| \sigma b_0 = x \right ] \qquad \\ & & \nonumber = \rho_{fp}(x , y , \theta \beta ) \rho_{fp}[y , x ' ; ( 1-\theta)\beta]\end{aligned}\ ] ] and \\ & & = \mathbb{e } \left[e^{-\beta \int_0^\theta v(\sigma b_u){d}u } \big| \sigma b_\theta = y , \sigma b_0 = x\right ] \\ & & \times \mathbb{e } \left[e^{-\beta \int_{\theta}^1 v(\sigma b_u){d}u } \big| \sigma b_1 = x ' , \sigma b_\theta = y\right].\nonumber\end{aligned}\ ] ] performing the transformation of coordinates in the second factor of the right - hand side of eq .( [ eq : a4 ] ) and employing the invariance of the brownian motion under time translation one obtains \\ = \mathbb{e } \left[e^{-\beta \int_0^\theta v(\sigma b_u){d}u } \big| \sigma b_\theta = y , \sigma b_0 = x\right ] \\ \times \mathbb{e } \left[e^{-\beta \int_0^{1-\theta } v(\sigma b_u){d}u } \big| \sigma b_{1-\theta } = x ' ,\sigma b_0 = y\right].\nonumber\end{aligned}\ ] ] let us focus on the term .\ ] ] performing the substitution of variables and employing the scaling property of the brownian motion one proves \\ & & \nonumber = \mathbb{e } \left[e^{-\beta \theta \int_0 ^ 1 v(\sigma \theta^{1/2 } b_u){d}u } \big| \sigma \theta^{1/2 } b_1 = y , \sigma \theta^{1/2 } b_0 = x\right ] \\ & & = \rho(x , y ; \theta \beta)/\rho_{fp}(x , y ; \theta \beta).\end{aligned}\ ] ] in a similar fashion , one demonstrates that \\ = \rho\left[y , x ' ; ( 1-\theta ) \beta\right]\big/\rho_{fp}\left[y , x ' ; ( 1-\theta ) \beta\right].\end{aligned}\ ] ] we now combine eqs .( [ eq : a1 ] ) , ( [ eq : a2 ] ) , ( [ eq : a3 ] ) , ( [ eq : a5 ] ) , ( [ eq : a6 ] ) , and ( [ eq : a7 ] ) to obtain \infty(x , x',\bar{a};\beta)o[x_r(\theta)+ \sigma b_\theta^0(\bar{a } ) ] \\ = \int_{\mathbb{r } } \rho(x , y;\theta \beta ) \rho[y , x ' ; ( 1-\theta)\beta ] o(y ) { d}y.\end{aligned}\ ] ] with the help of eq .( [ eq : a8 ] ) and by cyclic invariance ,\infty(x,\bar{a};\beta)o[x+ \sigma b_\theta^0(\bar{a } ) ] \\ & & \nonumber = \int_{\mathbb{r}}{d}x\int_{\mathbb{r } } { d}y \rho(x , y;\theta \beta ) \rho[y , x ; ( 1-\theta)\beta ] o(y ) \\ & & = \int_{\mathbb{r } } { d}y \rho(y , y;\beta ) o(y ) \\ & & \nonumber = \int_{\mathbb{r}}{d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta)o(x).\end{aligned}\ ] ] moreover , since the function is arbitrary , the last identity also implies that the random variables and have identical distribution functions under the probability measure }{\int_{\mathbb{r}}{d}x \int_{\omega}{d}p[\bar{a}]x_\infty(x,\bar{a};\beta)}.\ ] ] by setting in eq .( [ eq : a8 ] ) , one obtains the well - known product formula \infty(x , x',\bar{a};\beta)\\ = \int_{\mathbb{r } } \rho(x , y;\theta \beta ) \rho[y , x ' ; ( 1-\theta)\beta ] { d}y,\end{aligned}\ ] ] which is seen to be a consequence of some basic properties of the brownian motion . 
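the brownian bridge entering the expressions above is , in the reweighted wiener - fourier construction , represented by a random series ; a small sketch of the standard sine - series ( wiener - fourier ) form is given below . the specific series used here is the textbook one and is assumed rather than taken from the equations above .

```python
import numpy as np

def wiener_fourier_bridge(a, theta):
    """Truncated Wiener-Fourier series for a standard Brownian bridge on [0, 1].

    B0(theta) ~ sum_k a_k * sqrt(2) * sin(k*pi*theta) / (k*pi), with the a_k
    independent standard normal variables; the series is cut at len(a) terms.
    """
    k = np.arange(1, len(a) + 1)
    return np.sum(a * np.sqrt(2.0) * np.sin(k * np.pi * theta) / (k * np.pi))

rng = np.random.default_rng(1)
a = rng.standard_normal(64)                      # 64 "path variables"
theta = np.linspace(0.0, 1.0, 11)
path = np.array([wiener_fourier_bridge(a, t) for t in theta])
print(path[0], path[-1])                         # both endpoints are pinned at 0
```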
99 _ quantum monte carlo methods in physics and chemistry _ , edited by m. p. nightingale and c. j. umrigar , ( kluwer , drodrecht , 1999 ) . c. predescu and j. d. doll , j. chem . phys . * 117 * , 7448 ( 2002 ) . c. predescu , j. d. doll , and david l. freeman , _ asymptotic convergence of the partial averaging technique ,_ e - print : http://arxiv.org/abs/cond-mat/0301525 . c. predescu , _ reweighted methods : definition and asymptotic convergence ,_ e - print : http://arxiv.org/abs/cond-mat/0302171 .j. d. doll , r. d. coalson , and d. l. freeman , phys .55 * , 1 ( 1985 ) ; r. d. coalson , d. l. freeman , and j. d. doll , j. chem . phys . * 85 * , 4567 ( 1986 ) . c. predescu , d. sabo , and j. d. doll , _ numerical implementation of some reweighted path integral techniques ,e - print : http://arxiv.org/abs/cond-mat/0305436 . m. f. herman , e. j. bruskin , and b. j. berne , j. chem . phys . * 76 * , 5051 ( 1982 ) .a. giansanti and g. jacucci , j. chem . phys .* 89 * , 7454 ( 1988 ) .p. a. fernandes , a. p. carvalho , and j. p. p. ramalho , j. chem .phys . * 103 * , 5720 ( 1995 ) .j. cao and b. j. berne , j. chem .phys . * 91 * , 6359 ( 1989 ) .m. kolar and s. f. oshea , j. phys .a * 29 * , 3471 ( 1996 ) .w. janke and t. sauer , j. chem . phys . * 107 * , 5821 ( 1997 ) .d. m. ceperley , rev .mod . phys . ,* 67 * , 279 ( 1995 ) . k. r. glaesemann and l. e. fried , j. chem116 * , 5951 ( 2002 ) .c. chakravarty , m. c. gordillo , and d. m. ceperley , j. chem .phys . * 109 * , 2123 ( 1998 ) .j. d. doll and david l. freeman , j. chem .* 111 * , 7685 ( 1999 ) . b. simon , _ functional integration and quantum physics _( academic , london , 1979 ) .r. durrett , _ probability : theory and examples , _( duxbury , new york , 1996 ) , pp . 430 - 431 .s. kwapien and w.a .woyczynski , _ random series and stochastic integrals : single and multiple _( birkhuser , boston , 1992 ) , theorem 2.5.1 . c. predescu and j. d. doll , phys .e * 67 * , 026124 ( 2003 ) .h. p. mckean jr . , _ stochastic integrals _( academic , new york , 1969 ) .p. sindzingre , d. m. ceperley , and m. l. klein , phys .* 67 * , 1871 ( 1991 ) .j. p. neirotti , d. l. freeman , and j. d. doll , phys .e * 62 * , 7445 ( 2000 ) .n. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. m. teller , and e. teller , j. chem .* 21 * , 1087 ( 1953 ) .m. kalos and p. whitlock , _ monte carlo methods _ ( wiley - interscience , new york , 1986 ) .m. matsumoto and t. nishimura , acm transactions on modeling and computer simulation * 8 * , 3 ( 1998 ) .m. matsumoto and t. nishimura , _ dynamic creation of pseudorandom number generators _ , in _monte carlo and quasi - monte carlo methods 1998 _ , ( springer - verlag , new york , 2000 ) , pp 5669 . `http://www.math.keio.ac.jp/~matumoto/emt.html ` .s. s. shapiro and m. b. wilk , biometrika , * 52 * , 591 ( 1965 ) .w. h. press , s. a. teukolsky , w. t. vetterling , and b. p. flannery , _ numerical recipes _( cambridge university , cambridge , 1992 ) ch .j. p. valleau and s. g. whittington , in _ statistical mechanics _ , edited by b. j. berne ( plenum , new york , 1977 ) ch . 4 , p. 145
|
we perform a thorough analysis on the choice of estimators for random series path integral methods . in particular , we show that both the thermodynamic ( t - method ) and the direct ( h - method ) energy estimators have finite variances and are straightforward to implement . it is demonstrated that the agreement between the t - method and the h - method estimators provides an important consistency check on the quality of the path integral simulations . we illustrate the behavior of the various estimators by computing the total , kinetic , and potential energies of a molecular hydrogen cluster using three different path integral techniques . statistical tests are employed to validate the sampling strategy adopted as well as to measure the performance of the parallel random number generator utilized in the monte carlo simulation . some issues raised by previous simulations of the hydrogen cluster are clarified .
|
the applicability of the weibull distribution to describe the fracture strength of brittle and quasi - brittle materials has been a topic of intense debate . several experimental studies argue that the weibull distribution is not always the best statistical distribution to fit fracture data ( numerous others argue otherwise ) , particularly for quasi - brittle materials that have significant precursor damage . these observations demand a general theoretical explanation . the suggested explanations for these empirical observations include bimodal or multimodal flaw size distributions , r - curve behavior , small size of the datasets , and thermally activated crack nucleation . here we provide a general explanation for these observations by showing that the weibull distribution is unstable in the renormalization group sense for quasi - brittle materials , and thus not applicable at long length scales . in deriving the weibull distribution of fracture strengths it is invariably assumed that the material volume has a population of non - interacting crack - like defects , and fracture happens as soon as the weakest of these defects starts to grow . this assumption is also known as the ` weakest - link hypothesis ' . experimental observations suggest that this assumption does not hold for a large class of quasi - brittle materials . these materials , including paper , granite , antler bone , wood , and composites , typically `` crackle '' , suggesting that several local cracks grow and get arrested prior to global fracture . advanced composites are designed to fail gracefully , that is , they have multiple failures before the ultimate fracture . it is clear that for such materials the weakest defect does not dominate the fracture properties of the material , and the defects interact via elastic fields . the emergent scale - invariant properties of these interactions have been a topic of intense study in the statistical physics community . several researchers have used the weibull theory to model these quasi - brittle materials . we show that even if the microscopic strength distribution is weibull , the emergent distribution is significantly distorted due to elastic interactions and metastability . thus , the weibull distribution is not stable in the renormalization group sense . we provide numerical evidence to support our theoretical claims . for brittle materials such as glasses or ceramics that fracture catastrophically without precursor damage , it is assumed that the distribution of fracture strength can be derived from the distribution of flaw sizes by using griffith's criteria ( or equivalently the stress intensity approach ) and ignoring effects such as crack bridging or coalescence . for exponentially distributed cracks the fracture strength is expected to be described by the duxbury - leath - beale ( dlb ) distribution , while only in the case of power law distributed cracks does one expect to obtain the weibull distribution . it is , however , challenging to measure the flaw size distribution experimentally , and thus these assumptions are rarely verified empirically . one of the aims of this paper is to use numerical simulations to show that the simple relations that are widely used in the literature are not accurate , and that further study is needed to understand the discrepancy . this observation has important implications for materials engineers who aim to improve the fracture properties of brittle materials by controlling the micro - structure .
in light of the above discussed limitations of the weibull theory , what distribution should one use to fit fracture data ? to answer this question , we consider two classes of fuse networks to model brittle and quasi - brittle materials . both of these models are derived from the classical fuse network models . in the model for brittle materials , the fuse network is seeded with power law distributed cracks with varying morphology . this is different from the classical diluted fuse network model , which has an exponential distribution of cracks . the model for quasi - brittle materials has a continuous distribution of fuse strengths , where each fuse strength is a random number drawn from a standard weibull distribution . in this manner we can ensure that the microscopic strength distribution is weibull , and study the emergent macroscopic distribution . this model differs from its counterparts in the literature by the choice of the microscopic disorder , and enables a numerical study of the stability of the weibull distribution . analyzing the simulations , we find that the recently proposed t - method provides a suitable alternative to fit the numerical data . the method is general enough that it can be applied in a wide variety of cases . the rest of the paper is organized as follows . section [ sec : weibulltheory ] presents the basics of the classical weibull theory and the commonly used relation between the strength distribution and the defect size distribution . the details of the numerical models used in this study are discussed in section [ sec : rfm ] . section [ sec : weibstab ] presents theoretical and numerical evidence to show that the weibull distribution is unstable under coarse graining for quasi - brittle materials . in section [ sec : mod ] we present numerical evidence to show that the relation between the strength distribution and the flaw size distribution is nontrivial , and can not be obtained by a straightforward application of griffith's criteria . we discuss the possible sources of the observed discrepancy . section [ sec : tmethod ] presents a comparison of the performance of the weibull distribution and the recently proposed t - method for fitting the simulation data for quasi - brittle fuse networks . the conclusions are presented in section [ sec : conc ] . in this section we review the classical weibull theory in order to facilitate the discussion in the following sections . we consider a material volume subjected to a stress field . the material is supposed to have a density of defects of various shapes and sizes , such that is the probability of not finding a defect with critical stress less than in a volume of the material . here we assume that the stress is uniaxial and tensile ; the case of full tensorial stress is similar and is not presented here to avoid unnecessary notational complexity . the volume is supposed to be large enough that it contains a sufficient number of cracks , and yet small enough that the stress can be considered roughly constant across it ; it is sometimes also called the representative volume element .
is supposed to be a homogeneous material property . then , the probability that the material volume will survive the stress field is given by weibull recognized that taking , where is a material - dependent scale parameter and is the material - dependent weibull modulus , gave a good fit for several brittle materials , and introduced what is now known as the standard weibull distribution it turns out that the empirical choice made by weibull can be justified by a renormalization group calculation in which one writes recursive equations describing the failure distribution as the scale is changed . the weibull distribution is one of the possible fixed points of the renormalization group transformation . the weibull distribution can alternatively be derived by connecting the function to the microscopic defect size distribution . the basic calculation outlined in the remainder of this section can be found in a number of important references . according to griffith's criteria , a crack of length is stable at applied normal stress if where is the geometry factor of the crack , and , the critical stress intensity factor , is a material property . the exponent of is applicable for ideally sharp cracks , and can have a different value for wedge - shaped or blunted cracks . thus , if we take to be the probability that the volume element does not contain any crack longer than , then we have if the defect size ( crack length ) distribution is a power law with exponent , then , which gives , where . thus , a power law defect size distribution with exponent leads to a weibull distribution of fracture strength with modulus . as pointed out before , this entire analysis assumes that the flaws do not interact , and that the failure of the weakest flaw leads to the failure of the entire material volume . we also show in section [ sec : weibstab ] that relaxing the assumption that the weakest flaw leads to global failure has important consequences and results in the strength distribution flowing away from the weibull form . our numerical calculations reported in section [ sec : mod ] show that crack bridging is an important form of crack interaction that can significantly alter the resulting weibull modulus away from the dilute limit ( but does not change the weibull form for power law distributed cracks ) . in this section we describe the computational model that we use for various classes of brittle and quasi - brittle materials . the theoretical arguments presented in later sections benefit from having a concrete model as a point of reference . we study several variants of the basic two dimensional random fuse model ( rfm ) . the rfm is a well accepted model of brittle fracture where each fuse represents a coarse grained material region ( analogue of the classical representative volume element ) . the model consists of a set of conducting fuses with unit conductivity and breaking threshold , arranged on a 45-tilted square lattice composed of nodes . a unit voltage drop is applied along two parallel edges of the lattice while periodic boundary conditions are imposed along the other two edges . the kirchhoff equations are solved numerically using the algorithm proposed in ref . to determine the current flowing in each of the fuses . we then evaluate the ratio between the current and the breaking threshold , and the fuse having the largest value , , is irreversibly removed ( burnt ) . the current is redistributed instantaneously after a fuse is burnt .
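a schematic sketch of the burning loop just described ( and restated in the next paragraph ) is given below . the kirchhoff solve is passed in as a callable , since the real solver is the algorithm cited in the text ; the toy stand - in used in the example ignores current concentration entirely and only exercises the bookkeeping .

```python
import numpy as np

def burn_lattice(thresholds, solve_kirchhoff):
    """Quasi-static random fuse model loop: burn one fuse at a time.

    thresholds      -- breaking threshold of each fuse
    solve_kirchhoff -- callable mapping the boolean 'intact' mask to the
                       current carried by each fuse for a unit voltage drop
    Returns the order in which fuses burn.
    """
    intact = np.ones(len(thresholds), dtype=bool)
    order = []
    while intact.any():
        currents = solve_kirchhoff(intact)
        if np.allclose(currents[intact], 0.0):       # network no longer conducts
            break
        ratios = np.where(intact, np.abs(currents) / thresholds, -np.inf)
        worst = int(np.argmax(ratios))               # largest current/threshold ratio
        intact[worst] = False                        # burn it irreversibly
        order.append(worst)
    return order

# toy stand-in solver: every intact fuse carries unit current
toy_solver = lambda intact: intact.astype(float)
rng = np.random.default_rng(2)
print(burn_lattice(rng.weibull(2.0, size=10), toy_solver))
```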
each time a fuse is burnt , it is necessary to re - calculate the current distribution in the lattice . the process of burning fuses , one at a time , is repeated until the lattice system fails completely ( becomes non - conductive ) . the random fuse model is equivalent to a scalar elastic problem where we consider a pure anti - plane shear deformation . in this condition , the shear stress is related to the total current by , the shear strain to the voltage drop by , and the conductivity is equivalent to the shear modulus . from the breaking sequence we can derive the current - voltage ( or stress - strain ) curve of the network under adiabatic loading as discussed in ref . . in this study we employ two different disorder distributions to model quasi - brittle and brittle materials : * _ weibull disorder ( quasi - brittle ) . _ the fuse strength threshold is chosen to be a random variable drawn from a weibull distribution with modulus , thus the survival probability of a fuse at applied stress is . fuse networks with continuously distributed strengths have been studied previously . in those studies the thresholds were drawn from the uniform , power law , and hyperbolic distributions . however , the focus of those studies was on the morphology and dynamic properties , while we focus on strength . further , by letting the local thresholds be weibull distributed , we can directly study the stability of the weibull distribution under coarse graining . * _ diluted cracks ( brittle ) . _ we remove a fraction of the fuses and assign the same breaking threshold (= 1 ) to the intact fuses . we take , thus keeping the initial damage fairly dilute in order to avoid the phenomena that occur near the percolation threshold ( at for the tilted square lattice we are using ) . note that the missing fuses are not chosen randomly , but rather in a way that they form a set of cracks with power law distributed crack lengths with , where is the exponent of the power law . we employ both straight and fractal flaws , grown by using self - avoiding random walks . fuse networks with diluted cracks were originally studied in refs . however , in those studies the crack lengths had an exponential distribution ( as opposed to a power law ) . an exponential distribution of defect sizes leads to a gumbel - type distribution of strengths , and such models are thus markedly different from ours . for each case , we do extensive statistical sampling for network sizes . the standard weibull distribution reported in eq . [ eq : weibull ] is derived under the assumption that the failure of the weakest flaw ( or representative volume element ) leads to complete global failure . under this assumption , if the strength distribution of the representative element is standard weibull with modulus , i.e. , then the survival probability of the material volume is given by in mathematical terms , we can say that the weibull distribution is stable under coarse graining : a system composed of subsystems described by the weibull distribution is itself described by the weibull distribution .
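a quick numerical check of this coarse - graining statement : under the weakest - link rule , the minimum of n independent standard weibull variables with modulus m is again weibull with the same modulus and a scale reduced by a factor n^(1/m) . the sketch below verifies this on synthetic thresholds ; the sample sizes are arbitrary .

```python
import numpy as np

def weakest_link_check(m, n_sub, n_samples=200_000, seed=3):
    """Compare the minimum of n_sub Weibull(m) variables with the prediction.

    Weakest link gives P_surv(s) = exp(-n_sub * s**m), i.e. a Weibull
    distribution with the same modulus m and scale n_sub**(-1/m).
    """
    rng = np.random.default_rng(seed)
    mins = rng.weibull(m, size=(n_samples, n_sub)).min(axis=1)
    empirical = np.quantile(mins, [0.1, 0.5, 0.9])
    predicted = (-np.log([0.9, 0.5, 0.1])) ** (1.0 / m) * n_sub ** (-1.0 / m)
    return empirical, predicted

print(weakest_link_check(m=2.0, n_sub=64))
```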
as we mentioned earlier , however , the weakest - link assumption is not accurate in quasi - brittle materials . we can then derive the condition for eq . [ eq : simweibull ] to remain valid if this assumption is relaxed . the stress at which the weakest flaw fails scales as . the failure of this volume element enhances the stress on its neighbors due to stress concentration . however , the neighbors of the weakest flaw are typically not very weak , and we safely assume that their strength is near the mean strength . assuming that the stress concentration factor scales as , where is a geometry factor , the neighboring volume element fails if which yields as the approximate condition for the validity of the weakest link hypothesis . outside of this range the local failure of the weakest link does not trigger global failure . in this calculation we have ignored details and made several simplifications , thus it only gets the correct scaling . the above arguments show that the weakest link hypothesis is self - consistent , and the weibull distribution is stable under coarse - graining , only if the weibull modulus is large enough , . clearly , the strength distribution flows away from the weibull distribution in the limit of . the typical ranges for the weibull modulus are for metals , for ceramics , and for biomaterials such as nacre . it is clear that for materials with small to moderate values of ( such as biomaterials ) the applicability of weibull analysis is questionable . indeed , the weakest link hypothesis is manifestly false : these materials exhibit significant precursory fracture events ( crackling noise ) before failure . figure [ fig : weibullflow ] shows the emergent strength distribution for fuse networks of various sizes where the fuse threshold is taken from a weibull distribution with . we choose such a high value of to show the crossover away from weibull ; for smaller values of , the distribution has already flowed away from weibull even for the smallest networks that we can simulate . according to the weibull theory , the emergent distribution of strength would be given by eq . [ eq : simweibull ] with ( there are fuses ) , thus giving . figure [ fig : weibullflow ] shows that while this prediction holds for small values of , the distribution flows away from the weibull distribution at longer lengths . this shows that the weibull distribution is unstable to disorder in a renormalization group sense , and must be used with caution for quasi - brittle materials . ( caption of figure [ fig : weibullflow ] : emergent strength distribution , with for the threshold distribution ; if the emergent distribution were weibull , it would follow the solid black line ; the distribution flows away from weibull at long length scales , showing that the weibull distribution is not stable in a renormalization group sense . ) we have established that the strength distribution flows away from weibull in quasi - brittle materials , but what does it flow towards ? it is an unsolved problem to compute the new emergent distribution of strengths analytically . however , to get some idea about the distribution , we compute a very simple - minded upper bound to the survival probability for the fuse network model . from eq . [ eq : stressintensity ] , at any given stress the length of the critical crack goes as ( i.e. , a crack longer than will have unstable growth ) . if the fuse strength threshold is standard weibull , then the probability of having a crack of size at any given lattice site is at least .
since there are lattice sites , the global probability of survival is at most making asymptotic expansions for small , we get if we take the slowly varying to be a constant , then the above expression is reminiscent of the dlb distribution . the factor of can be removed in a more natural way if one takes into account the stress concentration at each step of crack growth ( see ref . for a similar treatment ) . our observation is supported by experimental results for some quasi - brittle materials where the dlb distribution was found to fit the data better than the weibull distribution . since the upper bound that we have established decays faster than any weibull function at , the macroscopic survival probability can not be of the weibull form , even if the microscopic distribution is weibull . note that the arguments made here are fairly general , and thus we expect the macroscopic strength distribution for any material with significant precursor damage to deviate from the weibull distribution . we have confirmed that these ideas are consistent with the results of our numerical simulations . figure [ fig : weibull ] shows the survival probability obtained by statistical sampling of fuse networks with different values of . the main plot in the figure shows that the survival probability is consistent with a dlb distribution . if instead the survival probability were consistent with a weibull distribution , then the insets in the figure ( so - called weibull plots ) would be straight lines . however , the plots show considerable curvature , suggesting a deviation from the weibull distribution at long length scales . ( caption of figure [ fig : weibull ] , panels a and b : the main figures show the dlb test , while the insets show the weibull test ; straight lines indicate agreement with the tested form . ) the dlb distribution was originally associated with samples having an exponential distribution of crack lengths . we have confirmed this hypothesis in our simulations by measuring the crack length distribution just before catastrophic failure . the result , reported in figure [ fig : weibull - crack ] , indeed shows the presence of an exponential tail . ( caption of figure [ fig : weibull - crack ] , panels a and b : the distribution is a power law with an exponential tail for all values of . ) it is widely assumed that the emergent weibull modulus for brittle materials can be derived by using griffith's criteria if the crack length distribution is known . this assumption has been used in several important studies . however , it has never been verified empirically due to experimental challenges . we examine this assumption numerically by simulating fuse networks seeded with power law distributed cracks . cracks are created by removing a certain fraction , , of fuses from the network . the net density of cracks , , is kept low ( ) to mimic materials such as glasses or ceramics where the density of micro cracks is small . the critical effects associated with approaching the percolation threshold are also avoided by keeping small .
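the crack seeding described here and in section [ sec : rfm ] requires drawing crack lengths from a power law ; a minimal inverse - transform sampler is sketched below . the exponent , the cutoffs , and the target damage fraction are illustrative placeholders , not the values used in the simulations .

```python
import numpy as np

def sample_powerlaw_lengths(gamma, l_min, l_max, target_fraction, n_fuses, seed=4):
    """Draw integer crack lengths with P(l) ~ l**(-gamma) until the removed
    fuses amount to roughly target_fraction of the lattice."""
    rng = np.random.default_rng(seed)
    a, b = l_min ** (1.0 - gamma), l_max ** (1.0 - gamma)
    lengths, removed = [], 0
    while removed < target_fraction * n_fuses:
        u = rng.random()
        # inverse-transform sampling of a truncated continuous power law
        l = int(round((a + u * (b - a)) ** (1.0 / (1.0 - gamma))))
        lengths.append(l)
        removed += l
    return lengths

cracks = sample_powerlaw_lengths(gamma=2.5, l_min=1, l_max=64,
                                 target_fraction=0.05, n_fuses=128 * 128)
print(len(cracks), sum(cracks))
```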
unlike the classical fuse network models , the removed fuses are chosen so as to generate a power law distribution of crack lengths ( section [ sec : rfm ] ) . we derive the strength distribution based on the standard assumption of griffith's criteria and compare the result to simulations . according to griffith's theory , if the exponent of the power law distribution of crack lengths is , then for eq . [ eq : defectdensity ] we have , giving , where . this yields the following weibull distribution of strengths for a fuse network of linear size and ` volume ' ( assuming uniform stress ) thus the weibull modulus is given by . the above discussion assumes that the flaw distribution does not change at all during the fracture process . in real materials , as well as in our fuse network model , there is at least a small amount of damage before catastrophic fracture . this damage can change the tail of the crack width distribution . let , be the exponents of the crack size distribution before loading and at peak load , respectively . we investigate the relation between , , , and numerically . we find in our simulations that . further , we find that the modulus of the emergent weibull distribution is related to the damage distribution at peak load , . figure [ fig : powerlaw]a shows the comparison of the crack size distribution at zero and peak load for . figure [ fig : powerlaw]b shows the corresponding survival probability on a so - called weibull plot . the slope of the weibull plots agrees well with . ( caption of figure [ fig : powerlaw ] : a ) the exponent at peak load is larger than the initial one ; b ) the corresponding survival distribution obeys the weibull law with . ) thus , the standard assumption of is incorrect . we further explore the relation between and by carrying out extensive numerical simulations for , and . we also investigate the effect of the shape of the initial cracks . we seed the network either with straight cracks , or with fractal - looking cracks grown by using self - avoiding random walks . in both cases we maintain the width distribution , , and the defect density as dictated by . figure [ fig : linexpos ] shows the relation between and for various values of for straight as well as grown cracks . for all the cases we observe that . it is reasonable to expect a slight increase in the exponent due to crack bridging . however , it is not clear what causes the almost three times increase in for some configurations . ( caption of figure [ fig : linexpos ] : and at peak load , ; for linear - grown cracks , the relation depends strongly on and , while for random - walk - grown cracks we find , for both investigated dilution parameters and . ) except for the case of power law distributed cracks , we see that the strength distribution is not weibull , and is probably of the dlb type . in a previous paper we discussed how the extreme value functions are an extremely poor approximation to the dlb form . these considerations raise the following question : what form should be used to fit fracture data in practice ? one of the major concerns while fitting data to extreme value distributions is the accuracy of extrapolations in the low probability tail . we compare the standard weibull theory and the recently proposed t - method by fitting the fracture data for the quasi - brittle fuse networks with the two techniques . in weibull theory ( eq .
[ eq : weibull ] ) the survival probability of the network is given by . given the observed data vector (= vector of fracture strengths observed in simulation ) of length , the parameters are determined by using maximum likelihood estimation ( mle ) as the values that maximize the following log - likelihood function the parameters that maximize the above log - likelihood function give the best - fit parameters for the weibull theory . the t - method first applies a nonlinear transformation to the data , , and then fits the transformed data to an extreme value form , thus giving the following log - likelihood function where the parameters are estimated by minimization , and is the standard gumbel distribution . we use the dataset of over 20,000 simulations , corresponding to the random fuse model with and , to test the applicability of the above method for such extrapolations . we choose 20 random samples of 200 data points from the dataset . we then fit each of the smaller data sets using the weibull theory and the t - method . we extrapolate the fits and compare the predictions in the low probability tail with the empirical data . figure [ fig : transformation ] shows the standard deviation predictions of such fits . it is clear from the figure that the t - method outperforms the standard weibull theory in the accuracy of the fit and its extrapolation in the low stress tail . ( caption of figure [ fig : transformation ] : 1 standard deviation results . ) in conclusion , we have studied the conditions for the emergence of the weibull distribution for fracture strength in brittle and quasi - brittle materials . we show that the weibull distribution is unstable under coarse graining for a large class of materials where the weakest link hypothesis is not strictly valid and there is significant precursor damage . for the case of brittle materials we show that the relation between the strength distribution and the defect size distribution is highly non - trivial and can not be obtained by simple application of griffith's criteria . crack bridging has a significant effect on the tails of the crack size distribution , and thus changes the weibull modulus considerably . we find that the recently proposed t - method does a significantly better job at fitting the fracture strength data , as compared to the weibull distribution . we hope that our results will lead to further research and discussion about the applicability of the weibull distribution for fracture data , particularly for quasi - brittle materials that crackle . we thank zoe budrikis and claudio manzato for valuable comments and fruitful discussions . z.b. and s. z. were supported by the european research council through the advanced grant 2011 sizeffects . was partially supported by the miller institute for basic research in science , berkeley . j.p.s . and a.s . ( partially ) were supported by doe - bes de - fg02 - 07er46393 . j.p.s .
and s.z. acknowledge support from materials world network : cooperative activity in materials research between us investigators and their counterparts abroad in italy ( nsf dmr 1312160 ) .
|
the statistical properties of fracture strength of brittle and quasi - brittle materials are often described in terms of the weibull distribution . the weakest - link hypothesis , commonly used to justify it , is however expected to fail when fracture occurs after significant damage accumulation . here we show that this implies that the weibull distribution is unstable in a renormalization group sense for a large class of quasi - brittle materials . our theoretical arguments are supported by numerical simulations of disordered fuse networks . we also find that for brittle materials such as ceramics , the common assumption that the strength distribution can be derived from the distribution of pre - existing micro - cracks by using griffith s criteria is invalid . we attribute this discrepancy to crack bridging . our findings raise questions about the applicability of weibull statistics to most practical cases .
|
in a cryptography protocol , we discuss the possibility that two remote parties , conventionally called alice and bob , exchange a secret random key to implement a secure encryption - decryption algorithm , without meeting each other . once the sharing is achieved , the two parties can secretly exchange a message over the public channel by encrypting it with a key with a length equal to the message . in the key distribution with classical transmission lines , an eavesdropper can freely sneak into the transmissions and monitor the information . thus the role of cryptography is to provide some mathematical procedure that makes it computationally difficult for the eavesdropper to reproduce the key from the information sent through the transmission lines . however , no existing classical cryptosystem has proven to present sufficient difficulty to an eavesdropper . in particular , it was shown that some of them can be broken in principle by quantum computation . on the other hand , quantum mechanics ( qm ) and its fundamental concepts were previously restricted to clarifying and describing very special issues of physics . recently , these concepts have been extensively used in such concrete problems as the distribution of secure keys between two parties . in 1984 , bennett and brassard proposed a way of distributing the keys in a physically secure way , by using quantum physics . their protocol bears the acronym bb84 , and was the first protocol of quantum cryptography , but we shall use the more precise name of quantum key distribution ( qkd ) . in 1991 , ekert proposed a qkd protocol that used entangled particles , and he stated that the violation of bell's inequality might be the physical principle that ensures security . in the quantum mechanical approach to key distribution , security is based on the laws of qm and not on the unproven complexity of a mathematical problem , as was the case with classical cryptography . in these protocols eavesdropping by eve has been considered , and it has been shown that they are secure against various types of eve's attacks ( for a comprehensive review see ) . after the earlier qkd protocols , various extensions were proposed , for example : a key distribution protocol in which alice distributes the key among different bobs , in such a way that only with the participation of all bobs can the keys be retrieved . continuous variable key distributions are among these generalizations . qkd protocols have been demonstrated over the past fifteen years through an ingenious series of experiments by several research groups . many encouraging experiments have demonstrated qkd , some spanning more than a hundred kilometers through optical fibers . hence , the possibility of a secure key distribution in the presence of noisy channels and decoherence has been considered . recently , a new approach to qkd was proposed by zhang , li and guo ( zlg ) , who proposed a qkd protocol that uses a quantum key to encode and decode the classical information , with a previously shared reusable bell state acting as the quantum key . in this paper , we show that their protocol is insecure against eve's attack and that with a probability of , eve can be informed of alice and bob's keys without being detected by them . hence , we show that their extensions to higher dimensions and to quantum secret sharing based on reusable ghz states are not secure . this paper is organized as follows .
in sec .ii we briefly review some properties of qkd protocols in which data exchange between alice and bob is done , using orthogonal states . in sec .iii we review the original zhang _ et al . _ protocol . in sec .iv we show that this the protocol is not secure against some of eve s attacks . in sec .v we show that zlg extension to higher dimensions , suggested by karimipour , bahraminasab and bagherinezhad ( kbb ) is not secure . in sec .vi we present some modified protocols that are not only secure against eve s attacks , but also our protocol has simple qkd steps and any eavesdropping by eve can be detected by alice and bob .hence , we show that zlg protocol can be repaired , if we go to higher dimensional shared epr states . in sec .vii we show that quantum secret sharing based on the reusable ghz state among three parties , suggested by bagherinezhad and karimipour ( bk ) , is not secure and we repair this protocol . finally , in sec .viii we summarize our results .security of quantum cryptosystems is guaranteed by the two fundamental laws of qm , ( i ) no - cloning theorem of non - orthogonal states in quantum mechanics , ( ii ) any measurement on a system can cause a perturbation on it ( wave function reduction ) . on the other hand , in addition to the above basic principle , in a realistic qkd protocol , we mention some conditions that should be satisfied by any qkd protocol .as suggested by m. nielsen , to get the best possible cryptosystems , we need to have the following properties : \i ) the theoretical basis must be entirely public ; \ii ) actual implementation should be widely and publicly available ; \iii ) there should be extended periods of testing real implementations .in addition to the above conditions , any qkd protocol must have the following property , \iv ) if alice and bob detect the presence of eve on the distributed keys , they should discard those keys and must apply their protocol to other states , and in this new use of the protocol they must detect the existence of eve again . although , systems obeying these properties do nt necessarily have security , they have preconditions for being secure .a simple scheme was proposed by bennett , which uses only two nonorthogonal states . in this method ,nonorthogonal encoding of the bit of information inevitably leads to the waste of a portion of photons .in contrast to these schemes that use nonorthogonal states , some deterministic quantum cryptography schemes , based on orthogonal states , have been proposed .goldenberg and vaidman ( gv ) and consequently , koashi and imoto ( ki ) have presented a quantum cryptography based on orthogonal states .the basic technique is to split the transfer of one bit of information into two steps , ensuring that only a fraction of the bit of information is transmitted at a time .then the no - cloning theorem of orthogonal states guarantee its security .cabello presented a protocol based on gv and ki protocols and on the idea of using a larger alphabet that saturates the capacity of the quantum channel .he has introduced a key distribution protocol which works in the holevo limit .after this paper , some people attempted to get holevo limit by using another approach .recently , some people have used non - local variables , firstly introduced by bennett _ , to propose protocols for qkd .these state , which are the product of two states can not be distinguished through the use of local operations and ( unlimited ) classical two - way communications ( locc ) .hence , zhang _ et al . 
_ presented another qkd scheme that uses a quantum key to encode and decode the classical information , and the previously shared reusable epr state acts as the quantum key . they called it channel encrypting qkd . this protocol can be used for quantum secure direct communication , and was proposed and pursued by some groups . one of the best properties of qkd with orthogonal states is that we do not use any classical communication ( although earlier versions did not have this property ) . because of the lack of classical communication , eve has minimal information about the secret keys that are shared by alice and bob . hence , in this protocol we make use of the total dimension of the hilbert space . thus , the security of the protocol can be enhanced . in what follows , we restrict ourselves to protocols having no decoherence effects , in which all of the parties' and eavesdropping operations are exact and the environment does not interact with our system . in the zhang _ et al . _ protocol , alice and bob have previously shared some quantity of the bell pairs , serving as the quantum key , respectively . the rotation can be described by the state does not change under bilateral operation of . the purpose of this operation is to prevent the other parties from eavesdropping . then alice selects a value of a bit ( or ) and prepares a carrier particle in the corresponding state ( or ) , randomly . the classical bit and the state are only known by alice herself . alice uses the particle of the entangled pairs and in state to do a controlled - not ( cnot ) operation ( is the controller and is the target ) , and the three particles will be in a ghz state or . then , she sends to bob . bob uses his corresponding particle to do a cnot operation on again ( is the controller and is the target ) . now the key particles and and the carrier particle are in the same state as the initial state .
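as an illustration of the encode / decode mechanism just described , the toy state - vector sketch below represents the three qubits ( alice's key qubit a , bob's key qubit b , and the carrier c ) as an 8 - component vector and checks that bob's cnot returns the carrier to alice's bit value while leaving the key pair in the original bell state . this is only a numerical toy , not the authors' implementation .

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0, P1 = np.diag([1, 0]), np.diag([0, 1])        # |0><0| and |1><1|

def cnot(control, target, n=3):
    """CNOT on an n-qubit register (qubit 0 is the leftmost tensor factor)."""
    ops0 = [P0 if q == control else I2 for q in range(n)]
    ops1 = [P1 if q == control else (X if q == target else I2) for q in range(n)]
    t0, t1 = ops0[0], ops1[0]
    for q in range(1, n):
        t0, t1 = np.kron(t0, ops0[q]), np.kron(t1, ops1[q])
    return t0 + t1

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2) on (a, b)
for i in (0, 1):
    carrier = np.eye(2)[i]                        # carrier qubit prepared in |i>
    psi = np.kron(bell, carrier)                  # register order: a, b, c
    psi = cnot(0, 2) @ psi                        # Alice: a controls c (GHZ-like state)
    psi = cnot(1, 2) @ psi                        # Bob:   b controls c
    # carrier is disentangled and back in |i>; the key pair is again the Bell state
    assert np.allclose(psi, np.kron(bell, carrier))
print("carrier recovered for both bit values")
```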
at the beginning ,alice and bob share a generalized bell state : , which is encoded as a basis state .they defined controlled - right and controlled - left shift gates that play the role of controlled-`not ` gate , and act as follows : alice performs a controlled- right shift on and thus entangles this qudit to the previously shared bell state , producing the state : with certainty . to protect this protocol against a specific kind of attack ,alice and bob proceed as follows . before sending each of the qudits , alice and bob act on their shared bell state by the hadamard gates and , which are defined as , respectively .the key point is that a bell state disentangled from the outside world is unchanged under this operation , while a state entangled with outside is not : eve s strategy is similar to that in the previous section and can be described as follows : \1 ) alice uses the particle of the entangled pairs and in state to do a controlled- right shift on it to produce the state ( eq . [ 10 ] ) .then , she sends to bob .\2 ) eve intercepts particle and does a measurement in the basis on it .eve knows that alice , bob and particle states reduce to where .\3 ) eve sends particle to bob .bob uses his corresponding particle to do a controlled - left shift on the . bob measures and gets the classical dit corresponding to state the . at this stage ,alice and bob can not detect eve s operation .\4 ) alice and bob apply the operations and on the their particle states . \5 ) alice chooses another particle and performs a controlled- right shift on and sends to bob .\6 ) eve intercepts and keeps particle to herself , thus , eve has a share on the bell state between herself and alice .eve prepares a maximally entangled state and sends particle to bob .\7 ) bob takes particle and uses particles and to do a controlled - left shift on it bob does a measurement on the particle particle in the basis . with the probability of , we have .thus , at this stage , eve shares two similar bell states between herself on the one hand and alice and bob on the other hand . \8 ) for the next rounds of kbb protocol , in the case of the above state , eve gets the results ( ) for odd ( even ) sequences of alice and bob results for fix amount of and .however , when alice and bob compare a subsequence of the data dits publicly to detect eavesdropping , eve leaks useful information for correcting her results .more specifically , for any odd ( even ) numbers of data dit that is announced by alice and bob , eve can determine which of the possible states is common among them . by thismeans , eve can obtain all of numbered data dits completely without being detected by alice and bob .\9 ) thus , with the probability , eve can get favorable results .in other cases ( with the probability ) , alice and bob can detect eve s attack .in the preceding section , we showed that zlg protocol for epr pairs acting as the quantum key to encode and decode the classical cryptography is not secure against a specific eavesdropping attack , the f - type attack . in this sectionwe suggest three modified protocols which are secure against f - type attack .those protocols have some basic properties , a ) the previously shared bell state is the main element of protocol , b ) alice and bob use all hilbert space dimension communicating particle ( ) and all rounds of the protocol .c ) parties do nt have classical communication . 
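Before describing these modifications, the weakness just exploited is easy to verify numerically. The sketch below implements the d-level encode-decode cycle reviewed at the beginning of this section, assuming the controlled-right shift acts as |k>|m> -> |k>|m+k mod d> and the controlled-left shift as its inverse (the explicit gate definitions are not reproduced above, so this reading is an assumption; for d = 2 the same cycle reduces to the CNOT rounds of the original ZLG protocol). It checks that an undisturbed round returns the carrier and restores the key, and that Eve's computational-basis measurement of the in-flight carrier leaves Alice and Bob in a product state, which is exactly the opening that the F-type attack uses.

```python
import numpy as np

d = 3  # qudit dimension; d = 2 reproduces the CNOT-based ZLG scheme

def controlled_shift(control, target, sign):
    """|k>_c |m>_t -> |k>_c |m + sign*k mod d>_t on a 3-qudit register (A, B, carrier)."""
    dim = d ** 3
    U = np.zeros((dim, dim))
    for i in range(dim):
        a, rest = divmod(i, d * d)
        b, c = divmod(rest, d)
        digits = [a, b, c]
        digits[target] = (digits[target] + sign * digits[control]) % d
        j = (digits[0] * d + digits[1]) * d + digits[2]
        U[j, i] = 1.0
    return U

bell = np.zeros(d * d)                       # shared key (1/sqrt(d)) sum_k |k>_A |k>_B
bell[[k * d + k for k in range(d)]] = 1.0 / np.sqrt(d)

q = 1                                        # data dit Alice encodes (illustrative)
carrier = np.eye(d)[q]
state = controlled_shift(0, 2, +1) @ np.kron(bell, carrier)   # Alice's encoding

# Undisturbed run: Bob's controlled-left shift returns the carrier to |q>
assert np.allclose(controlled_shift(1, 2, -1) @ state, np.kron(bell, carrier))
print("undisturbed round: carrier returns to |q>, Bell key restored")

# F-type attack: Eve measures the in-flight carrier in the computational basis
for m in range(d):
    amp = state.reshape(d, d, d)[:, :, m]    # Alice-Bob amplitudes given outcome m
    p = np.sum(np.abs(amp) ** 2)
    if p > 1e-12:
        rank = np.linalg.matrix_rank(amp / np.sqrt(p), tol=1e-10)
        print(f"Eve's outcome {m}: p = {p:.3f}, Alice-Bob Schmidt rank = {rank}")  # rank 1
```

The printed Schmidt rank of 1 shows that a single carrier measurement already collapses the shared key to a product state; the modified protocols below are built so that this collapse cannot happen (see the companion sketch given after the third modified protocol).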
inwhat follows , we first suggest two qkd protocol basis on the reusable bell state , but parties use classical communication , in the tertiary protocol , we derive a efficient and secure qkd without classical communication . in this protocol , we let alice and bob to have classical communications .the basic idea of our modification consists of the following steps : \1 ) alice and bob want to share secure bits between themselves .they have previously shared some quantity of the bell pairs .when the process begins , the two particles rotate their state by the angle , respectively .\2 ) alice randomly selects the value of a bit ( or ) ( totaly bits , for message - coding and for checking sequences ) and prepares a carrier particle in the corresponding states ( or ) . the classical bit and the state only known by alice herself .\3 ) alice randomly selects one of the following manners to send the qubit , \i ) she uses the particle of the entangled pairs and in state to do a cnot operation ( is the controller and is the target ) and calls qbits of it message - coding and qbits , checking .\ii ) she operates with a on the particle of the entangled pairs and takes the particle in state to do a cnot operation ( is the controller and is the target ) and calls it the checking ( qbit ) sequence .then , she sends to bob .alice keeps the arrangement of her choices to herself .\4 ) after bob receives the particle in the case of ( checking sequence ) alice informs bob about her operation and bob operates on the his qubit .then , he uses his corresponding particle to do a cnot operation ( is the controller and is the target ) .bob measures and gets the classical bit corresponding to the state .they keep measurement results on to themselves .\5 ) in the cases ( ) alice does nt send any classical information to bob , and bob uses his corresponding particle to do a cnot operation ( is the controller and is the target ) . then bob measures and gets the classical bit corresponding to the state .\6 ) after the transmission of bit sequence , alice informs bob about her choices sequence for checking qubits ( qubits ) .they compare their results to check eavesdropping .if the error rate in this checking is below a certain threshold , then , they can take message - coding qubits ( qubits ) as raw keys . at this stage , we represent another protocol with the same result as that of the previous approach .the first and second steps of this protocol are similar to the previous one and we replace the third and fourth steps as follows : ) alice randomly selects one of the following manners , \i ) she uses the particle of the entangled pairs and in the state as does a cnot operation ( is the controller and is the target ) and calls it message - coding ( qbits ) and checking ( qbits ) .\ii ) she does nt do any operation on the and calls it the checking ( qbits ) sequence . then, she sends to bob .alice keeps the arrangement of her choices to herself . ) after bob receives the particle in the case of ( checking sequence ) , alice informs bob not to do any operation on .thus , bob measure and gets the classical bit corresponding to the state .they keep the measurement results on to themselves .the fifth and sixth steps are the same as in the previous protocol . 
in the two previous qkd protocols , alice and bob restricted themselves to have two different types of states for security and some classical communication for some rounds of the protocol .at this subsection , we suggest the new protocol that does nt use classical communication ( as the original zlg protocol ) .every repairing of zlg protocol must have three basic properties , \i ) parties do nt have classical communication , \ii ) uses previously shared reusable bell state ,\iii ) uses orthogonal state as carrier . with the analysis of our eavesdropping attack ,the main defect of zlg protocol is the reduction of bell pairs to product state by eve s measurement on the carrier particle .thus , we choose some entangled state which does nt have this property . for example , we consider higher dimensional systems for sharing bell states that do nt reduce to a product state under eve s measurement . inwhat follows , we need to define the controlled gate between one qudit ( as controller ) and one qubit ( as target ) .similar to kbb protocol , we consider the controlled that acts on the second qubit , conditioned on the first qudit , and defined as : in the above relation subscripts of the state vectors represent the dimension of hilbert space corresponding to each particle , and for the spin half target particles .thus , for even ( odd ) target state changes to ( ) . in our protocol , alice and bob have previously shared some quantity of the bell pairs , serving as the quantum key latexmath:[\[\begin{aligned } when the process begins , the two parties operate hadamard gate on the their particle s state which were defined in the sec .the state does not change under a bilateral operation .then alice selects a value of a bit ( or ) and prepares a carrier particle in the corresponding state ( or ) , randomly .the classical bit and the state are only known to alice herself .alice uses the particle of the entangled pairs and in state to do a controlled gate operation and the three particles will be in state then she sends to bob .bob uses his corresponding particle to do a controlled gate operation on again .now the key particles and and the carrier particle are in the same state as the initial state : latexmath:[\[\begin{aligned } bob measures and gets the classical bit corresponding to the state . at the intermediate stage of eq.([14 ] ) , if eve intercepts particle and does a measurement in the basis on it , reduces to or .thus , eve ca nt disentangle alice and bob states , if alice and bob apply hadanmard transformation on the their states ( for example we consider ) , we have : in the above equation and we restrict ourselves to ( even dimensional hilbert space ) .thus , the above state reduces to = \sum_{l}|l\rangle_{a}[|l\rangle_{b}+2|l+d\rangle_{b}]$ ] . by noting the above state, eve can do any operation on the carrier state , for every cases .alice and bob have a chance to detect eve s operation in the next rounds of protocol . 
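This behaviour can be checked numerically along the same lines as the earlier sketch. Assuming the controlled gate flips the carrier qubit exactly when the controlling qudit holds an odd value (the precise parity convention is not spelled out above, and the Hadamard steps are omitted here for brevity), the snippet below verifies that an undisturbed round returns the carrier to |b> and restores the shared d-dimensional Bell pair, while Eve's computational-basis measurement of the in-flight carrier now leaves Alice and Bob with a state of Schmidt rank d/2 > 1, i.e. still entangled, in contrast with the original scheme.

```python
import numpy as np

d = 4                                        # even qudit dimension of the shared key

def parity_cnot(control):
    """Flip the carrier qubit when the controlling qudit (0 = Alice's, 1 = Bob's)
    holds an odd value.  The parity convention is an assumption; the round trip
    is insensitive to it because Alice's and Bob's qudit values coincide."""
    dim = d * d * 2
    U = np.zeros((dim, dim))
    for a in range(d):
        for b in range(d):
            for c in range(2):
                l = a if control == 0 else b
                i = (a * d + b) * 2 + c
                j = (a * d + b) * 2 + (c ^ (l % 2))
                U[j, i] = 1.0
    return U

bell = np.zeros(d * d)                       # (1/sqrt(d)) sum_l |l>_A |l>_B
bell[[l * d + l for l in range(d)]] = 1.0 / np.sqrt(d)

for bit in (0, 1):                           # Alice's classical bit on the carrier
    carrier = np.eye(2)[bit]
    state = parity_cnot(0) @ np.kron(bell, carrier)      # Alice encodes

    # Eve's computational-basis measurement of the in-flight carrier
    for outcome in (0, 1):
        amp = state.reshape(d * d, 2)[:, outcome]        # Alice-Bob amplitudes
        p = np.sum(np.abs(amp) ** 2)
        rank = np.linalg.matrix_rank(amp.reshape(d, d), tol=1e-10)
        print(f"bit {bit}, Eve's outcome {outcome}: p = {p:.2f}, Schmidt rank = {rank}")

    # Undisturbed run: Bob decodes and the carrier returns to |bit>
    assert np.allclose(parity_cnot(1) @ state, np.kron(bell, carrier))
```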
from the aforementioned reasons, we can deduce that the f - attack does not apply to this approach , eve can only entangle her state with alice and bob states and she wishes that after her operation and parties operation , alice and bob can not detect her effects .this strategy is the same as ggwz attack which we have shown to be removable easily .generalizing this approach to the kbb protocol is simple , we consider carrier states in the -dimensional hilbert space and shared bell state in the -dimensional hilbert space .we divide the -dimensional hilbert space to sections , where any section has elements ( ) , similar to kbb protocol .alice performs a controlled - right shift on and thus entangles this qudit to the previously shared bell state , producing the state : in the above relation , calculated at and summation on the represent one set of s which run as ( ) .alice then sends the qudit to bob . at the destination , bob performs a controlled - left shift on the qudit and disentangles it from the bell state , hence revealing the value of with certainty .similar to the usual manner , alice and bob act on their shared bell state by the hadamard gates and . at the intermediate stage of eq.([1] ) , if eve intercepts particle in the intermediate cases and does a measurement in the base on it , then , reduce to .thus , eve ca nt disentangle alice and bob states and f - attack does nt apply to this approach again . although , alice and bob can not detect eve s operation , but , after hadamard transformation by parties , the shared state is transform to : it can be shown that for every round of the protocol , alice and bob have a chance to detect eve s operation .although , ggwz attack still exists for this generalization , based on what we did in sec .iii , we can use other states as carriers , to get secured against ggwz attacks .in a recent paper , bagherinzhad and karimipour ( bk ) proposed a quantum secret sharing protocol based on reusable ghz states .the security against both intercept - resend strategy and entangle - ancilla strategy was proved .however , here we show that with the help of our intercept - resend strategy , eve can obtain all of the data bits with probability without being detected by the communication parties . for convenience, we use the same notations as in ref. .the eavesdropping strategy is illustrated in fig.([ghz ] ) . according to fig.([ghz ] ) , if eve can transform the previously shared entangled state among alice , bob ad charlie to , then , she can take all of keys that alice sends to bob and charlie , where are bell states .now let us give an explicit description of eve s strategy .\1 ) in the beginning , alice , bob and charlie have a perviously shared ghz state which we denote by .\2 ) alice prepares the state ( corresponding to classical bit ) and entangles this state to the already present ghz state by performing cnot gates ( as defined at the kb protocol ) on latexmath:[\[\begin{aligned } to produce the state latexmath:[\[\begin{aligned } and she sends the coded state to bob and charlie .\3 ) eve intercepts first to send qubits and performs a measurement on them in the bases .thus , the carrier state reduces to and and extract the state . at this moment, parties can not detect eve s operation . in what follows, we only consider the state ( a similar approach can be consider for the state ) . 
among themselves .( b ) eve can transform previously shared entangled state to three bell states of , among herself and parties , then , she can take all of keys that alice sends to bob and charlie.,width=302,height=436 ] \4 ) alice , bob and charlie apply hadamard operation on the their particle states . then , for the even bits , which are encoded as states , i.e. , and , alice entangles this state to the shared state by performing only one single cnot gate on the to produce the state : alice sends the coded state to bob and charlie .\5 ) eve intercepts and keeps sending qubits to herself .thus , she has shared the state between herself and alice .\6 ) eve prepares two maximally entangled state and sends qubits 4 and 6 to bob and charlie respectively . at this stage ,the common state between them is : \7 ) bob and charlie take qubits 4 and 6 and act on the these states by operations and .the common state can be writhen as : \8 ) bob and charlie do a measurement on the qubits 4 and 6 in the basis , and identify these types of keys together . with the probability of one - half, the above state reduces to : for for thus , eve separately shares her qubit in a maximally entangled states with alice s , bob s and charlie s . before going to the next round of qkd, we would like attention following relation for the above states for , we have : and thus , for odd sequence bits eve gets the same bits that alice sent ( for each cases ) . andfor even sequence bits eve gets the bits ( ) for . on the other hand , bob and charlieget precisely the same sequence of bits that alice sent for all rounds of the qkd .although , after parties communication in the general channel for test of security , eve can recognize appropriate keys .this approach has an interesting property that eve can get all of the secret keys with probability one half ( the same as in the case of zlg protocol ) . in other words , the probability does nt increase with going from two to three parties . in what follows , we would like to repair the bk protocol by using higher dimensional ghz states . the new protocol that we suggest is similar to our previous approach to the zlg protocol .we consider a generalization of the ghz state to a dimensional system which was previously shared among the parties .this is the state define as : to bob and charlie .she prepares the state corresponding to and operates with controlled gates and ( previously defined at eq .( [ cnot ] ) ) on them , to the get the state : and she sends a coded state to bob and charlie . at the destination , bob and charlie act on this state by the controlled gates and and extract the state ( for further convenience , we restrict our selves to particles with even dimension ) . hence ,for the even bits which are encoded as state ( similar to bk protocol ) , parties apply hadamard transformation to -dimensional ghz state which we denote by in the above equation .thus , alice entangles the state to the shared state by performing only one single controlled gate to produce : latexmath:[\[\begin{aligned } & & \frac{1}{d } \left\ { \sum_{l^{even}_{1},l_{2},l_{3}=0}^{d-1 } \delta(l_{i},d)|l_{1},l_{2},l_{3}\rangle_{abc}|\overline{q}\rangle\right .\\\nonumber & & \left.+\sum_{l^{odd}_{1},l_{2},l_{3}=0}^{d-1}\delta(l_{i},d)|l_{1},l_{2},l_{3}\rangle_{abc } at the destination , bob and charlie act on this state by the operators and and extract .noting that for , alice and bob must operate in the next round of the protocol . 
if in the intermediate step of eq .( [ 25 ] ) , eve intercepts the first sending qubits and performs a measurement on them , then , the ghz state reduces to : , we have : in the above equation .thus , eve ca nt disentangled alice , bob and charlie states and we deduce that f - attack does nt apply to this approach it can be shown that for every round of the protocol , parties have a chance to detect eve s operation .eve can only entangle her state with alice and bob states and she wishes that after her operation , the parties , alice , bob and charlie , can not detect her effects .this strategy is the same as the ggwz attack , we showed to be removable easily .one of interesting results of our extension is that our protocol can simply be extended to higher dimension qkd between alice , bob and charlie , by using quantum key encoding and decoding classical information .in this paper , we review zlg , kbb and bk protocols for qkd via quantum encryption and show that these protocols are not secure .thus , eve has always a chance to be informed of her parties keys without being detected by them .that is , we have shown that the security conditions are a necessary condition but not a sufficient one , specially in the qkd with orthogonal states . for the sake of completeness we briefly described the behavior of the presented schemes on efficiency of the transmission .we consider the definition give by cabello .the efficiency of a qkd protocol , , is defined as : where is the expected number of secret bits received by bob , is the number of qubits used in the quantum channel , and is the number of bits used in the classical channel between alice and bob . since in our schemeno classical information is needed in the message - coding , we have .this together with and leads to .however , the practical efficiency takes into account channel s transmittance as well . in our protocol( zlg protocol with higher dimensional bell states ) a qubit travels for a distance , being the separation between alice and bob . if is the transmittance of the qubit over a distance , after traveling the distance and the operation of bob , he takes measurement on the carrier particle .thus , the practical efficiency can be evaluated as . by comparison , in bb84it is , because , and , and a qubit travels a distance .this implies that the presented scheme is more efficient than bb84 , provided the transmittance of the channel is .it is also possible to see that ours practical efficiency is more efficient than m. lucamarini and s. mancini protocol if and that the entanglement - based scheme contains a factor , making it lower than ours .we would like to thank m. golshani for useful comments and critical reading of the manuscript , author is grateful to v. karimipour for helpful discussions and also a. a. shokri for his helps ( this work supported under project name : gozaar ) . c. h. bennett , and g. brassard , in _ proceedings of the ieee international conference on computer , systems , and signal processing , bangalore , india _ ( ieee , new york , 1984 ) pp .175 - 179 ; c. h. bennett , and s. j. wiesner , * 69 * , 2881 ( 1992 ) ; c. h. bennett , * 68 * , 3121 ( 1992 ) .f. grosshans and n. j. cerf , * 92 * , 47905 ( 2004 ) ; f. grosshans and p. grangier , * 88 * , 57902 ( 2002 ) ; r. namiki and t. hirano , * 92 * , 117901 ( 2004 ) ; t. tyc , * 36 * , 7625 ( 2003 ) ; m. d. reid , * 62 * , 62308 ( 2000 ) . a. muller , ( london ) * 378 * , 449 ( 1995 ) ; f. grosshans , ( london ) * 421 * , 238 ( 2003 ) ; p. 
townsend , ( london ) * 385 * , 47 - 49 ( 1997 ) ; t. jennewein , * 84 * , 4729 ( 2000 ) ; d. s. naik , * 84 * , 4733 ( 2000 ) ; w. tittel , * 84 * , 4737 ( 2000 ) ; i. p. degiovanni , * 69 * , 32310 ( 2004 ) ; f. a. bovino , * 68 * , 34309 ( 2003 ) . g. brassard , * 85 * , 1330 ( 2000 ) ; m. koashi , and j. preskill , * 90 * , 57902 ( 2003 ) ; j. h. shapiro , * 67 * , 22309 ( 2003 ) ; n. gisin , and s. wolf , * 83 * , 4201 ( 1999 ) ; k. tamaki and n. lutkenhaus , * 69 * , 32316 ( 2004 ) ; j. c. boileau , * 92 * , 17901 ( 2004 ) ; z. d. walton , * 91 * , 87901 ( 2003 ) ; w. hwang , * 91 * , 57901 ( 2003 ) ; d. deutsch , * 77 * , 2818 ( 1996 ) .y. zhang , c. li and g. guo , * 64 * , 24302 ( 2001 ) .v. karimipour , a. bahraminasab and s. bagherinezhad , * 65 * , 52331 ( 2002 ) .s. bagherinezhad and v. karimipour * 67 * , 44302 ( 2003 ) .m. nielsen webloge http://www.qinfo.org/people/nielsen/blog/ + ?l. goldenberg and l. vaidman * 75 * , 1239 ( 1995 ) .m. koashi , and n. imoto , * 79 * , 2383 ( 1997 ) .a. cabello , * 85 * , 5635 ( 2000 ) .t. mor , * 80 * , 3137 ( 1998 ) .i. p. degiovanni , * 69 * , 32310 ( 2004 ) .m. lucamarini and s. mancini , * 94 * , 140501 ( 2005 ) .g. l. long and x. s. liu , * 65 * , 32302 ( 2002 ) ; deng , g. l. long and x. s. liu , * 68 * , 42317 ( 2003 ) .g. guo , * 64 * , 042301 ( 2001 ) . l. hsu and c. li , * 71 * , 022321 ( 2005 ) .although , in the authors claimed that their protocols have , but with note to cabello s definition for , it can be shown that respectively for their protocols , here in the practical efficiency is equal to .
|
In this paper, we investigate the security of the Zhang, Li and Guo (ZLG) quantum key distribution via quantum encryption protocol and show that it is not secure against some of Eve's attacks: with probability one half she obtains all of the keys without being detected by the two parties. The main defect of this protocol is that there is an attack strategy by which Eve can change the previously shared Bell state between Alice and Bob into two Bell states shared among herself and Alice and Bob. We further show that its generalization to higher-dimensional systems is not secure, and that its extension to more partners, based on reusable GHZ states, is not secure either: again, with probability one half, Eve obtains all of the keys without being detected. Finally, we show how these protocols can be repaired by going to higher dimensions.
|
The authors are thankful to Alan E. Robinson for useful discussions of their work and for providing the MATLAB code of the FLODOG model. The authors also thank Mr. Shaibal Saha for discussions of the reverse-correlation experiment.
|
based upon physiological observation on time dependent orientation selectivity in the cells of macaque s primary visual cortex together with the psychophysical studies on the tuning of orientation detectors in human vision we suggest that time dependence in brightness perception can be accommodated through the time evolution of cortical contribution to the orientation tuning of the odog filter responses . a set of difference of gaussians functions has been used to mimic the time dependence of orientation tuning . the tuning of orientation preference and its inversion at a later time have been considered in explaining qualitatively the temporal dynamics of brightness perception observed in `` brief presentations reveal the temporal dynamics of brightness induction and white s illusion '' for 58 and 82 ms of stimulus exposure . + [ [ section ] ] psychophysical studies on human observers suggest that our visual system perceives the luminance of a target region depending upon the luminance of its surround . in a spatial square grating consisting of alternate black and gray stripes , the gray stripes will be looking brighter than the same gray stripes appear with white bordering stripes . this is an example of brightness induction which produces brightness contrast effect . + + it has been observed by (1986 ) that brightness modulation on a static gray patch due to the luminance modulation of its large surround depends on the temporal frequency of the luminance modulation . at a lower temporal frequency ( below hz ) the brightness modulation is perceived significantly but with the increase in temporal frequency the effect of modulation is completely diminished and the central patch appears to be static gray . based on this finding , rossi and paradiso ( 1996 ) have explored whether the temporal cut - off of brightness modulation depends on the spatial scale of the stimulus . modulating the luminance of every other stripe of a square grating and keeping the intervening stripe with constant static luminance , they have observed that temporal cut - off of perceiving brightness modulation on the static gray stripes decreases with decrease in spatial frequency of the square grating . the authors have concluded that process controlling the brightness change due to induction is relatively slower than the process involves in brightness change from direct luminance modulation . the slower process is supposed to get mediated via filling - in mechanism because the signal of luminance contrast appears at the edges by a faster mechanism travels with a finite speed to influence the brightness of a uniform region neighbouring it . therefore , the wider stripe of the square grating will take longer time to get filled in compared to the thinnest one . the speed of filling - in comes out to be when estimated from the phase measurement on the same experiment . + filling - in can play a crucial role in producing the temporal limit of brightness induction in square wave grating as well as achromatic craik - obrien - cornsweet ( coc ) effect but fails to explain the temporal limit of chromatic version of coc effect . the temporal cut - off frequency for perceiving chromatic coc illusion decreases with increasing spatial frequency . more so , while increasing the spatial frequency above 0.02c / deg , they observed that temporal modulation cut - off for achromatic grating followed the shape of human achromatic contrast sensitivity function ( csf ) which is found to be inconsistent with filling - in theory . 
+ + if brightness induction is supposed to get mediated via filling - in then it will be a slow process . revisiting the idea , (2008 ) have explored whether the strength of brightness induction decreases as the exposure time of the stimulus is made shorter and shorter and at what limit of exposure time the illusion gets disappeared . the limit was expected to be different for different spatial frequencies as it was observed in the earlier experiments . they have replaced the modulating inducing stripes of the square grating with static ones . the whole grating is displayed for a short exposure of time . immediately after it , a noise mask of same horizontal frequency is set on for a comparatively longer time to stop further processing initiated by the previous stimuli . subjects were asked to match the brightness of a particular grating stripe of luminance either of or which is bordered by either of 12 or 102 . + it is observed that human observer can perceive brightness induction ( brightness contrast ) only for a brief presentation ( 58 ms ) of the stimuli irrespective of their spatial frequencies . contrary to their expectation , it can be observed from the results of their experiments 1 and 2 that induced brightness on the target stripe of the square grating is maximum for shortest on time ( 58 ms ) and with a prolonged exposure ( 82 ms and more ) its strength gets reduced . the difference in matching luminance of the same target stripe appearing with different bordered stripes ( 12 and 102 ) also decreases with exposure time resulting in a decrease in illusion strength . even , white s illusion is also perceived for a short exposure ( 82 ms ) and in contrast to brightness induction , illusion strength increases if more exposure time is allowed . + though debated by many researchers including themselves on the speed of filling in , the widest grating ( ) in their experiment are supposed to get filled in within 29 - 37 ms with a speed of estimated from the work of (1996 ) . the estimated filling - in time does not include any other temporal delay . faster filling - in may be consistent with the initial brightness perception for shortest exposure length ( 58 ms ) if the signal delay from retinal ganglion cell to v1 is considered . but authors have also argued that the observed temporal dynamics of brightness induction can not be explained in the light of filling - in because the speed of filling - in could be too fast to limit the speed of brightness perception . + + however , the temporal frequency cut - off obtained for different spatial frequencies while observing brightness modulation in achromatic coc effect , agrees well with the shape of human achromatic contrast sensitivity function ( csf ) . therefore , the authors have suggested that temporal dynamics of coc illusion may arise due to spatio - temporal filtering of the stimulus by human luminance systems , instead of mediated via filling -in . on the other hand , in a steady visual condition psychophysical measurement on brightness contrast and assimilation can be modelled by spatial filtering followed by rms response normalization ( odog / flodog ) . according to robinson et al . ( 2008 ) , these multi - scale models do not include temporal dependence on spatial scale so they are compatible with the fast brightness perception for the stimuli of any spatial scale . 
but , there is no explicit time dependence in these models , therefore , they can not be used to predict the time course of brightness illusion as it is observed in their experiment . one possible way to incorporate time aspect in the odog / flodog models is to consider that spatial filtering and response normalization are completed at different time instances . the onset of noise mask after a short presentation of the stimuli is thus supposed to interfere with the ongoing response normalization process if it is not completed . therefore , the incompleteness of the processing is probably getting reflected in the induced brightness perceived for different length of exposure . + + the models like flodog and odog exhibit linearity while computing a weighted sum of the intensity distribution through spatial filtering but appear nonlinear in performing response normalization . this property is very often observed in the response of simple cells in the primary visual cortex of macaque visual information falls on their receptive field . nonlinearity in cell s response can be accounted for if shunting or divisive inhibition among a large number of cortical cells is being considered . thus , in visual network , intracoritcal feedback which possibly provides shunting inhibition , results into response normalization in their model . + _ ( i)intracoritcal feedback and orientation tuning dynamics in v1 _ + orientation tuning is an emergent property of the cells in primary visual cortex . this orientation selectivity of the cortical simple cells in primary visual cortex may arise due to geometrical alignment of the lgn receptive fields . sharpness of the orientation tuning will depend upon the aspect ratio of the convergent feedforward structure . however , weakly converging thalamocortical input either or along with the cortical inhibition can not explain the contrast invariant orientation tuning the cells in v1 . + in addition , experimental studies by (1997 ) have demonstrated that orientation tuning in the v1 of macaque evolves with time . the broadly tuned neurons in the layer and which receive direct input from lgn do not change their orientation preference in course of time though overall response is reduced . on the other hand , neurons in the output layer of 4b , 2 , 3 , 5 or 6 changes their preferred orientations with time . for example , orientation distribution of a typical neuron in shows a narrow peak around its preferred orientation at 53 ms from the onset of that particular orientation and produces a mexican hat distribution around 59 ms . finally at 71 ms , it exhibits broader tuning around an orientation orthogonal with respect to that of the earlier one . + though there exist a long debate and several modifications to the feedforward model of orientation tuning , the temporal dynamics of orientation tuning observed in cells of v1 may be accounted for by recurrent cortical excitation or inhibition . even recurrent network models are considered to be consistent with the observations on orientation plasticity in cortical cells . + this recurrent model considers intracortical feedback crucial for sharpening the orientation selectivity of cortical cell that receives weak feedforward orientation bias from converging lgn input . 
according to (1995 ) , in orientation domain , a balance between the narrowly tuned intracortical excitation and broadly tuned intracortical inhibition can produce contrast invariant cortical orientation tuning from the weekly tuned thalamocortical excitation in cat s v1 . + _ ( ii)relationship between orientation dynamics and psychophysical observations _ + similar to their earlier study on orientation tuning dynamics in v1 of macaque , ringach ( 1998 ) has conducted a psychophysical measurement on human observer with a sequence of flashed sinusoidal gratings of random orientations and spatial phases . it has been observed that orientation detector in human visual system exhibits a distribution of ` mexican hat ' shape which resembles the orientation distribution of some single neurons in the layers 4b , 2 + 3 and 5 of macaque s primary visual cortex . the author has inferred from their findings that lateral inhibition in the orientation domain which is thought to be responsible for tuning dynamics in v1 of cat and monkey , is probably present in the human visual cortex . even the orientation inversion in the probability distribution observed for some of the cells ( fig . 2 ,1997 ) is also been supported by the psychophysical study of cross orientation interaction in human vision by (2008 ) . following the similar methodology of (1998 ) , they have found that when the inter stimulus interval is 100 ms , the aligned gratings result in suppression but the misaligned gratings favour facilitation . + _ ( iv ) timecourse of brightness coding in v1 _ + more so , with the study of c1 component of visual event related potential ( erp ) in human observer perceiving white s illusion , (2004 ) have reported that the perceived brightness difference in white s effect is reflected in the early phase ( 50 - 80 ms after the onset of the stimulus ) of c1 . this early phase represents the initial activation of area like v1 in the striate cortex . + + in the following sections , we have investigated with the stimuli used by ( 2008 ) to check whether the time dependent intracortical feedback which generates the dynamics of orientation tuning in v1 can be used along with static odog filters to predict qualitatively the nature of brightness perception over time . + [ [ section-1 ] ] psychophysical observation together with physiological measurement on orientation selectivity suggest that time evolution of orientation distribution of cells in v1 might have an effect on the time course of brightness perception . the dynamics of orientation tuning thus indicates that computational model of v1 should not only comprise of spatial filtering by bank of static oriented filters but also include the contribution for dynamical response facilitation or suppression . + the multi - scale orientation filtering ( odog / flodog ) which has been used for successful brightness prediction is supposed to mimic the visual processing of area like v1 . so , the same spatial filters can be thought of using in the prediction of brightness perception over time if intracortical feedback is associated with them . + it can be assumed that observed probability distribution of orientation selectivity of the cells in v1 the orientation impulse response at a particular instant . 
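Since the proposal reuses the static ODOG spatial filters, a concrete picture of one such filter may help before the time-dependent part is set up. The sketch below builds a single ODOG-style oriented difference-of-Gaussians kernel, with an isotropic centre and a broader surround elongated along the filter axis; the sizes, the 2:1 elongation and the unit-volume normalisation are illustrative assumptions rather than the exact parameters of the published model or of the MATLAB code used later for the simulations.

```python
import numpy as np

def odog_kernel(size=64, sigma_c=4.0, theta_deg=0.0, elongation=2.0, ratio=2.0):
    """One oriented difference-of-Gaussians kernel (illustrative parameters):
    an isotropic centre Gaussian minus a surround that is `ratio` times wider
    and additionally elongated by `elongation` along the orientation axis."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    th = np.deg2rad(theta_deg)
    u = x * np.cos(th) + y * np.sin(th)      # coordinate along the orientation axis
    v = -x * np.sin(th) + y * np.cos(th)     # coordinate across it
    def gauss(su, sv):
        g = np.exp(-0.5 * ((u / su) ** 2 + (v / sv) ** 2))
        return g / g.sum()                   # unit volume, so the difference is balanced
    return gauss(sigma_c, sigma_c) - gauss(ratio * sigma_c * elongation, ratio * sigma_c)

bank = [odog_kernel(theta_deg=t) for t in range(0, 180, 30)]   # six orientations
print(len(bank), "kernels; each sums to ~0:", round(float(bank[0].sum()), 8))
```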
if the orientation detectors in our visual system are identical and connected with each other in a ring fashion then response of the orientation detectors at that particular instant can be computed by a circular convolution of the orientation impulse response and the input orientation distribution at that moment . + if we consider the initial time delay for the visual signal from retina to reach the cells in v1 is about 20 ms , then the signal arising due to onset of noise mask will stop additional processing of the stimulus signal after 78 ms from the onset of the stimulus . therefore , it can be expected that initial brightness percept of the stimulus formed at 78 ms is due to the effect of time evolution of orientation detectors for 78 ms . similarly , when the exposure time is increased to 82 ms i.e. , at the time delay of 102 ms from the onset of the stimulus , the observed effect will be changed due to the change in orientation impulse response in course of time . if we look at the time evolution of orientation tuning of some cells in v1 ( fig . 2 ,1997 ) , it is found that around 20 ms from the sharply tuned state , the distribution gets inverted and relatively broader tuning appears around an orientation orthogonal to the most preferred orientation of the earlier distribution . + temporal dynamics of the response distribution of orientation detectors can be implemented in the following way . + _ ( i ) generation of the orientation impulse response _ + following the recurrent model of (1995 ) , in our proposition we have considered a balanced difference of gaussians ( dog ) to construct the mexican hat shape of the orientation impulse response at time t from the onset of the stimulus . where , and are considered to achieve narrow tuning halfwidth for orientaion impulse response . and are the mean position of the gaussians generating the time dependent orientation impulse response and for t ms of exposure they coincide . whereas , for t ms of exposure , the gaussians are centred on two orthogonal orientations with and to achieve slightly broader tuning around orthogonal orientation , in compared to the previous condition . balance condition will keep area under the distribution curve constant before and after tuning . modelling inversion of orientation impulse response with mean shifted dog may not have the physiological equivalence like the model of (1995 ) but can be treated as a computational manipulation to mimic the physiological observation . + _ ( ii ) response of the orientation detectors _ + similar to the local rms computation in flodog model , input orientation distribution from the output of scale odog filter is computed from gaussian weighted mean response of a region of area . where , is the standard deviation of the center gaussian of scale odog filter . it can be anticipated from the observations of ( 1997 ) on dynamics of orientation selectivity that both the coefficients and in our expression can be varied with time . therefore the model s prediction could be different for several combinations of and . however , in our study , we do not vary either with time or scale but choose different for two different instants . + here , we have considered the same frequency power law as it has been used in odog model by (1999 ) in combining information of a single orientation over multiple scales . the exponent of the power function was selected as 0.01 to approximate supra - threshold contrast sensitivity of human visual system . + are the spatial frequency weight factors . 
s are the spectral mode of the multi - scale odog . + _ ( iii)prediction of brightness from the orientation distribution _ + if a simple cell in v1 is excited by a moving grating for a short time of exposure , the post stimulus time histogram ( psth ) obtained from the single cell recording shows that firing rate increases to a maximum around 50 ms from the onset of the stimulus then decays towards a sustained level . looking at this finding , (2008 ) suggested that fastest brightness percept might arise from the prediction of elevated firing rate by our visual system . since , our visual system treats the ` white ' and ` black ' with equal status , our proposed model decides the brightness of an induced stripe of the square grating depending on the maximum contrast response obtained from equation ( 4 ) for a very short presentation of time e.g. 58 ms and 82 ms from the onset of the stimuli . [ [ section-2 ] ] stimuli used in our computer simulation are the same as those used in the experiment of (2008 ) except the matching gray patch and its surround . the upper half of which is black and lower half is filled with the illusory stimuli under study . we have considered the stimulus size of . stripe width of the thinner grating is taken as 31px and that of wider grating is 340px . for generating white s stimulus , height and width of the test patches are considered as 62px and 31px . the oriented spatial filter outputs are generated by the matlab code of (2007 ) . orientation filters are generated for in the step of . mean response of visual information over all scales is evaluated at the middle of the illusory stimuli . though it has been mentioned in the earlier section that window size is proportional to the scale of the gaussian , we have fixed the size of the gaussian window by for estimating mean response . all and are considered to be 1 during this study . [ [ section-3 ] ] with the propositions made in the earlier section , our observations from the computer simulation can be treated with the following categories and which could be related to the experiments of (2008 ) . because there appears variability in the subjects prediction while judging the brightness for same type of stimuli . + ( i ) visual information over all spatial scales is supposed to be present at both the instants and prediction is made based on the mean response computed by all the scales of gaussian window . + figure 2(a ) shows the predicted response at two different exposure lengths ( say 58 ms and 82 ms ) for the thin square grating with induced stripe of equivalent luminance bordered by inducing stripe of luminance ( dotted line with square ) and ( dotted line with diamond ) respectively . similarly solid line with square and diamond symbols in the same figure represents the predicted response for induced stripe of equivalent luminance . + at both the instants , target stripe appears brighter when it is induced by the stripe of lower luminance ( ) than of higher luminance ( ) . vertical distance between the dotted lines or the solid lines is reduced at the later instant i.e. , with the increasing exposure length the difference between their predicted responses ( illusion strength ) gets reduced . this is also the observation of ( 2008 ) . + predicted response for the wider grating also ( fig . 2b ) follows the similar behaviour as that with a thinner grating . 
+ * * predicted response for a gray test region placed on a white stripe ( dotted line ) and the same on black stripe ( solid line ) of square grating is depicted in figure 2(c ) for two different ( 58ms and 82ms)length of stimulus exposure . if response determines the perceived brightness then the test patch positioned on a white stripe and flanked by black stripes on either side will be judged brighter than the similar one placed on the black stripe and flanked by two white stripes , for the shortest length of exposure ( e.g. 58 ms ) . this is opposite to the white s illusion what human observer perceives . however , observer participated in the experiment of ( 2008 ) , found it difficult to see the test patch in the shortest time interval ( 58 ms ) . on a relatively longer exposure ( 82 ms ) , inverted orientation impulse response produces strong response suppression at the preferred orientation but facilitation at the orthogonal orientation ( fig 5i & j ) relative to it . the predicted response of the test patches indicates that if the visual system follows the same rule as in the earlier instant , observers might not be able to perceive white s illusion . though the difference in predicted response ( fig . 2c)is very small , the gray test patch on the white stripe still appears brighter than the identical one placed on black stripe . + ( ii ) there are seven spatial scales in the odog model . for 82 ms of stimulus exposure , mean response of visual information over relatively higher spatial scale filters(three largest spatial scales of odog ) is computed with a gaussian window of smallest spatial scale among them . prediction for wide and thin grating ( fig . 3a & b)do not alter from what it appears in the previous condition . the illusion strength decreases with the increase in exposure length.on the other hand in white s stimulus , predicted response of the gray test patch placed on the white stripe of the grating is appearing darker ( fig . 3c ) than the same gray test patch positioned on the black stripe at later instant . thus , the use of smaller sale window function in the prediction could be relevant with the observation ( fig 7 ) in the experiment of ( 2008 ) because subjects might be trying to see the test patches clearly to judge the brightness they perceived . + ( iii ) in the above two cases , mean response is evaluated at middle of the illusory stimuli . if the point of observation is shifted towards the interface of black region and the illusory stimulus , the model s prediction differs . this is because of the significant contrast response produced by larger spatial scale filters . slope of the response curve of the target stripe of luminance 31 of square grating does not change ( fig . 4a & b ) from that observed in earlier cases . but the slope of the response curve for the target stripe of higher luminance ( 72 ) is reversed when it is induced by the bordering stripe of luminance 102 . similar observation is also reported by 2 out of 4 observers in the experiment of , ( fig . 4 , 2008 ) . in contrast , the response curves ( fig . 4 ) for wide grating at the shortest exposure of time crosses each other which does not follow the observation in their experiment . + _ balanced dog and other possibility _ + in our proposition , we have considered only balanced dog as the cortical contribution to generate the orientation distribution for different length of exposure . as a result area under the distribution curve before and after tuning is remained constant . 
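A compact numerical sketch of this ingredient may be useful. The snippet below builds a balanced difference-of-Gaussians kernel on the 180-degree orientation ring for the two instants (coincident excitatory and inhibitory lobes early, excitation moved to the orthogonal orientation late), verifies that the kernel sums to zero, and applies the circular ring convolution to a toy input orientation distribution; the widths, the 90-degree shift and the toy input are illustrative assumptions and not the values used in the simulations reported above.

```python
import numpy as np

thetas = np.arange(180.0)                       # orientation axis, 1-degree steps

def ring_gauss(mu, sigma):
    """Gaussian profile on the 180-degree orientation ring (nearest-wrap distance)."""
    dist = (thetas - mu + 90.0) % 180.0 - 90.0
    g = np.exp(-0.5 * (dist / sigma) ** 2)
    return g / g.sum()

def impulse_response(late, sig_e=15.0, sig_i=40.0):
    """Balanced DoG orientation kernel: narrow excitation minus broad inhibition.
    At the late instant the excitatory lobe sits at the orthogonal orientation,
    mimicking the tuning inversion; widths and shift are illustrative."""
    return ring_gauss(90.0 if late else 0.0, sig_e) - ring_gauss(0.0, sig_i)

def ring_response(drive, kernel):
    """Circular convolution of the input orientation distribution with the kernel."""
    return np.real(np.fft.ifft(np.fft.fft(drive) * np.fft.fft(kernel)))

drive = ring_gauss(0.0, 20.0)                   # toy filter-bank drive peaked at 0 deg
for late in (False, True):
    k = impulse_response(late)
    r = ring_response(drive, k)
    print("late " if late else "early",
          "| kernel sum:", round(float(k.sum()), 10),
          "| response peak at", float(thetas[np.argmax(r)]), "deg")
```

The early kernel keeps the response peaked at the driven orientation, while the inverted late kernel moves the peak to the orthogonal orientation, which is the qualitative behaviour the model relies on.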
time evolution may be explored with the use of unbalanced dog ( ) which considers inhibition is stronger than the excitation . another possibility is to use a family of dog to get different orientation distribution at different time . + while predicting the mexican hat shape of orientation selectivity of the cells in macaque v1 , have used the von mises distribution to approximate cortical excitation and inhibition . this distribution function can also be used for predicting orientation impulse response . + _ beyond 82 ms _ + robinson et al . ( 2008 ) have investigated the dynamics of brightness perception for exposure length longer than 82 ms . it is observed from the matched luminance ( fig 3 & 4 , robinson et al . 2008 ) that induction strength gradually decreases with increasing exposure time and the perceived brightness tends towards the actual luminance of the target stripe of the square grating . in contrast , luminance matching in the white s stimulus shows that illusion strength increases with increase in exposure length and the perceived brightness of the gray test patch shifts away from its actual luminance ( fig . 7 , , 2008 ) . though , not exactly similar to their experiment even sluggish , fmri studies on contrast detection task by human observers also indicate that activity in the early visual area like v1 may correspond to two phases of response ; immediate response due to the stimulus and later feedback signal ( after 100 ms ) in generating the subjects visual percepts . therefore , on a longer stimulus exposure , the feedback from hierarchial visual areas can modify the brightness perception of the stimulus . + even , it can be anticipated that brightness matching technique by looking at the target stripe several times during a trial of few seconds , can exhibit the influence of feedback signal responsible for percept on to the instantaneous stimulus response . this could be one possibility of getting minimal difference between the matched brightness of a square grating for 58 ms and 82 ms of exposure in their experiment ( robinson et al . 2008 ) . + [ [ section-4 ] ] we have modelled that time dependence in brightness perception can be accommodated through the time evolution of cortical contribution to the orientation tuning of the oriented difference of gaussians ( odog ) filter responses . orientation tuning has been implemented using a set of difference of gaussians functions . our results can qualitatively explain the temporal dynamics of brightness perception observed by ( 2008 ) for 58 and 82 ms of stimulus exposure . computing mean response for three largest spatial scales of odog with a gaussian window of smallest spatial scale among them , we observe that model s prediction on brightness induction ( for 58 and 82 ms of exposure length ) and white s illusion ( for 82ms of stimulus exposure ) matches with the psychophysical observation . whereas , if mean response is computed by all scales of gaussian window for 58 and 82 ms of stimulus exposure , our model predicts successfully the time evolution of brightness induction in square grating but the prediction of white s illusion is opposite to the observed one . when the point of observation is shifted towards the interface of black region and illusory stimulus , the prediction for the target stripe of higher luminance ( 72 ) of wide grating does not corroborate the psychophysical observation . +
|
josephson junctions ( jjs ) were proposed as threshold detectors of the full counting statistics ( fcs ) by tobiska and nazarov and independently by pekola in 2004 . sincethen there has been continuing effort to implement the proposed schemes experimentally as well as to improve them and better understand their potential theoretically . the original scheme by tobiska and nazarov proposed using overdamped jj as the threshold detector .this appears to be problematic since in the overdamped junction when the effective phase particle overcomes the tilted washboard potential barrier it gets immediately retrapped in the adjacent minimum .this results in the phase diffusion which , however , does not yield enough sensitivity for detecting the whole fcs .this could in principle be overcome by employing a negative - inductance device which apparently hasnt appealed to the experimentalists enough to actually implement it .instead they opted for an obvious alternative to use underdamped junctions where , under suitable conditions , once the particle overcomes the first barrier it keeps on sliding down the potential thus producing finite voltage .thus , the switching of an underdamped junction between the supercurrent ( static phase ) and running ( finite phase velocity , i.e. finite voltage ) state would provide a prime example of a threshold detector .unfortunately , this innocently - looking change in the setup dramatically changes the level of difficulties involved in the theoretical analysis .this paper addresses those difficulties in some detail .the structure of the paper is the following . in the next section [ sec2 ]i report the theoretical concept of calculating the non - equilibrium escape rate due to a non - gaussian noise source whose fcs is to be determined . the general theory based on the wkb - like approximation for the weak noise intensityis further carried out to an analytical result in case of the linear perturbation theory in the third cumulant for very weak junction damping in subsection [ subsec21 ] . in this subsectioni also make comparison with alternative existing theories . in the next subsection [ subsec22 ]the full theory is numerically implemented and the numerical results in an experimentally relevant regime are discussed and further compared with various analytical predictions . in section [ sec3 ]i briefly raise some experimentally relevant questions such as what is the effect on the rate asymmetry of the nominally subleading terms entering the rate and whether one can actually experimentally leave the linear regime and achieve the measurement of the whole fcs of a noise source . in the last section [ concl ]i summarize what has been achieved in this work and review the remaining open problems .the josephson element in an electrical circuit is often modeled as a current biased ( ) resistively ( ) and capacitively ( ) shunted ideal jj with the josephson current - phase relation .the voltage across the junction is determined by the second josephson relation with the time - derivative denoted by the dot . 
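To make the regime distinction invoked above concrete, the sketch below integrates the noise-free phase dynamics in the standard dimensionless reduction phi'' + phi'/Q + sin(phi) = s, with time measured in units of the inverse plasma frequency; this reduction is used here as an assumption, since the explicit stochastic equation of the model is only written below. Starting the effective phase particle just beyond a barrier top of the tilted washboard, a strongly damped junction is immediately retrapped in the next minimum, while an underdamped one stays in the running, finite-voltage state.

```python
import numpy as np

def run_rcsj(Q, s=0.9, dt=1e-3, t_max=200.0):
    """Noise-free dimensionless RCSJ dynamics, phi'' + phi'/Q + sin(phi) = s,
    started just beyond a barrier top of the tilted washboard potential."""
    phi = np.pi - np.arcsin(s) + 1e-3     # barrier top
    v = 1e-3
    for _ in range(int(t_max / dt)):
        v += (s - np.sin(phi) - v / Q) * dt
        phi += v * dt
    return phi

s = 0.9
phi_min = np.arcsin(s)                    # metastable minimum of the washboard
for Q in (0.5, 20.0):
    wells = (run_rcsj(Q, s) - phi_min) / (2.0 * np.pi)
    state = "running (finite voltage)" if wells > 5 else "retrapped (zero-voltage state)"
    print(f"Q = {Q:4.1f}: phase advanced by about {wells:7.1f} wells -> {state}")
```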
moreover , due to the action of ubiquitous thermal ( gaussian ) noise characterized by the temperature and non - equilibrium electronic noise from the measured device whose fcs is to be determined , the jj is subjected to stochastic forces and its dynamics is thus described by the following langevin equation ( rcsj model ) in a realistic experimental situation the current - bias assumption can be inadequate and one may need to generalize the above model .the general consequences of an imperfect current bias are so called environmental or cascade " corrections to the measured cumulants of the source fcs which were studied in previous works .they could be straightforwardly included here in the same spirit as in those works , especially ref . , but since they appear to be of minor importance in the so far reported experiments i will neglect them . in this studyi will consider in detail exclusively the simplest case of the poissonian shot noise corresponding to the measured device being a tunnel junction .in such a case is just a train of -function - like spikes which are separated by an exponentially distributed waiting time with a single parameter ( mean waiting time ) being the mean ( particle ) current flowing through the tunnel junction .this case is also the only one studied experimentally by this type experiments to date . assuming the temporal width of the pulses composing to be very small compared to a characteristic time of the junction dynamics ( which is its plasma frequency )one can obtain a master equation ( analogous to the fokker - planck equation in case of gaussian noise only ) for the probability density in dimensionless units ,\end{aligned}\ ] ] with the rescaled bias current , the quality factor of the ( unbiased ) junction and with the josephson energy of the junction proportional to the critical current .the last term in the equation can be identified as stemming from the cumulant generating function of the poissonian process and the eq .( [ fpeq ] ) holds for the positive polarity of the tunneling current .the opposite polarity would just change the sign in the exponential in eq .( [ fpeq ] ) . ] which suggests how to deal with non - poissonian noise source provided the markovian approximation is made .thus , the substitution \rightarrow f(-\lambda\partial/\partial v) ] for the probability density with being a small parameter related to the noise intensity : .thus , is a dimensionless effective temperature of the junction due to the summed effect of the thermal noise and the gaussian part of the non - equilibrium noise . when this ansatz is put into eq .( [ fpeq ] ) and the lowest order in is only retained ( corresponding to the wkb approximation and justified for small ) we obtain the following hamilton - jacobi ( hj ) equation , i.e. a first order partial differential equation for for a general noise source the last term ( given by the sum ) in the preceding equations would be replaced by the corresponding expression , i.e. by the reduced cumulant generating function with the first two moments ( mean current and the zero - frequency noise ) subtracted ( notice that by definition ) .this hamilton - jacobi equation can be solved via the method of characteristics , i.e. one can recast the equation as a dynamical system in a 4-dimensional phase - space ] and ] with the function introduced in ref . 
reading =\frac{1}{3}\int^{x_{\rmmax}}_{x_{\rm min}}dx(x - x_{\rm min})v'(x)\nonumber\\ & = & \frac{1}{3}\int^{\pi-\arcsin s}_{\arcsin s } dx(x-\arcsin s)(\sin x - s)\nonumber\\ & = & \frac{2}{3}\arccos s\left[\sqrt{1-s^2}-s\arccos s\right]\end{aligned}\ ] ] where the second part applies to the particular case of the tilted washboard potential .the value of at zero is in accordance with ref . while the asymptotics for large bias is with which is exactly equal to the result by ankerhold but it actually differs from sj s numerical finding further supported by an independent study by grabert with .although the difference is not severe being on the order of 10% only and , therefore , practically most likely irrelevant , from the conceptual point of view it matters because all the works claim to calculate the same quantity for the very same model and , thus , the correct result should be unique .it s not simple to follow and reproduce sj s approach but grabert s method is very transparent and i have fully recovered his numerical findings. minor generalization of his approach to the full tilted washboard potential ( grabert uses the usual cubic approximation to the tilted - washboard potential for a large bias ) gives ( within numerical precision ) and for .the discrepancy with the above analytical result is thus not a problem of the numerical precision of works but a conceptual problem .since grabert uses the trajectory approach of eq .( [ eom ] ) i will put off the discussion of his work to the next subsection where the same formalism is also used .ankerhold was looking for the correction in the form and got certain conditions for the arbitrary functions s from the leading order solution of the fokker - planck equation accounting for the noise with the nonzero third cumulant . when his ansatz is plugged into eq .( [ hjlin ] ) one can easily find that the set of equations obtained for s is internally inconsistent ( suggesting that the truncation at the third power in in the ansatz is insufficient ) for a general and potential . ) .] there are several exceptions when the inconsistencies are removed , namely for a strictly harmonic potential ( this potential does nt exhibit a barrier , at least not a smooth one approximating the tilted washboard potential ) and in the limits either or .this suggest that ankerhold s solution could yield correctly the two limiting cases for the considered cubic approximation to the potential . indeed , in the limit his solution is equal to mine for as already mentioned above ..the main difference is that i used the ansatz only in the case where it does solve eq .( [ hjlin ] ) and also the discussion of fixing the freedom in the solution due to the homogeneous part etc .( present for any q ) seems absent in his work . ]the opposite limit is simple since that case is integrable for any strength of the third cumulant and the linear response can be easily calculated analytically yielding = \exp\left[2 d_2(s ) q^2 e_j^2 < \!\!<\!i_m^3\!>\!\!>/c i_0(k_b t_{\rm eff})^3\right] ] for .this is identical to ankerhold s solution in the corresponding limit ( recall the multiplicative correction factor of in the erratum and ankerhold s definition of the bias - dependent quality factor of the jj ) .now , ankerhold s solution can be interpreted as a simple interpolation formula between the two limiting cases reading ] .it turns out that ankerhold s expression ( eq . ( 13 ) of ref . 
) is a neat interpolation scheme between the highly underdamped and overdamped junction limits .it certainly provides a very efficient and quite precise interpolation formula for a finite .its detailed comparison with the numerically exact solution will be shown in the next subsection .now , we turn to the general case of the calculation of the rate asymmetry for an arbitrary intensity of the non - equilibrium noise acting on the junction .this is achieved by the numerical solution of the effective dynamical system equations ( [ eom ] ) . as already mentioned the solutionconsists in finding a trajectory satisfying the equations of motion ( [ eom ] ) and connecting in infinite time ( corresponding to the zero auxiliary energy ) the two fix - points ] being the ( metastable ) minimum of the potential and the top of the barrier , respectively .there always exists a classical , relaxation " solution corresponding to the dissipative but noise - free motion of the effective particle from the barrier top down to the minimum .this solution has and also the associated action is zero .on the other hand we are interested in the other , escape " solution connecting the two potential extrema via trajectory with non - zero conjugated momenta . for equilibrium ,i.e. gaussian , noise the two types of trajectories are connected by ( generalized ) time - reversal which forms the basis of the onsager - machlup theory and was used by grabert for his linear response calculations . for general non - equilibrium noise sources ,however , the two trajectories are not simply related and one has to calculate the escape trajectory directly by solving the full system ( [ eom ] ) .this is exactly done here .the problem is formulated as a boundary value problem ( bvp ) on an infinite time interval reflecting the stationarity condition of the original escape problem .obviously , this makes the bvp rather tricky and one has to be cautious in its solution .once the solution ] is a solution then for an arbitrary shift in time ] which probes the linearity of the calculated action in the third cumulant . if in the linear regime , the two polarities contain opposite contributions from the third cumulant which cancel in the sum and the subtracted action for zero third cumulant should nullify this quantity . in the second fig .[ fig2 ] with this is indeed the case due to more stable numerics but one can see that those two control quantities are not strictly zero for the high - q case .their overlap , however , actually confirms the linear response regime .the deviation from zero of /2-s_{\rm anal}(0)$ ] are solely due to the imprecision of the mean action without any influence of the third cumulant .this is further confirmed by the essentially regular behavior of the asymmetry .moreover , it should be stressed that each point presents an independent calculation .thus , the values of where the control quantities are zero as expected should be trustworthy regardless of the fact that the next value of may be calculated with insufficient precision .moreover , the overall precision even in the case is not catastrophically bad although it does not allow a fully reliable comparison with the concurrent theories . 
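to make the boundary-value formulation above concrete, the following minimal sketch sets up the characteristics of a freidlin-wentzell auxiliary hamiltonian for the dimensionless rcsj phase dynamics with purely gaussian noise and solves the truncated two-point boundary value problem connecting the two fixed points. the hamiltonian, the parameter values ( q , s , the effective temperature ) and the truncation time are illustrative assumptions, not the exact system of eqs. ( [ eom ] ); a non-gaussian source would enter by replacing the quadratic momentum term with the reduced cumulant generating function, as prescribed earlier in the text. for gaussian noise the computed action can be checked against the thermal value ( barrier height over effective temperature ). convergence of the solver is delicate, as noted above, and may require tuning the truncation time and the initial guess.

```python
# illustrative sketch (not the paper's exact system): weak-noise escape path for the
# dimensionless rcsj phase dynamics  x'' + x'/q + sin(x) = s  driven by gaussian noise of
# intensity 2*theta/q, written as the characteristic (hamilton) equations of the assumed
# freidlin-wentzell auxiliary hamiltonian
#   h = p*v + r*(-v/q - sin(x) + s) + (theta/q)*r**2
# and solved as a two-point boundary value problem on a truncated time interval.
# for purely gaussian noise the escape action should approach [v(x_max) - v(x_min)]/theta.
import numpy as np
from scipy.integrate import solve_bvp

q, s, theta = 5.0, 0.6, 0.05                         # assumed quality factor, bias, temperature
x_min, x_max = np.arcsin(s), np.pi - np.arcsin(s)    # metastable minimum and barrier top
v_pot = lambda x: -np.cos(x) - s * x                 # tilted washboard potential

def rhs(t, y):
    x, v, p, r = y                                   # phase, velocity and conjugate momenta
    return np.vstack([v,
                      -v / q - np.sin(x) + s + 2.0 * (theta / q) * r,
                      r * np.cos(x),
                      -p + r / q])

def bc(ya, yb):
    # pin the truncated trajectory to the two fixed points (x_min, 0) and (x_max, 0)
    return np.array([ya[0] - x_min, ya[1], yb[0] - x_max, yb[1]])

# convergence can be delicate; the truncation time and the guess may need tuning
T = 60.0
t = np.linspace(0.0, T, 2000)
x_g = x_min + (x_max - x_min) * 0.5 * (1.0 + np.tanh((t - T / 2) / 5.0))
v_g = np.gradient(x_g, t)
guess = np.vstack([x_g, v_g, (np.sin(x_g) - s) / theta, v_g / theta])
sol = solve_bvp(rhs, bc, t, guess, max_nodes=100000, tol=1e-6)

x, v, p, r = sol.sol(t)
ds = p * np.gradient(x, t) + r * np.gradient(v, t)   # integrand of the action s = int (p dx + r dv)
action = np.sum(0.5 * (ds[1:] + ds[:-1]) * np.diff(t))
print("converged:", sol.status == 0, " action:", action,
      " thermal value:", (v_pot(x_max) - v_pot(x_min)) / theta)
```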
the asymmetry in fig .[ fig1 ] is compared with four different theories grouped into two sets ( within the set they are virtually equal in the limit ) .it is ( generalized numerical ) evaluation la grabert together with the result by sukhorukov and jordan both of which predict basically while the other set is my eq .( [ d1 ] ) and ankerhold s result . while the difference in the predictions is only on the order of 10% and , thus , most likely irrelevant for experiments , it is relevant from a purely conceptual point of view which one is actually correct since it should help with the identification of possible misconceptions hidden in the failed approach(es ) . from the data presented in fig .[ fig1 ] it is clear that the more promising set is the ankerhold - novotn one . despite of the scatter in the data , there are reliable points ( where the control quantities turn into zero ) which are closer to the 8/9-curve .the numerical calculation did nt use any linear perturbation theory or any approximation at all .the data are sheer results of the numerical evaluation of the bvp for general values of the parameters .the discrepancy of the data with the theoretical predictions may be caused by the finite , although rather high , value of .this can account for the difference between the numerics and 8/9-curve , however , it is inconsistent with grabert s theory which predicts monotonic increase of the asymmetry with increasing , see fig . 4 in ref . . moreover , grabert s curve " was calculated for even though it hardly deviates from its limiting counterpart by sj .thus , the only salvation for the two theories could come from the numerics being wrong which is in principle possible but does not seem too plausible at this point .mk and quality factor . for detailed explanation of various quantities and the values of other parameters , which are the same as in fig .[ fig1 ] and correspond to the saclay experiment , see the main text.,scaledwidth=90.0% ] if we now turn to the other fig .[ fig2 ] we see at the first place much better precision of the numerics as revealed by the control quantities being zero .the numerical data are again compared with grabert s and ankerhold s theories which provide alternatives for finite .i also show a curve for to demonstrate significant deviations of the results for still relatively high from the infinite- limit .this should be remembered when interpreting experimental data of , e.g. , the helsinki group with via theories .ankerhold s theory ( eq . ( 13 ) of ref . 
with the 2/3-correction from the erratum ) is off the numerical data as well as grabert s result thus clearly demonstrating only the interpolating status of this theory .the discrepancy is , however , rather small and , therefore , ankerhold s formula seems to be a very cheap and efficient analytical interpolation scheme for an arbitrary .grabert s result on the other hand lies exactly on top of the numerical data in stark contrast to its apparent failure for the high- case in fig .this is somewhat mysterious behavior which certainly deserves better understanding .what could go wrong in grabert s reasoning ?i have no clear answer to that , however , i do have a conjecture where there could be a problem hidden .of course , i am fully aware that the problem could in fact be also in my numerics for although its correspondence with my analytics represented by eq .( [ d1 ] ) is encouraging and not quite typical for bug - plagued numerics .grabert s approach uses a straightforward perturbation theory at the level of trajectories connecting the fix - points .he argues that within the linear response in the third cumulant the equilibrium ( unperturbed ) solution is enough for evaluating the correction to the action . in more detail , provided the auxiliary hamiltonian is split into equilibrium part and non - gaussian perturbation the correction to the action reads ( compare with his eq .( 77 ) in ref .this is analogous to the standard first order perturbation theory in quantum mechanics , the correction to energy is just the mean value of the hamiltonian in the unperturbed state .however , one should recall that this formula is only applicable if the unperturbed state is non - degenerate .it s not obvious what is the analogous condition for classical trajectories , nevertheless , one may expect certain subtleties involved due to several conditions specific for the current problem .first of all , the bvp is formulated on an infinite time interval , there exists the continuous time shift symmetry , and the unstable / stable manifolds around the respective fix - points are two - dimensional ( could this be the degeneracy " ? ) .the above formula for can be easily derived for finite time interval with fixed boundary conditions , however , ca nt the infinite time interval bring about omitted surface terms ?i am quite sure these questions can be successfully handled by dynamical system theory experts .to this date ( end of june 2008 ) there are two publicly available experimental results by the helsinki group and by the saclay group .the helsinki experiment finds the asymmetry curve as a function of the measured current through the tunnel junction which has its shape in qualitative agreement with all previously mentioned theories ( the difference between different theories is undetectable at the level of precision of the experiment ) .however , the quantitative comparison with , e.g. , ankerhold s theory shows discrepancy on the order of ( see the comparison in ref . 
, recall the correction factor 2/3 missing in that reference and further account for finite contributing another factor of 1/2 ) .i havent discussed the theory used by the helsinki group for fitting the experiment since it s conceptually different from all the other discussed theories and i consider it to be semi - phenomenological with the prefactor ( calculated in other theories ) being adjusted to the experimental outcome , thus lacking a real predictive power .the other experiment by the saclay group has been identified as most likely faulty due to a leak in the measurement circuit which prohibited the reliable determination of the bias current .such an effect largely overshadows any asymmetry due to the third cumulant and , thus , no quantitatively reliable data are available from this experiment .regardless of this unsatisfactory status we may consider possible problems which are likely to be encountered , and maybe have already been encountered in the helsinki experiment , when trying to compare the experimental outcome with theoretical predictions .the first issue is the one of the actual relevance of the exponential part of the rate asymmetry. clearly , the experiment measures the rate asymmetry , not a theoretical concept of its exponential part .the rationale behind the dominance of the exponential part of the rate unfortunately does nt necessarily carry over to the _ rate asymmetry _, especially in the _ linear regime_. the standard argument behind the dominance of the exponential part of ( thermal ) rates is that the large dimensionless barrier entering the exponent simply dominates the whole expression ; moreover , the noise intensity ( temperature ) enters only the exponential part via the boltzmann factor while the prefactor ( attempt frequency ) is temperature - independent .now consider a weak noise with the third cumulant nonzero .this weak noise will supposedly weakly modify the rate .this will in general happen both through the exponent and the prefactor . in the linear response regime in the third cumulant the correction in the exponent can be safely expanded and the resulting linear correction will add to the linear correction stemming from the prefactor . at this stage there is no a priori difference between these two contributions .of course , in practice one of them ( presumably the prefactor part ) can still be negligible .what are the prospects for this to happen ?we have seen that in the realistic setup studied in the previous section the asymmetry due to the exponential part of the rate reaches values on the order of at maximum .the expected correction due to the prefactor is of the form .the first factor , dimensionless temperature , is of the order of while the other factor , dimensionless measured current , is of the order of .thus , in total , we have an effect of the order of which can be , depending on the actual numerical prefactor , comparable to the exponential part .this somewhat pessimistic scenario can be further supported both by the discrepancy found in the helsinki experiment as well as by the mismatch between ankerhold s theory and direct stochastic simulations performed in connection with the saclay experiment in ref . 
( see their fig .7a ) where a multiplicative factor of 2 difference was found for the dimensionless barrier height .while this value lies at the border of reliability of the wkb approach and corrections for larger barriers ( especially the experimentally relevant one 10.4 ) may be expected , they are not expected to be of order of 100% .therefore , it seems that the asymmetry stemming from the prefactor may be relevant for experiments .this is a rather bad news for theoreticians since the calculation of the prefactor for non - equilibrium rates is an involved task , see the discussion in ref . and references therein .so we finally come to the question whether one can achieve a nonlinear regime with underdamped jjs .the problem is apparently in the fact that the effective temperature raises with the measured current in such a way that it simply dominates the escape mechanism and corrections due to higher order cumulants are just negligible .this is clearly reflected in the plot in fig .[ fig1 ] where the originally growing ( with ) curve for small eventually bends downwards again for larger .while the first part is governed by the third cumulant growing with , the declining part corresponds to the case when the contribution of to the effective temperature beats the raising third cumulant .the same effect would be seen in fig .[ fig2 ] for larger values of .this behavior could be diminished by weakening the effect of the measured current on the effective temperature , see the expression for the effective temperature .this should be achieved by decreasing the value of the effective shunt resistance .this , in turn , would imply decreasing quality factor which seems experimentally unacceptable beyond the point when the switching ceases to exist and only phase diffusion is present .the quality factor thus should be maintained at a reasonably high value which can be achieved by increasing capacitance .that in turn will decrease the third cumulant contribution via the formula preceding eq .( [ d1 ] ) . at this pointthe problem turns into a bad joke .there may be , however , a parameter window where a subtle compromise can be achieved .this should be seriously considered by carefully examining different parameter dependencies and testing experimentally acceptable numbers .in this work i have reviewed in detail the status of the problem of the measurement of the full counting statistics by the switching dynamics of an underdamped josephson junction .i have presented a general theory for the weak noise based on the wkb - like approximation and calculated the rate asymmetry due to a weak third cumulant analytically in the limit of very high quality factor of the junction .this calculation has been critically compared to other theories and their possible shortcomings have been identified and pointed out .further , i have developed a numerical scheme for solving the boundary value problem determining the exponential part of the non - equilibrium escape rate under general circumstances , i.e. beyond the linear perturbation theory . using this schemei have calculated the exponential part of the rate asymmetry for experimentally relevant set of parameters and compared the findings with various linear theories .again , this helped with the identification of the status of concurrent theories . 
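for reference, the exponential-part rate asymmetry quoted throughout can be formed from the two escape actions as follows; the snippet makes explicit the assumption, questioned in the preceding paragraphs, that the prefactors of the two polarities are identical and cancel. the numerical inputs are hypothetical.

```python
# exponential-part rate asymmetry from the escape actions of the two polarities of the
# measured current; identical prefactors are assumed to cancel, which is precisely the
# assumption questioned in the preceding discussion.
import numpy as np

def rate_asymmetry(s_plus, s_minus):
    # gamma_pm ~ prefactor * exp(-s_pm); equal prefactors give
    # (gamma+ - gamma-)/(gamma+ + gamma-) = tanh((s_minus - s_plus)/2)
    return np.tanh(0.5 * (s_minus - s_plus))

print(rate_asymmetry(9.70, 9.75))    # hypothetical action values, for illustration only
```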
eventually , i have briefly discussed issues related to the interpretation of present and future experiments , in particular the question of the relevance of the rate exponential prefactor for the rate asymmetry and the feasibility of achieving the nonlinear regime .there are plenty of unsolved problems and open issues within this field of research . starting with those more particular and technical, it would be rewarding to fully clarify the status of concurrent theories , in particular that by grabert which performs amazingly well for intermediate range of s while it seems to fail for large values of .although the discrepancy is not too large , grabert s theory is supposed to work in that regime as well and , thus , the discrepancy raises serious questions about its very foundation . on the other hand, it would be very helpful to further develop and fully stabilize ( if possible ) my numerical scheme used for the solution of the bvp .if this attempt were successful the numerical code could be used for interpreting future experiments routinely since it is very fast and efficient as long as it converges which unfortunately occasionally does nt happen . on a more general level, it should be further studied what is the effect of nominally subleading terms in the rate on the rate asymmetry .it appears that the conventional arguments for the dominance of the exponential part of the rate may not be applicable to the rate asymmetry , especially in the linear regime . andlast but not least , a most important question whether one can actually use underdamped josephson junctions for the measurement of the whole fcs and not only the third cumulant in the linear regime is still open and waiting for final answer which , if affirmative , could bring the field of the fcs to new milestones .i am grateful to p. hnggi , i. khovanov , k. neton , and p. talkner for useful and stimulating discussions , continuous encouragement , and for pointing out relevant references .i acknowledge the support by the grant 202/07/j051 of the czech science foundation .this work is also a part of the research plan msm 0021620834 financed by the ministry of education of the czech republic . after the acceptance of this manuscript with minor corrections i became aware of a comment arxiv:0807.2675 by sukhorukov and jordan .although at this point i am unable to decide whether that comment really settles the above mentioned discrepancy between our theories , i recommend the interested reader to check out their paper for an independent point of view on the issue .r. graham .macroscopic potentials , bifurcations and noise in dissipative systems . in frank moss and p. v. e. mcclintock , editors , _ theory of continuous fokker - planck systems _ , volume 1 of _ noise in non - linear dynamical systems _ , chapter 7 , page 225 .cambridge university press , 1989 .
|
i study the dynamics of a josephson junction serving as a threshold detector of fluctuations , subjected to a general non - equilibrium electronic noise source whose characteristics are to be determined by the junction . this experimental setup was proposed several years ago as a prospective scheme for determining the full counting statistics of the electronic noise source . despite intensive theoretical as well as experimental research in this direction , the promise has not been fulfilled yet , and i discuss the issues that remain unsolved . first , i review a general theory for the calculation of the exponential part of the non - equilibrium switching rates of the junction and compare its predictions with previous results found in different limiting cases by several authors . i identify several possible weak points in the previous studies and report a new analytical result for the linear correction to the rate due to the third cumulant of a non - gaussian noise source in the limit of a very weak junction damping . the various analytical predictions are then compared with the results of the developed numerical method . finally , i analyze the status of the experimental data publicly available to date with respect to the theoretical predictions , and briefly discuss the suitability of the present experimental schemes in view of their potential to measure the whole fcs of non - gaussian noise sources , as well as their relation to the available theories .
|
communication protocols based on quantum signals have attracted increasing interest in the recent years , since they offer the possibility of enhancing either the communication capacity or the security by exploiting the very quantum nature of the information carriers .information may be indeed conveyed from a sender to a receiver through quantum channels . in order to achieve this goala transmitter prepares a quantum state drawn from a collection of known states and sends it through a given quantum channel .the receiver retrieves the information by measuring the channel in order to discriminate among the set of possible preparations and , in turn , to determine the transmitted signal .the encoding states are generally not orthogonal and also when orthogonal signals are transmitted , they usually lose orthogonality because of noisy propagation along the communication channel . therefore , in general , no measurement allows the receiver to distinguish perfectly between the signals and the need of optimizing the detection strategy unavoidably arises .a different approach , which will be used in this paper , is to encode information in the degrees of freedom of a correlated state shared by the two parties . in this framework ,the two parties jointly ( and independently ) measure the state and extract the transmitted symbol according to a previously agreed inference rule .this kind of schemes , which may be symmetric or asymmetric depending on the nature of the channels , may serve either to send a message or to share a cryptographic key .in particular , entanglement - based protocols with nonlocal correlations between spatially separated locations have been proved very effective to provide a pair of legitimate users with methods to share a secure cryptographic key via quantum key distribution ( qkd ) . besides , the nonclassicality of entangled states can be used to improve the monitoring of a state against disturbance and/or decoherence , which , in turn , made entanglement useful to detect unwanted measurement attempts , _i.e _ increasing the security of communication . indeed , several quantum - based cryptographic protocols have been suggested and implemented either for qubits or continuous variable ( cv ) systems . communication protocols and qkd schemes have been firstly developed for single qubit or entangled qubit pairs , and practically implemented using faint laser pulses or photon pairs from spontaneous parametric downconversion ( spdc ) in a pumped nonlinear crystal .recently , much attention has been devoted to investigating the use of cv systems in quantum information processing .in fact , continuous - spectrum quantum variables may be easily manipulated in order to perform various quantum information processes .this is the case of multiphoton gaussian state of light , _e.g. _ squeezed - coherent beams and twin - beams , by means of linear optical circuits , photodetection and homodyne detection .in addition , non gaussian cv states of two modes may be generated either by conditional measurements or concurrent nonlinearities . 
in turn , cv multiphoton states , may be used to increase the effectiveness and reliability of quantum communications and qkd .several cv qkd protocols have already been developed on the basis of quadrature modulations coding of single squeezed , coherent and entangled beam pairs .protocols using the sub - shot - noise fluctuations of photon - number difference of two correlated beams , the sub - shot - noise modulations and the sub - shot - noise fluctuations of the photon numbers in each of the correlated modes have been proposed .although for cv protocols unconditional security proofs have not been obtained yet , they are of interest and deserve investigations mostly due to the potential gain in communication effectiveness . in this paperwe address binary and quaternary communication channels based on photon - number continuous - variable multiphoton entangled states ( pnes ) , in particular we consider two - mode coherently - correlated ( tmc ) or twin - beam ( twb ) states .the communication protocol is based on photon number correlations and realized upon choosing a shared ( set of ) threshold(s ) to convert the outcome of a joint photon number measurement into a symbol from a discrete alphabet .notice that , in principle , entanglement itself is not needed to establish this class of communication channels , which are based on photon - number correlations owned also by separable mixed states . on the other hand , purity of the support state is relevant to increase security of the channel .in fact , if the information is encoded in classically correlated ( mixed ) states an intruder can easily measure the number of photons in either of the modes and then recreate the mixed state mode by activating the corresponding number of single - photon or photon - pair sources ( the latter case with the degenerate spdc process is already quite realistic ) . thus the information encoded in the photon number of a mixed state mode can be effectively intercepted . on the other hand ,this attack is not effective in case of pnes - based channels , since the destruction of the mutual second order coherence of a pnes state can be revealed by a joint measurement on the two modes , which can be accomplished if the receiver , instead or besides extracting the bit value , randomly sends his mode or part of it back to the sender to let her check the presence of an eavesdropper , analogously to `` two - way '' quantum cryptography based on individually coherent entangled beams .for the pnes - based protocols although no strict proofs of security can be offered , tmc - based protocols may be proved secure against realistic intercept - resend eavesdropping .the security mostly relies on the fact that the generation of traveling fock states of radiation , despite several theoretical proposals based on tailored nonlinear interactions , conditional measurements , or state engineering , is still extremely challenging from the experimental point of view .overall , it is the joint requirement of correlation and purity that leads to individuate pnes as a suitable choice for building effective and , to some extent , secure communication channels. the main goal of the paper is twofold .on the one hand we consider communication channels based on realistic class of pnes and analyze the effects of losses on the performances of the protocol . 
on the other hand ,we optimize the performances of our protocol and compare the results of pnes - based schemes to that obtained using a realistic kind of classically correlated ( mixed , separable ) states as a support .the evolution of tmc and twb in lossy channels , as well as that of classically correlated states , is analyzed to calculate the joint photon number distribution and evaluate the survival of correlations . using this results we determine the optimized bit discrimination thresholds and the corresponding channel capacity ( maximized mutual information ) for binary and quaternary alphabets .the effects of losses on security of the protocols against intercept - resend attacks are briefly discussed .the paper is structured as follows : in section [ s : pnes ] we describe the communication protocol and introduce the correlated states , either pnes or classically correlated , that will be considered as a support . in section [s : loss ] we analyze the propagation of the above states in a lossy channel , and evaluate the joint photon number distribution and the correlations . in section [s : im ] we optimize the bit discrimination threshold for binary and quaternary alphabets and evaluate the corresponding channel capacity .finally , in section [ s : outro ] we briefly discuss security and close the paper with some concluding remarks .quantum optical communication channels can be established using multiphoton entangled states of two field - mode , which provide the necessary correlations between the two parties . in this workwe investigate the information capacity of quantum channels built using as a support a specific class of bipartite entangled states , which we refer to as photon - number entangled states ( pnes ) . in the fock number basis , pnes may be written as where and . as we will see an effective channelmay be established exploiting the strong correlation between the photon number distributions of the two modes .indeed , pnes show perfect ( total ) correlations in the photon number , _ i.e. _ the correlation index with , and , is equal to one for any pnes . on the other hand ,the degree of entanglement strongly depends on the profile .pnes may be generated by means of parametric optical oscillator ( opo ) exploiting seeded pdc process in a nonlinear crystal placed in and optical cavity .several implementations have already been reported , with the generation of pnes with photon number statistics varying from super - poissonian to sub - poissonian after post - selection . meanwhile ,several quantum communication schemes and qkd protocols were proposed using pnes , with information encoded in the beam intensity or intensity difference .the bits coding / decoding for a pnes - based communication protocol is rather natural : in the binary case each of the legitimate users measure the incoming photon number in a predetermined time slot and compare the obtained value to a given bit threshold .if the detected value is above the threshold the corresponding bit value is assigned to one , zero otherwise the scheme may be also extended to a -letter protocol by introducing different thresholds the effectiveness of these generalizations depends of course on the beam intensity and on the resolution thresholds of the detectors .pnes - based protocols of this kind have been suggested and analyzed in the ideal case using two - mode coherently correlated ( tmc ) states using , in the the binary case , a threshold value equal to the integer part of the average photon number . 
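as an illustration of the coding / decoding just described, the following sketch converts a joint photon-number distribution into binary symbols via a shared threshold and evaluates the mutual information between the two parties, scanning the threshold to maximize it. the joint distribution used here is a placeholder: a diagonal, thermally weighted distribution, which matches the ideal ( lossless ) twb photon-number statistics recalled below; the lossy tmc / twb distributions discussed in the paper would be plugged in instead.

```python
# sketch of the threshold coding/decoding: each party compares its photon-number outcome
# with a shared threshold and reads out a bit; the mutual information between the two
# resulting bits is computed from the joint photon-number distribution p(n, m).
import numpy as np

def binary_mutual_information(p_joint, threshold):
    # collapse the joint photon-number distribution onto a 2x2 joint bit distribution
    above = np.arange(p_joint.shape[0]) > threshold
    p_bits = np.zeros((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            p_bits[a, b] = p_joint[np.ix_(above == bool(a), above == bool(b))].sum()
    pa, pb = p_bits.sum(axis=1), p_bits.sum(axis=0)
    mask = p_bits > 0
    return float(np.sum(p_bits[mask] * np.log2(p_bits[mask] / np.outer(pa, pb)[mask])))

# placeholder joint distribution: both modes always carry the same photon number,
# with a thermal weight of mean photon number nbar (ideal lossless twb statistics)
nbar, nmax = 3.0, 60
n = np.arange(nmax)
pn = nbar ** n / (1.0 + nbar) ** (n + 1)
p_joint = np.diag(pn / pn.sum())

thresholds = np.arange(0, 15)
info = [binary_mutual_information(p_joint, t) for t in thresholds]
print("optimal threshold:", thresholds[int(np.argmax(info))],
      " mutual information (bits): %.3f" % max(info))
```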
in the fock basis tmc states are written as follows where and denotes a modified bessel function of the first kind . without loss of generalitywe will consider as real throughout the paper .the average photon number of the state is given by tmc are eigenstates of the product of the annihilation operators of the two radiation modes = \rho_2 \equiv \hbox{tr}_1 [ |\lambda\rangle\rangle\langle\langle \lambda | ] \nonumber \\ & = \frac{1}{i_0 ( 2 \lambda ) } \sum\limits_n \: \frac{\lambda ^{2n}}{n!^2}\ : sub - poissonian photon statistics .in fact , the mandel parameter is given by and it is negative for any value of .a communication channel based on tmc relies on the strong photon number correlations , which allow to decode a random bit sequence by carrying out independent and simultaneous intensity measurements at two remote locations . on the other hand ,the security of the scheme is based on checking the beam statistics coming from the measurement results against the ( sub - poissonian ) expected one .it was shown that any realistic eavesdropping attempts introduce perturbations that are significant enough to be detected , thus making eavesdropping ineffective .in addition , the extension to -letter alphabets was shown to be effective , _i.e _ increase the information capacity to bits per measurements , also making the protocol more secure against intercept - resend attacks .another relevant class of pnes is given by the so - called twin - beam state ( twb ) where and , which are entangled two - mode gaussian states of the field and represents the crucial ingredient for cv teleportation and dense coding .without loss of generality we assume as real , the average photon number of twb is thus given by the two partial traces of are equal to thermal states ^{a^\dag a} ] of the partial traces , _i.e. _ = s[\varrho_2]$ ] , as a function of the average photon number .notice that twbs show larger entanglement ; indeed they are maximally entangled states for a cv two - mode system at fixed energy . [ cols= " < , < " , ]a question arises on whether security may be proved for pnes based communication channels . in the ideal case of no loss ,the intercept - resend strategy has been considered assuming that eve is able to produce strongly correlated beams source ( optimally the tmc - source ) and it has been shown that the state - cloning attempts can be revealed by checking the beam statistics , which is modified from sub - poissonian to super - poissonian by any eavesdropping attempt . since , as we have proved in this paper , the statistical properties are not changed by the propagation , tmc - based protocols are secure also in the presence of loss . as concern twb , security , remarkably security against intercept - resend attacks , can not be guaranteed through a check of the beam statistics .the twb - based protocols require the use of additional degrees of freedom , as for example binary randomization of polarization to guarantee security and to reveal eavesdropping actions .this may also be useful for tmc - based protocols , in order to achieve unconditional security .overall , there is a trade - off between the quantity of information one is able to transmit at fixed energy and the security of this transmission , with tmc offering more security at the price of decreasing the channel capacity . 
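the single-mode statistics quoted above can be checked numerically. the sketch below builds the reduced photon-number distributions of tmc and twb, applies a lossy channel modeled as independent binomial photon deletion with transmission eta ( the standard beam-splitter loss model, used here as an assumption consistent with the analysis of the following sections ), and evaluates the mandel parameter; its sign ( negative for tmc , positive for twb ) is preserved for any eta , in line with the statement that losses do not alter the character of the beam statistics. parameter values are illustrative.

```python
# reduced photon-number distributions of tmc and twb, binomial loss with transmission eta,
# and the mandel parameter q = var(n)/<n> - 1 (sign preserved under loss).
import numpy as np
from math import comb, factorial
from scipy.special import iv              # modified bessel function of the first kind

nmax = 80
n = np.arange(nmax)

def tmc_pn(lam):
    # reduced distribution of a tmc state: p_n = lam^(2n) / (n!)^2 / i0(2 lam)
    w = np.array([lam ** (2 * k) / factorial(k) ** 2 for k in n], dtype=float)
    return w / iv(0, 2 * lam)

def twb_pn(nbar):
    # reduced (thermal) distribution of a twb state with mean photon number nbar
    return nbar ** n / (1.0 + nbar) ** (n + 1)

def apply_loss(p, eta):
    # binomial photon deletion: p'_m = sum_k p_k C(k, m) eta^m (1 - eta)^(k - m)
    q = np.zeros_like(p)
    for k in n:
        for m in range(k + 1):
            q[m] += p[k] * comb(k, m) * eta ** m * (1.0 - eta) ** (k - m)
    return q

def mandel_q(p):
    mean = np.sum(n * p)
    var = np.sum(n ** 2 * p) - mean ** 2
    return var / mean - 1.0

for eta in (1.0, 0.7, 0.3):
    print("eta = %.1f   q_tmc = %+.3f   q_twb = %+.3f"
          % (eta, mandel_q(apply_loss(tmc_pn(2.0), eta)),
             mandel_q(apply_loss(twb_pn(3.0), eta))))
```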
in conclusion, we have analyzed lossy communication channels based on photon number entangled states and realized upon choosing a shared set of thresholds to convert the outcome of a joint photon number measurement into a symbol from a binary or a quaternary alphabet .we have focused on channels build using two - mode coherently - correlated or twin - beam states a support .the explicit optimization of the bit discrimination thresholds have been performed and the corresponding channel capacities have been compared to that of channels built using classically correlated ( separable ) states .we found that pnes are useful to improve capacity in the presence of noise , and that twb - based channels may transmit a larger amount of information than tmc - based ones at fixed energy and overall loss .the evolution of the entangled support , either tmc or twb , in lossy channels have been analyzed in details , showing that the beam statistics , either sub - poissonian for tmc or super - poissonian for twb , is not altered during propagation .the preservation of sub - poissonian statistics indicates that tmc - based protocols are secure against intercept - resend eavesdropping attacks , whereas twb - based protocols require the use of additional degrees of freedom , as for example binary randomization of polarization .we have analyzed the effects of asymmetric losses on the two beams , showing that i ) asymmetry of the channel does not dramatically affect the performances and ii ) placing the source of entanglement closer to one of the two parties results in a slight increase of the capacity for tmc - based protocols and a slight decrease for twb - based ones .we conclude that photon - number entangled states , either gaussian or non gaussian ones , are useful resources to implement effective quantum - enhanced communication channels in the presence of loss .this work has been supported by miur through the project prin-2005024254 - 002 .the work of vu has been supported by the landau network through the cariplo foundation fellowship program and by nato through grant cpb.nukr.ev 982379 .99 c. w. helstrom , j. w. s. liu and j. p. gordon , proceedings of the ieee * 58 * , 1578 ( 1970 ) . c. w. helstrom , _ quantum detection and estimation theory _ , academic press , new york , ( 1976 ) .n. gisin , g. ribordy , w. tittel , h. zbinden , rev .mod . phys . * 74 * , 145 ( 2002 ) . c. h. bennett and g. brassard , ieee proc . ,bangalore , india , pp .175 - 179 ( 1984 ) .a. k. ekert , phys rev .* 67 * , 661 ( 1991 ) .l. mandel l and e. wolf , _ optical coherence and quantum optics _ , ( cambridge university press , 1995 ) pp .816 - 827 .t. opatrn , g. kurizki , and d .-welsch , phys .a * 61 * , 032302 ( 2000 ) ; m. dakna , t. anhut , t. opatrny , l. knoll , and d. g. welsch , phys . rev .a * 55 * , 3184 ( 1997 ) ; p. t. cochrane , t. c. ralph , and g. j. milburn , phys . rev .a * 65 * , 062306 ( 2002 ) ; s. olivares , m. g. a. paris , and r. bonifacio , phys .a * 67 * , 032314 ( 2003 ) .j. wenger , r. tualle - brouri , and p. grangier , phys .rev . lett . * 92 * , 153601 ( 2004 ) ; m. s. kim , e. park , p. l. knight , and h. jeong , phys .a * 71 * , 043805 ( 2005 ) ; s. olivares , and m. g. a. paris , j. opt .b : quantum and semiclass .opt . 7 , s616 ( 2005 ) ; s. olivares , and m. g. a. paris , phys .a * 70 * , 032112 ( 2004 ) ; s. olivares , and m. g. a. paris , j. opt .b : quantum and semiclass . opt .* 7 * , s392 ( 2005 ) ; h. nha , and h. j. carmichael , phys .. lett . * 93 * , 020401 ( 2004 ) ; r. 
garca - patrn et al .* 93 * , 130409 ( 2004 ) ; r. garca - patrn , j. fiur ' aek , and n. j cerf , phys .a * 71 * , 022105 ( 2005 ) ; s. daffer , and p. l. knight phys .a * 72 * , 034101 ( 2005 ) ; c. invernizzi , s. olivares , m. g. a. paris , and k. banaszek , phys . rev .a * 72 * 042105 ( 2005 ) .s. l. braunstein and p. van loock , rev .77 * , 513 ( 2004 ) .m. hillery , phys .a. * 61 * , 022309 ( 1999 ) .f. grosshans , and p.grangier , phys .* 88 * , 057902 ( 2002 ) .s. f. pereira , z. y. ou and h. j. kimble , phys . rev .a 62 , 042311 ( 2000 ) .t. c. ralph , phys .a * 61 * , 010303(r ) ( 2000 ) .m. d. reid , phys .a * 62 * , 062308 ( 2000 ) . c. silberhorn , n. korolkova , and g. leuchs , phys .. lett . * 88 * , 167902 ( 2002 ) .a. c. funk and m. g. raymer , phys .a 65 , 042307 ( 2002 ) .a. porzio , v. dauria , p. aniello , m. g. a. paris and s. solimeno , opt .eng . , to appear v. c. usenko and c. v. usenko , am .inst . of phys .cp * 734 * , 319 ( 2004 ) .a proof may be obtained under additional security amplification assumptions , see d. gottesman , j. preskill , phys .63 * , 022309 ( 2001 ) .s. pirandola , s. mancini , s. lloyd , s. l. braunstein , arxiv.org , quant - ph/0611167 ( 2006 ) .s. y. kilin and d. b. horoshko , phys .lett . * 74 * , 5206 ( 1995 ) ; w. leonski , s. dyrting , and r. tanas , j. mod .44 * , 2105 ( 1997 ) ; a. vidiella - barranco , and j. a. roversi , phys .a * 58 * , 3349 ( 1998 ) ; w. leonski , phys .a * 54 * , 3369 ( 1999 ) .m. g. a. paris , int .b * 11 * , 1913 ( 1997 ) ; m. dakna , t. anhut , t. opatrny , l. knoll , and d. g. welsch , phys . rev .a * 55 * , 3184 ( 1997 ) .k. vogel , v. m. akulin , and w. p. schleich , phys .* 71 * , 1816 ( 1993 ) .g. m. dariano , l. maccone , m. g. a. paris , m. f. sacchi , phys .a * 61 * 053817 ( 2000 ) .d. f. walls and g. j. milburn , _ quantum optics _( springer , heidelberg , 1995 ) pp .146 - 157 .j. laurat , t. coudreau , n. treps , a. maitre and c. fabre , phys .lett . * 91 * , 213601 ( 2003 ) o. haderka , j. perina jr ., m. hamar , j. perina , phys . rev .a * 71 * , 033815 ( 2005 ) .k. hayasaka , y. zhang , k. kasai , opt . lett . * 29 * , 1665 ( 2004 ) .v. c. usenko and b. i. lev , phys .a. * 348 * , 17 ( 2005 ) .y. zhang , k. kasai and k. hayasaka , opt . expr . * 11 * , 3592 ( 2003 ) . g. s. agarwal , phys . rev . lett . * 57 * , 827 ( 1986 ) .g. s. agarwal and a. biswas , j. opt .b * 7 * , 350 ( 2005 ) .m. g. a. paris , phys .a * 59 * , 1615 ( 1999 ) .a. gatti et al , j. mod. opt . * 53 * , 739 ( 2006 ) .m. bache et al , phys .a * 73 * , 053802 ( 2006 ) .a. agliati , m. bondani , a. andreoni , g. de cillis and m. g. a. paris , j. opt .b , * 7 * , 652 ( 2005 ) .s. olivares , m. g. a. paris , j. opt .b * 6 * , 69 ( 2004 ) .
|
we address binary and quaternary communication channels based on correlated multiphoton two - mode states of radiation in the presence of losses . the protocol are based on photon number correlations and realized upon choosing a shared set of thresholds to convert the outcome of a joint photon number measurement into a symbol from a discrete alphabet . in particular , we focus on channels build using feasible photon - number entangled states ( pnes ) as two - mode coherently - correlated ( tmc ) or twin - beam ( twb ) states and compare their performances with that of channels built using feasible classically correlated ( separable ) states . we found that pnes provide larger channel capacity in the presence of loss , and that twb - based channels may transmit a larger amount of information than tmc - based ones at fixed energy and overall loss . optimized bit discrimination thresholds , as well as the corresponding maximized mutual information , are explicitly evaluated as a function of the beam intensity and the loss parameter . the propagation of tmc and twb in lossy channels is analyzed and the joint photon number distribution is evaluated , showing that the beam statistics , either sub - poissonian for tmc or super - poissonian for twb , is not altered by losses . although entanglement is not strictly needed to establish the channels , which are based on photon - number correlations owned also by separable mixed states , purity of the support state is relevant to increase security . the joint requirement of correlation and purity individuates pnes as a suitable choice to build effective channels . the effects of losses on channel security are briefly discussed .
|
the inverse source problem arises from many scientific and industrial areas such as antenna synthesis , biomedical imaging , and photo - acoustic tomography . as specific examples in medical imaging such as magnetoencephalography , electroencephalography , electroneurography , these imaging modalities are non - invasive neurophysiological techniques that measure the electric or magnetic fields generated by neuronal activity of the brain .the spatial distributions of the measured fields are analyzed to localize the sources of the activity within the brain to provide information about both the structure and function of the brain .in addition , the inverse source problem has been considered as a basic mathematical tool for the solution of reflection tomography , diffusion - based optical tomography , lidar imaging for chemical and biological threat detection , and fluorescence microscopy . motivated by these significant applications , the inverse source problem , as an important research subject in inverse scattering theory , has been extensively studied by many researchers .a lot of information is available concerning its solution .although the inverse source problem is a linear problem , there are many issues .it is known that there is no uniqueness for the inverse source problem at a fixed frequency due to the existence of non - radiating sources .therefore , addition information are needed for the source in order to obtain a unique solution to the problem , such as to seek the minimum energy solution . from the computational aspect ,a more serious issue is the lack of stability .a small variation of the data may lead to a huge error in the reconstruction .recently , it has been realized that the use of multi - frequency data is an effective approach to overcome the difficulties of non - uniqueness and instability which are presented at a single frequency .we refer to for the study of the inverse source problem by using multiple frequency information .a topic review can be found in on the inverse source problem and many other inverse scattering problems by using multiple frequencies to enhance stability . in ,the authors initialized the mathematical study on the stability of the inverse source problem for the helmholtz equation by using multi - frequency data .the increasing stability was studied in for the inverse source problem of the three - dimensional helmholtz equation . based on the huygens principle, the method assumes a special form of the source function , and requires both the dirichlet and neumann boundary data .a different approach was developed in to obtain the same increasing stability result for both the two- and three - dimensional helmholtz equation .the method removes the assumption on the source function and requires the dirichlet data only .an attempt was made in to extend the stability result to the inverse random source of the one - dimensional stochastic helmholtz equation .we refer to on the increasing stability of determining potentials for the schdinger equation .related results can be found in on the increasing stability in the solution of the cauchy problem for the acoustic and electromagnetic wave equations .although many work have been done on the inverse source problem for acoustic waves , little is known on the inverse source problem for elastic and electromagnetic waves , especially their stability . in , the authors discussed the uniqueness and non - uniqueness of maxwell s equations , which is the only available mathematical result so far . 
in this paper , we develop a unified theory to address the stability in the inverse source problem for time - harmonic elastic and electromagnetic scattering , where the wave propagation is governed by the navier equation and maxwell s equations , respectively .the elastic and electromagnetic wave scattering problems have received ever - increasing attention from both the engineering and mathematical communities , due to their significant applications in diverse scientific areas such as geophysics , seismology , elastography , and optics . for elastic waves ,the inverse source problem is to determine the external force that produces the measured displacement .we show that the increasing instability can be achieved by using the dirichlet boundary data only at multiple frequencies . for electromagnetic waves, the inverse source problem is to reconstruct the electric current density from the tangential trace of the electric field .first we discuss the uniqueness of the problem and distinguish the detectable radiating sources from non - radiating sources .then we prove that the increasing stability can be obtained to reconstruct the radiating electric current densities from the boundary measurement at multiple frequencies .for each wave , we give the stability estimates for both the continuous frequency data and the discrete frequency data .the estimates consist of two parts : the first part is the lipschitz type of data discrepancy and the second part is the high frequency tail of the source function .the former is analyzed via the green tensor .the latter is estimated by the analytical continuation , and it decreases as the frequency of the data increases .the results explain that the inverse source problem becomes more stable as higher frequency is used for the data . in our analysis ,the main ingredients are to use the transparent boundary conditions and green s tensors for the wave equations .the transparent boundary condition establishes the relationship between the dirichlet data and the neumann data .the neumann data can not only be represented in terms of the dirichlet data , but also be computed once the dirichlet data is available in practice .this work initializes the mathematical study and provides the first stability results on the inverse source problem for elastic and electromagnetic wave .it significantly extends the approach developed in to handle the more complicated navier and maxwell equations .apparently , more careful study are needed for sophisticated green s tensors of these two wave equations .the results shed light on the stability analysis of the more challenging nonlinear inverse medium scattering problem .in addition , they motivate us to study the time - domain inverse problem where all frequencies are available in order to have better stability . throughout , we assume that the source of either the external force or the electric current density has a compact support , or .let be a sufficiently large constant such that .let be a constant such that . denote by the boundary of where the measurement of the wave field is taken . let be a rectangular box in .clearly we have .the problem geometry is shown in figure [ pg ] .the paper is organized as follows . in section 2 , we show the increasing stability in the inverse source problem for elastic waves . 
section 3 is devoted to the inverse source problem for electromagnetic waves .the uniqueness is discussed and the increasing stability is obtained .in both sections , the analysis is carried for continuous frequency data and then followed by the discussion for the discrete frequency data .the paper is concluded with some general remarks and future work in section 4 . to make the paper easily accessible, we introduce in the appendices some necessary notation and useful results on the differential operators , helmholtz decomposition , and soblev spaces .this section addresses the inverse source problem for elastic waves .the increasing stability is established to reconstruct the external force from the boundary measurement of the displacement at multiple frequencies .we consider the time - harmonic navier equation in a homogeneous medium : where is the angular frequency , and are the lam constants satisfying and , is the displacement field , and accounts for the external force which is assumed to have a compact support . an appropriate radiation condition is needed to complete the definition of the scattering problem since it is imposed in the open domain .as is discussed in appendix [ hd ] , the displacement can be decomposed into the compressional part and the shear part : the kupradze sommerfeld radiation condition requires that and satisfy the sommerfeld radiation condition : where are the compressional and shear wavenumbers , given by where note that are independent of and .given , it is known that the scattering problem has a unique solution ( cf . ) : where is green s tensor for the navier equation and the dot is the matrix - vector multiplication .explicitly , we have where is the identity matrix , are the fundamental solutions for the two- and three - dimensional helmholtz equation , respectively , and is the hankel function of the first kind with order zero .define a boundary operator where is the unit normal vector on .it is shown in that there exists a dirichlet - to - neumann ( dtn ) operator such that which is the transparent boundary condition for the scattering problem of the navier equation .this section discusses the stability from the data with frequency ranging over a finite interval .given the dirichlet data on , can be viewed as the neumann data .it follows from that the neumann data can be computed via the dtn operator once the dirichlet data is available on .hence we may just define a boundary measurement in terms of the dirichlet data only : let the external force be a complex function with the compact support .the inverse source problem is to determine from the displacement , where is a constant .the dirichlet data does not have to be given exactly on the sphere .it can be measured on any lipschitz continuous boundary which encloses the support , e.g. , take .when is available on , we may consider the following boundary value problem : it can be shown that the problem has a unique solution in .therefore , the dirichlet data is immediately available on once the problem is solved , and then the neumann data can be computed on . 
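for concreteness, the green tensor introduced in this section can be evaluated numerically in two dimensions. the sketch below uses the standard kupradze form with unit mass density, written in terms of the two fundamental solutions of the helmholtz equation recalled above; this is a textbook expression consistent with the quantities defined here, not a quotation of the elided display, and the lame constants and frequency are illustrative assumptions. the double gradient is approximated by central finite differences to keep the sketch short.

```python
# numerical sketch of the two-dimensional green tensor of the navier equation in the
# standard kupradze form (unit mass density assumed):
#   g(x, y) = phi_s(x, y)/mu * id + (1/omega^2) * hess_x[ phi_s(x, y) - phi_p(x, y) ],
# with phi_k(x, y) = (i/4) * h0^(1)(k |x - y|) the 2d helmholtz fundamental solution.
import numpy as np
from scipy.special import hankel1

lam, mu, omega = 2.0, 1.0, 3.0                      # assumed lame constants and frequency
c_p, c_s = np.sqrt(lam + 2 * mu), np.sqrt(mu)       # wave speeds
k_p, k_s = omega / c_p, omega / c_s                 # compressional and shear wavenumbers

def phi(k, x, y):
    # fundamental solution of the 2d helmholtz equation with wavenumber k
    return 0.25j * hankel1(0, k * np.linalg.norm(x - y))

def green_navier_2d(x, y, h=1e-4):
    g_scal = lambda x_: phi(k_s, x_, y) - phi(k_p, x_, y)
    hess = np.zeros((2, 2), dtype=complex)
    for i in range(2):
        for j in range(2):
            e_i, e_j = np.eye(2)[i], np.eye(2)[j]
            hess[i, j] = (g_scal(x + h * e_i + h * e_j) - g_scal(x + h * e_i - h * e_j)
                          - g_scal(x - h * e_i + h * e_j) + g_scal(x - h * e_i - h * e_j)) / (4 * h * h)
    return phi(k_s, x, y) / mu * np.eye(2) + hess / omega ** 2

x, y = np.array([1.0, 0.5]), np.array([0.0, 0.0])
print(green_navier_2d(x, y))     # symmetric 2x2 complex matrix
```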
denote a functional space : where is an integer and is a constant .hereafter , the notation stands for , where is a generic constant independent of , but may change step by step in the proofs .the following stability estimate is the main result of problem [ p1 ] .[ mrn ] let be the solution of the scattering problem corresponding to the source .then where first , it is clear to note that the stability estimate implies the uniqueness of problem [ p1 ] , i.e. , if .second , we observe that the stability estimate consists of two parts : the data discrepancy and the high frequency tail .the former is of the lipschitz type .the latter decreases as increases which makes the problem have an almost lipschitz stability .the result explains that the problem becomes more stable when higher frequency data is used . to prove theorem [ mrn ] , we begin with introducing two auxiliary functions : where is the unit propagation direction vector and are unit polarization vectors .these unit vectors may be chosen as follows : 1 . for , , and satisfy and for all ] .in fact , and are known as the compressional and shear plane waves .it is easy to verify that they satisfy the homogeneous navier equation in : and [ nfe ] let be the solution of the scattering problem corresponding to the source .then we first show the proof for the two dimensional case in details , and then briefly present the proof for the three dimensional case since the steps are similar .\(i ) consider .let with . the compressional plane wave in can be written as .multiplying the both sides of by , using the integration by parts over , and noting , we obtain a simple calculation yields that which gives noting , we get combining the above estimates and using the cauchy schwarz inequality yields hence we have using the polar coordinates , we have let with .the shear plane wave in can be written as . multiplying on both sides of , using the integration by parts , and noting , we may similarly get using the polar coordinates, we can verify that let and take .then and they form an orthonormal basis in for any ] . using the pythagorean theorem yields following similar arguments as those in , we get from that which completes the proof . for ,let for , let denote a sector the integrands in are analytic functions of the angular .the integrals with respect to can be taken over any path joining points and in .thus and are analytic functions of .[ ni ] let .for any , we have : 1 .when , 2 .when , we first show the proof for the three - dimensional case and then show the proof for the two - dimensional case .\(i ) consider . recalling , we split the green tensor into two parts : where let .noting , we have from a simple calculation that where and here we have used first is to estimate . noting that and we have from the cauchy schwarz inequality that next is to estimate . for any ,considering the following power series we obtain substituting into and using the integration by parts , we have noting , we have using and the cauchy schwarz inequality gives combining and proves . 
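the key property of the auxiliary functions, namely that the compressional and shear plane waves satisfy the homogeneous navier equation, can be verified symbolically. the following sketch ( two dimensions , unit density assumed ) applies the navier operator to both plane waves and simplifies the result to zero.

```python
# symbolic verification (two dimensions, unit density) that the auxiliary plane waves
# satisfy the homogeneous navier equation
#   mu * laplacian(u) + (lam + mu) * grad(div u) + omega^2 * u = 0,
# with u_p = d exp(i k_p x.d), u_s = d_perp exp(i k_s x.d),
# k_p = omega/sqrt(lam + 2 mu), k_s = omega/sqrt(mu).
import sympy as sp

x1, x2, omega, lam, mu = sp.symbols('x1 x2 omega lambda mu', positive=True)
th = sp.symbols('theta', real=True)                  # angle of the propagation direction
d = sp.Matrix([sp.cos(th), sp.sin(th)])              # propagation direction
d_perp = sp.Matrix([-sp.sin(th), sp.cos(th)])        # orthogonal polarization
k_p, k_s = omega / sp.sqrt(lam + 2 * mu), omega / sp.sqrt(mu)
xv = sp.Matrix([x1, x2])

def navier(u):
    div_u = sp.diff(u[0], x1) + sp.diff(u[1], x2)
    lap = sp.Matrix([sp.diff(u[i], x1, 2) + sp.diff(u[i], x2, 2) for i in range(2)])
    grad_div = sp.Matrix([sp.diff(div_u, x1), sp.diff(div_u, x2)])
    return mu * lap + (lam + mu) * grad_div + omega ** 2 * u

u_p = d * sp.exp(sp.I * k_p * (d.T * xv)[0])
u_s = d_perp * sp.exp(sp.I * k_s * (d.T * xv)[0])
print(sp.simplify(navier(u_p)))   # expected: Matrix([[0], [0]])
print(sp.simplify(navier(u_s)))   # expected: Matrix([[0], [0]])
```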
for , we have from , , and the integrations by parts that where and following the similar steps for and , we may estimate and , respectively , and prove the inequality .\(ii ) consider .similarly , let and where noting , we get and here we have used first is to estimate .recall and the expansions of in : where is the euler mascheroni constant and is a harmonic number defined by it is easy to verify which gives on the other hand , it can be shown that which yields it follows from the cauchy schwarz inequality that we have next is to estimate , which requires to evaluate the integral where letting and using the expansion of , we obtain it follows from the integration by parts that using the inequality , we get for any that combining and using the cauchy schwarz inequality , we obtain let where \\ & -\frac{2}{\pi}\frac{1}{(st)^2}\nabla_{\boldsymbol y}\nabla_{\boldsymbol y}^{\top}\left[\big ( { \rm ln}(\frac{1}{2}c_{\rm p}st|\boldsymbolx-\boldsymbol y|)+\gamma \big)j_0(c_{\rm p}st|\boldsymbol x-\boldsymbol y|)\right],\\ \mathbf{b}=&\frac{2}{\pi}\frac{1}{(st)^2 } \nabla_{\boldsymbol y}\nabla_{\boldsymbol y}^{\top}\sum_{k=1}^{\infty}(-1)^{k+1}h_k\frac { ( c_{\rm s}st|\boldsymbol x-\boldsymbol y|)^{2k}}{4^k(k!)^2}\\ & -\frac{2}{\pi}\frac{1}{(st)^2}\nabla_{\boldsymbol y}\nabla_{\boldsymbol y}^{\top}\sum_{k=1}^{\infty}(-1)^{k+1}h_k\frac{(c_{\rm p}st|\boldsymbol x-\boldsymbol y|)^{2k}}{4^k(k!)^2}.\end{aligned}\ ] ] we consider the matrix first .using the integration by parts yields it is easy to verify from that which gives for any that using the cauchy schwarz inequality and noting , we have now we consider the matrix . using the identity for any two smooth functions and : and we split into three parts : where for , we have from and the expansion of that noting is analytic when , we get from the cauchy schwarz inequality that for , using the analyticity of for , the integration by parts , and the estimate of , we have now we consider noting and using the expansion of : we have from the integration by parts that where {\rm d}\boldsymbol y\\ & -\int_{\omega}\frac{2}{\pi}\frac{1}{(st)^2 } \sum_{k=1}^{\infty } \frac{(-1)^k}{4^k } \frac{(c_{\rm p}st|\boldsymbol x-\boldsymbol y|)^{2k}}{(k!)^2}\nabla_{\boldsymbol y}\nabla_{\boldsymbol y}\cdot \left[\big({\rm ln}(\frac{1}{2}c_{\rm p}|\boldsymbol x-\boldsymbol y| ) + c_0\big)\boldsymbol{f}(\boldsymbol y)\right]{\rm d}\boldsymbol y\end{aligned}\ ] ] and since the function is analytic for , , we have from and the cauchy schwarz inequality that it is easy to verify that multiplying by and using the cauchy schwarz inequality , we obtain combining , we obtain which implies combining and completes the proof for the inequality . for , we have from and the integration by parts that using the estimates for the integrals involving and , which we have obtained for , and the cauchy schwarz inequality , we may similarly get which shows and completes the proof .[ nhfe ] let .we have for any that let where \(i ) consider . using and noting , we have where noting , using the polar coordinates originated at with respect to and the integration by parts , we obtain which gives after using that changing back to the cartesian coordinates with respect to , we have where we have used the fact that . 
for , it follows from the integration by parts that we may follow the same steps as those for to show noting , using and the integration by parts , we get again , we may follow similar arguments as those for and to get combining completes the proof for the three dimensional case .\(ii ) consider .noting , we have where the hankel function can be expressed by the following integral when ( e.g. , , chapter vi ) : using the polar coordinates originated at with respect to and noting , we have let it is easy to verify that using the integration by parts yields consequently , we have it is easy to note from that there exists a constant such that for .hence , changing back to the cartesian coordinates with respect to , we have using the integration by parts yields we may follow a similar proof for to show that next is to consider again , we use and the integration by parts to get following similar arguments as those for and , we have combing completes the proof for the two dimensional case .[ ni12 ] let .then there exists a function satisfying such that it follows from lemma [ ni ] that recalling , we have .\ ] ] an direct application of lemma [ caf ] shows that there exists a function satisfying such that which completes the proof .now we show the proof of theorem [ mrn ] .we can assume that , otherwise the estimate is obvious .let if , then we have from lemma [ ni12 ] that we have using the elementary inequality we get if , then .we have from that here we have used the fact that hence we obtain from lemma [ nhfe ] and that , we have since , we finish the proof and obtain the stability estimate . in this section , we discuss the stability at a discrete set of frequencies .let us first specify the discrete frequency data . for , let and define two angular frequencies the corresponding wavenumbers are recall the boundary measurement at continuous frequencies : now we define the boundary measurements at discrete frequencies : since the discrete frequency data can not recover the fourier coefficient of at , i.e. , is missing , we assume that . 
otherwise we may replace by , where is the characteristic function , such that has a compact support and .in fact , when , the navier equation reduces to integrating on both sides on and using the integration by parts , we have which implies that can be indeed recovered by the data corresponding to the static navier equation .hence we define [ p2 ] let .the inverse source problem is to determine from the displacement \cup\cup_{n=1}^n \{\omega_{{\rm p } , n } , \omega_{{\rm s } , n}\} ] and , the following estimate holds : taking the compressional plane wave and using similar arguments as those in lemma [ nfed ] , we are able to obtain let the shear plane wave be , where is a unit vector such that we may similarly get noting , we have from the pythagorean theorem that the proof is the same for the three - dimensional case when we take two orthonormal polarization vectors and such that form an orthonormal basis in .the details is omitted for brevity .[ ni12e ] let .then there exists a function satisfying such that we fix a propagation direction vector and consider those which are parallel to .define it follows from the cauchy schwarz inequality that there exists a positive constant depending on such that which gives noting and using lemma [ ff ] , we have .\ ] ] applying lemma [ caf ] shows that there exists a function satisfying such that which yields that noting that the constant does not depend on , we have obtained for all that which completes the proof .now we show the proof of theorem [ mrnd ] . applying lemma [ fst ] and lemma [ nfed ] ,we have let , & n^{\frac{3}{8}}<\frac{1}{2^{\frac{5}{6 } } \pi^{\frac{2}{3}}}|\ln\epsilon_3|^{\frac{1}{9}},\\ n,&n^{\frac{3}{8}}\geq \frac{1}{2^{\frac{5}{6 } } \pi^{\frac{2}{3}}}|\ln\epsilon_3|^{\frac{1}{9 } } \end{cases}.\ ] ] using lemma [ ni12e ] leads to hence we have .\ ] ] if , then and .\end{aligned}\ ] ] using yields consequently , we obtain here we have used that when . if , we have + 1\right)^{2m - d+1}\geq \left(|\ln\epsilon_3|^{\frac{1}{9}}n^{\frac{3}{4 } } \right)^{2m - d+1}.\ ] ] if , then .it follows from lemma [ nfed ] that combining the above estimates and lemma [ tfe ] , we obtain noting that and , , we complete the proof after combining the above estimates .this section discusses the inverse source problem for electromagnetic waves .we discuss the uniqueness of the problem and then show that the increasing stability can be achieved to reconstruct the radiating electric current density from the tangential trace of the electric field at multiple frequencies .we consider the time - harmonic maxwell equations in a homogeneous medium : where is the wavenumber , and are the electric field and the magnetic field , respectively , is the electric current density and is assumed to have a compact support .the problem geometry is the same as that for elastic waves and is shown in figure [ pg ] .the silver mller radiation condition is required to make the direct problem well - posed : eliminating the magnetic field from ,we obtain the decoupled maxwell system for the electric field : given , it is known that the scattering problem has a unique solution ( cf . ) : where is green s tensor for the maxwell system . 
explicitly, we have where is the fundamental solution of the three - dimensional helmholtz equation and is given in .let and be the tangential trace of the electric field and the magnetic field , respectively .it is shown in that there exists a capacity operator such that which implies that can be computed once is available on .the transparent boundary condition can be equivalently written as it follows from that we define the following boundary measurement in terms of the tangential trace of the electric field only : [ p3 ] let be the electric current density with the compact support .the inverse source problem of electromagnetic waves is to determine from the tangential trace of the electric field for in this section , we discuss the uniqueness and non - uniqueness of problem [ p3 ] .the goal is to distinguish the radiating and non - radiating current densities .we study a variational equation relating the unknown current density to the data on .multiplying by the complex conjugate of a test function on both sides , integrating over , and using the integration by parts , we obtain \cdot \bar{\boldsymbol \xi } { \rm d}\gamma = { \rm i}\kappa\int_{b_r } \boldsymbol{j}\cdot\bar{\boldsymbol \xi } { \rm d}\boldsymbol{x}.\end{aligned}\ ] ] substituting into , we obtain the variational problem : to find such that given , the variational problem can be shown to have a unique weak solution ( cf . ) .assuming that is a smooth function , we take the integration by parts one more time of and get the identity : now we choose to satisfy which implies that is a weak solution of the maxwell system : using this choice of , we can see that becomes for all satisfying. denote by be the closure of the set in the norm .we have the following orthogonal decomposition of : it is shown in that is an infinitely dimensional subspace of , which is stated in the following lemma .[ phi ] let . if , then .it follows from lemma [ phi ] that is a proper subspace of .given , only the component of in can be determined from the data on .moreover , it is impossible that some other equation could be derived to determine the component of in from the data on .[ nuniq ] suppose .then does not produce any tangential trace of electric fields on and thus can not be identified . 
since , we have from that which yields here is the adjoint operator of .let more precisely , satisfies the variational problem it is shown in that there exists a unique solution to the above boundary value problem for any , where is the surface gradient .hence we have from that which yields that on and completes the proof .the electric current densities in are called non - radiating sources .it corresponds to find a minimum norm solution when computing the component of the source in .it is shown in theorem [ nuniq ] that can not be determined from the tangential trace of the electric field on if .we show in the following theorem that it is also impossible to determine from the normal component of the electric field on if .suppose .then does not produce any normal component of electric fields on .let .multiplying both sides of by and integrating on , we have it follows from the integration by parts that noting and , and using theorem [ nuniq ] , we obtain combining the above equations gives which implies we have from the integration by parts that on the other hand , since then by taking the divergence on both sides , we have hence combing , we get which implies that on and completes the proof .the following theorem concerns the uniqueness result of problem [ p3 ] .suppose , then can be uniquely determined by the data on .it suffices to show that if on .it follows from that we have taking yields that which completes the proof .taking account of the uniqueness result , we revise the inverse source problem for electromagnetic waves and seek to determine in the smaller space .[ p4 ] let .the inverse source problem of electromagnetic waves is to determine from the tangential trace of the electric field for , where is a constant .define a functional space where is an integer and is a constant .the following is our main result regarding the stability for problem [ p4 ] .[ mrm ] let be the solution of the scattering problem corresponding to .then where the stability estimate is consistent with that for elastic waves in .it also has two parts : the data discrepancy and the high frequency tail .the ill - posedness of the inverse problem decreases as increases .we begin with several useful lemmas .[ mje ] let be the solution of corresponding to the source .then let and be the electric and magnetic plane waves .explicitly , we have where is the unit propagation vector , and are two unit polarization vectors and satisfy for all , \varphi\in[0 , 2\pi] ] , where .the following stability estimate is the main result of problem [ p5 ] .[ dm ] let be the solution of the scattering problem corresponding to the source .then where }\|\boldsymbol e(\cdot , \kappa)\|_{\gamma_r}.\end{aligned}\ ] ] the estimate for the discrete frequency data is also consistent with the estimate for the continuous frequency data .they are analogous to the relationship between and for elastic waves .we begin with several useful lemmas .[ mjed ] let be the solution of corresponding to the source .then for all , the fourier coefficients of satisfy give any , let .consider the following electric and magnetic plane waves : where and are chosen such that form an orthonormal basis in .it is easy to verify that and satisfy the maxwell equations : and multiplying the both sides of by , using the integration by parts over and , we obtain a simple calculation yields that which gives similarly , we have on the other hand , since has a compact support contained in and , we obtain that is a weak solution of the maxwell system : multiplying the 
above equation by and using integration by parts , we get noting , we get which yields from the pythagorean theorem that which completes the proof .[ jhfd ] let for any , the following estimate holds : let and choose .then we have , which means that .let .noting and using integration by parts , we obtain hence we have noting that there are at most elements in \{ , } , we get which completes the proof .[ ffm ] let be the solution of corresponding to the source .for any $ ] and , the following estimate holds : let such that and consider the electric plane wave and magnetic plane wave . noting and using similar arguments as those in lemma [ mjed ] , we get and hence we have from the pythagorean theorem that which completes the proof .[ ni12 m ] let .then there exists a function satisfying such that we fix and consider which parallel to .define it is easy to show from the cauchy schwarz inequality that there exists a constant depending on such that which gives using lemma [ ffm ] yields .\ ] ] an direct application of lemma [ caf ] shows that there exists a function satisfying such that hence we obtain noting that the constant does not depend on , we have obtained that for all with such that which completes the proof .the proof of theorem [ dm ] is similar to that for theorem [ mrnd ] .we briefly present it for completeness .applying lemma [ fst ] and the parseval identity , we have let , & n^{\frac{3}{8}}<\frac{1}{2^{\frac{5}{6 } } \pi^{\frac{2}{3}}}|\ln\epsilon_6|^{\frac{1}{9}},\\ n,&n^{\frac{3}{8}}\geq \frac{1}{2^{\frac{5}{6 } } \pi^{\frac{2}{3}}}|\ln\epsilon_6|^{\frac{1}{9 } } \end{cases}.\ ] ] using lemma [ ni12 m ] leads to hence we have .\ ] ] if , then and .\end{aligned}\ ] ] using , we have consequently , we obtain here we have noted that when . if , we have + 1\right)^{2m-4}\geq\left(|\ln\epsilon_2|^{\frac{1}{9}}n^{\frac{3}{4}}\right)^{2m-4}.\ ] ] if , then .it follows from lemma [ mjed ] that combining the above estimates and lemma [ jhfd ] , we obtain noting that and , . the proof is completed by combining the above estimates .we have a unified theory on the stability in the inverse source problem for elastic and electromagnetic waves . for elastic waves ,the increasing stability is achieved to reconstruct the external force . for electromagnetic waves ,the increasing stability is obtained to reconstruct the radiating electric current density .the analysis requires the dirichlet data only at multiple frequencies .the stability estimates consist of the data discrepancy and the high frequency tail .the result shows that the ill - posedness of the inverse source problem decreases as the frequency increases for the data .a possible continuation of this work is to investigate the stability with a limited aperture data , i.e. , the data is only available on a part of the boundary .since the neumann data can not be represented via the limited dirichlet data by using the dtn map , a new technique is needed , and maybe both the dirichlet and neumann data are required in order to obtain the increasing stability . another more challenging problem is to study the stability in the inverse source problem for inhomogeneous media , where the analytical green tensors are not available any more and the present method may not be directly applied .we hope to address these issues and report the progress in the future .in this section , we list the notation for some differential operators used in this paper . 
first we introduce the notation in two - dimensions .let .let and and be a scalar and vector function , respectively .we introduce the gradient and the jacobi matrix : and the scalar curl and the vector curl : it is easy to verify that and next we introduce the notation in three - dimensions .let .let and and be a scalar and vector function , respectively .we introduce the gradient , the curl , and the jacobi matrix : it can be also verified that and this section , we present the helmholtz decomposition for the displacement which is used to introduce the kupradze sommerfeld radiation condition in section 2 .since the source has a compact support , the elastic wave equation reduces to first we introduce the helmholtz decomposition in the two - dimensions . for any solution of, we let where and are scalar potential functions . substituting into gives which is fulfilled if and satisfy the helmholtz equations : it follows from and that we get using again yields where and are the compressional part the shear part , respectively , given by next we introduce the helmholtz decomposition in the three - dimensions .for any solution of , the helmholtz decomposition reads where is a scalar potential function and is a vector potential function . substituting into gives which implies that and satisfy the helmholtz equations : similarly , we have from and that where by the hilbert space of square integrable functions .denote by the soblev space which consists of square integrable weak derivatives up to order and has the norm characterized by introduce the soblev space which is equipped with the norm let be the standard trace functional space .given , it has the fourier expansion the -norm is characterized by given , it has the fourier expansion where is the spherical harmonics of order .the -norm is characterized by define a tangential trace functional space where is the unit outward normal vector on and is the surface scalar curl on .[ fst ] let be a box . for , define the fourier coefficients then has the fourier series expansion in the -sense , i.e. , moreover , [ caf ] let be analytic in the sector and continuous in satisfying ,\\ |p(z)|\leq m , & z\in\mathcal{v},\\ |p(0)|=0 , & z=0 , \end{cases}\ ] ] where are positive constants .then there exits a function satisfying such that j. h. bramble and j. e. pasciak , a note on the existence and uniqueness of solutions of frequency domain elastic wave problems : a prior estimates in , j. math .appl . , 345 ( 2008 ) , 396404 . m. eller , v. isakov , g. nakamura , d. tataru , uniqueness and stability in the cauchy problem for maxwell s and elasticity systems , in : d. cioranescu , j .- l. lions ( eds . ) , nonlinear partial differential equations and their applications , north - holland , elsevier science , 2002 , 329351 .t. nara , j. oohama , m. hashimoto , t. takeda , and s. ando , direct reconstruction algorithm of current dipoles for vector magnetoencephalography and electroencephalography , phys .52 ( 2007 ) , 38593879 .
|
this paper concerns the stability in the inverse source problem for time - harmonic elastic and electromagnetic waves , which is to determine the external force and the electric current density from the boundary measurement of the radiated wave field , respectively . we develop a unified theory to show that the increasing stability can be achieved to reconstruct the external force of the navier equation and the radiating electric current density of maxwell s equations for either the continuous frequency data or the discrete frequency data . the stability estimates consist of the lipschitz type data discrepancy and the high frequency tail of the source functions which decreases as the frequency increases . the results explain that the inverse source problem becomes more stable when higher frequency data is used . our analysis uses the transparent boundary conditions and green s tensors for the wave equations , and requires the dirichlet boundary data only at multiple frequencies .
|
magnetic field extrapolation is an important tool to study the three - dimensional ( 3d ) solar coronal magnetic field , which is difficult to measure directly .the models being used most popularly for field extrapolation are the potential field model , the linear force - free field model , and the nonlinear force - free field ( nlfff ) model .these models are all based on the same assumption that the lorentz force is self - balancing in the corona , but adopt different simplifications of the current distribution . among these models ,the nlfff model is the most precise one for characterizing magnetic field in the low corona , where there is significant and localized electric current , especially in active regions . regarding the nlfff extrapolation, it is routine to use the vector magnetograms observed on the photosphere as input , at least in most of the available extrapolation codes ( _ e.g. _ , * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ; * ? ? ?this , however , poses a basic problem ( also a major headache ) to the force - free field modelers , because the magnetic field in the photosphere is forced by the plasma significantly , which is in conflict with the fundamental assumption of force - freeness . from the photosphere to the corona , the magnetic field passes through a highly stratified and inhomogeneous plasma environment with plasma varying abruptly from to ; thus the force - free condition can not be fulfilled globally . by a study of the observed chromospheric field in a sampled active region , conclude that the magnetic field is not force - free in the photosphere , but becomes force - free roughly km above the photosphere .a recent statistical study by using a large number of magnetograms from huairou solar observing station gives similar conclusions .this complication leads to the desire to use measurements of the vector field in the force - free upper chromosphere instead .however , the vector field is not as easily measured in the chromosphere as in the photosphere .even the chromospheric field is measured , it is still problematic for extrapolation since the surface in which any particular magnetically - sensitive line will form varies in time and space , and in particular the height will be different along different lines of sight .so one can not assume that the vector field is given on a plane or sphere at the bottom of the extrapolation volume as in the case of the photospheric magnetograms .the practical use of the chromospheric magnetograms as boundary conditions for extrapolations is still to be explored . as an alternative way to alleviate the problem, one can consider to modify the photospheric magnetograms to simulate the force - free chromospheric magnetograms , which is first suggested by . since the interface between the photosphere and the bottom of the force - free domain is rather thin ( say , about 400 km ) , especially if compared with the spatial scale of the coronal field ( about tens of megameters ) , the basic field structures of the chromosphere should be very similar to those of the photosphere , except that ( i ) there must be some smoothness of the structures due to the fast expansion of field from the high- to low- regions and ( ii ) the very fine magnetic elements are just closed within this interface and thus show no signal in the force - free domain . 
thanks to these reasons , modifications that need to be made on the photospheric field to mimic the force - free chromospheric field ought to be not significant and can hopefully be made within / around the error margins of the measurement .the procedure of modifying a raw photospheric magnetogram to a force - free chromospheric one is usually called ` preprocessing ' . to guide the preprocessing , there are constraints that must be fulfilled by the target magnetogram . on the boundary surface of an ideally force - free field in a volume ,the field satisfies the following necessary conditions where .these expressions are derived from the volume integrals of the total magnetic force and torque where is magnetic stress tensor , and .generally the surface integration has to be carried out over a closed volume , but in preprocessing magnetograms for extrapolation of a computational cube , the surface integrals of equation ( [ eq : m11 ] ) is usually restricted within the bottom magnetogram since the contribution from other ( side and top ) boundaries is small and negligible , and in the following will represent only the area of magnetograms . with this assumption ,equation ( [ eq : m1 ] ) is the component form of the surface integrals in equation ( [ eq : m11 ] ) .so the first task of preprocessing is to drive the raw magnetogram to fulfill the constraints of equation ( [ eq : m1 ] ) and thus to be closer to an ideally force - free magnetogram .this task is also dubbed as ` removing force ' in the forced magnetogram .the second task of preprocessing is to smooth the raw data to mimic the field expansion .smoothing is also very necessary for the practical computation based on numerical difference with limited resolution , which can not resolve sufficiently small structures in the raw data . besides, smoothing can remove measurement noise and increase signal - to - noise ratio .several preprocessing codes have been developed and they share the basic approach proposed by . a functional is designed by adding up the deviations from the constraints of equation ( [ eq : m1 ] ) , the terms that control deviation from the raw data and the smoothness with different weights , _e.g. _ , where is the weighting factor , , , , and measures the roughness of the data .then the target magnetogram is searched by minimizing the functional using an optimization method .different algorithms of smoothing and optimization have been utilized and the results are more or less different , as shown in a comparison study by .also the differences can result from different choices of the weighting factors . under this framework of preprocessing ,there are two problems not well addressed , namely , to what extent the force is needed to be removed and to what extent the smoothing can be performed ?we care about these problems from both numerical and physical considerations .ideally we prefer the map to satisfy the force - free constraints precisely , but this condition need not be satisfied strictly considering that numerical discretization error is unavoidable in the extrapolation with finite resolution .the smoothing also ought not to be done arbitrarily if we want to mimic the expansion of the field from the photosphere to some specific height above . 
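The generic functional described in this paragraph (before the magnetic splitting of the present paper is introduced) is easiest to see in code. The sketch below is a hedged reconstruction, not the authors' implementation: the six force and torque moments are the usual necessary conditions for a force-free field evaluated on the bottom magnetogram (the equations themselves are stripped from the text), the roughness measure is the five-point Laplacian introduced later in the paper, and the weights are purely illustrative.

```python
# Hedged sketch of the preprocessing functional
#   L = mu1*L_force + mu2*L_torque + mu3*L_obs + mu4*L_smooth
# evaluated on a candidate magnetogram (bx, by, bz); obs = (bx_obs, by_obs, bz_obs).
import numpy as np

def laplace5(b):
    """Five-point 2D Laplacian of one field component (interior pixels only)."""
    return (b[:-2, 1:-1] + b[2:, 1:-1] + b[1:-1, :-2] + b[1:-1, 2:]
            - 4.0 * b[1:-1, 1:-1])

def preprocessing_functional(bx, by, bz, obs, mu=(1.0, 1.0, 1e-3, 1e-2)):
    ny, nx = bz.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # net-force and net-torque moments of the magnetogram (assumed standard forms)
    l_force = (np.sum(bx * bz)**2 + np.sum(by * bz)**2
               + np.sum(bz**2 - bx**2 - by**2)**2)
    l_torque = (np.sum(x * (bz**2 - bx**2 - by**2))**2
                + np.sum(y * (bz**2 - bx**2 - by**2))**2
                + np.sum(y * bx * bz - x * by * bz)**2)
    # deviation from the observed data and roughness of the candidate
    l_obs = sum(np.sum((b - b0)**2) for b, b0 in zip((bx, by, bz), obs))
    l_smooth = sum(np.sum(laplace5(b)**2) for b in (bx, by, bz))
    mu1, mu2, mu3, mu4 = mu
    return mu1 * l_force + mu2 * l_torque + mu3 * l_obs + mu4 * l_smooth
```

Minimizing this weighted sum over the candidate magnetogram is the optimization task that the weighting factors are meant to balance.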
over - smoothing of the data may smear the basic structures while a too - limited smoothing can not filter the small - scale noise sufficiently .a careful choice of the weighting factors is required to deal with these problems .this paper is devoted to handling these problems in the preprocessing .we use the values of force - freeness and smoothness calculated from numerical potential - field solution at some height above the photosphere as a reference to guide the preprocessing . based on a simple rule that any magnetic field can be split into two parts : a potential field and a non - potential field ,we develop a new preprocessing code using this splitting of the magnetic field , which is consistent with our extrapolation code cese nlfff .we show below how the raw magnetogram can be driven to force - free and smooth with the same level as that of the numerical potential field at a height of roughly 400 km above the photosphere , _i.e. _ , the bottom of the force - free domain .the remainder of the paper is organized as follows . in section[ sec : method ] we give the basic method and formulas , and we show how to choose the weighting factors in section [ sec : mu ] .we then apply the method to preprocess two sampled magnetograms taken by sdo / hmi and analyze the results in section [ sec : res ] .finally discussion and conclusions are given in section [ sec : con ] .generally the coronal magnetic field can be split into two parts : a potential field matching the normal component of the bottom magnetogram , and a non - potential part with the normal field vanishing at the bottom .particularly , of the vector magnetogram , the magnetic field can be written as where are the components of the potential part and the components of the non - potential part .note that and .supposing is a force - free magnetogram and since its potential part already fulfills the force - free conditions of equation ( [ eq : m1 ] ) , we can derive special force - free conditions for its non - potential part , which are expressed as where we denote .the derivation is straightforward , for example and we have all other expressions in equation ( [ eq : m3 ] ) can be derived easily in the similar way .let denote the observed photospheric field , _ i.e. _ , the raw magnetogram , and its splitting form is with and denoting the potential and non - potential parts , respectively . here is computed based on using the potential field model and then is also obtained .the computation of a potential field needs only the normal component of the field on the bottom and is now a trivial task , which can be carried out conveniently by using the green s function method or other much faster scheme .generally , does not fulfill the force - free conditions of equation ( [ eq : m3 ] ) . if without smoothing , we only need to let and reduce the non - potential part to satisfying equation ( [ eq : m3 ] ) .for the purpose of smoothing , is obtained by taking the data at a plane just one pixel above the photosphere from the 3d potential field extrapolated from the observed .this is suitable for the sdo / hmi data which has a pixel size of about 360 km ( _ i.e. _ , 0.5 arcsec ) , an approximate height above which the coronal field becomes force - free according to . for magnetograms with other sizes of pixel , we need to take the potential field data at a given physical height ( where the force - free assumption becomes valid , _e.g. _ , 400 km ) and not necessarily one pixel above the photosphere . 
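The potential-field step just described can be sketched with a direct Green's-function (Schmidt-type) sum: from the observed normal component alone, evaluate the potential field one pixel (about 360 km for HMI) above the magnetogram plane. The normalization below is the standard half-space Neumann Green's function; this is an illustration, not the authors' code, and a real full-size magnetogram would use an FFT-based solver instead of this O(N^2 M^2) loop.

```python
# Hedged sketch of the Green's-function potential-field slice used for the
# potential part of the preprocessed magnetogram (all lengths in pixel units).
import numpy as np

def potential_field_slice(bz_obs, height=1.0):
    """Potential field (bx, by, bz) at z = height above the magnetogram plane,
    computed from the observed normal component bz_obs by a direct sum."""
    ny, nx = bz_obs.shape
    yg, xg = np.mgrid[0:ny, 0:nx].astype(float)
    bx = np.zeros((ny, nx))
    by = np.zeros((ny, nx))
    bz = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            dx = xg[j, i] - xg            # x - x'
            dy = yg[j, i] - yg            # y - y'
            r3 = (dx**2 + dy**2 + height**2) ** 1.5
            w = bz_obs / (2.0 * np.pi * r3)
            bx[j, i] = np.sum(w * dx)
            by[j, i] = np.sum(w * dy)
            bz[j, i] = np.sum(w * height)
    return bx, by, bz
```

The slice returned here then plays the role of the preprocessed potential part, and its force-freeness and smoothness values serve as the reference used below to set the weighting factors.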
obtained in this way can be regarded as the potential part of the chromospheric field , also a preprocessed counterpart of .of course , if the measurements of chromospheric longitudinal fields are available ( _ e.g. _ , * ? ? ?* ) , we recommend using those data directly to construct the potential part , which is certainly preferred over that based on the photospheric . the second task , to reduce to , is carried out using an optimization method similarly to .we intend to minimize the total magnetic force and torque which are quantified by where for convenience of presentation we denote here the summation is over all the pixels of the magnetogram , and these summations are the numerical counterparts of the integrals in equation ( [ eq : m3 ] ) .the observation term ( to restrict the deviation from the observed data ) and smoothing functional ( to control the smoothness ) are also considered by , \nonumber\\ l_{4 } & = & \sum_{\rm p}\left[(\delta b_{1x})^{2}+ ( \delta b_{1y})^{2}\right]\end{aligned}\ ] ] where is a usual five - point 2d - laplace operator , _i.e. _ , for the pixel this simply states that the smaller gives the smoother data . additionally , the above functionals ( where ) are normalized by which are given by \end{aligned}\ ] ] where we use a steepest descent method to minimize a weighted average of the above functionals where is the weighting factor .generally , the weighting factors for the magnetic force and torque are simply given by since there is no obvious reason to give bias on any of these two quantities .the determination of and will be described in the next section , and different combinations of them are tested for two hmi magnetograms to search the optimal choice in section [ sec : res ] .since is an explicit functional of the arguments , its gradient can be expressed at each pixel as \nonumber\\ + 2\frac{\mu_{2}}{n_{2}}\left [ l_{21}(2xb_{1x}+2xb_{0x})_{q}+ l_{22}(2yb_{1x}+2yb_{0x})_{q}+ l_{23}(yb_{0z})_{q } \right ] \nonumber\\ + 2\frac{\mu_{3}}{n_{3}}(b_{1x}-{\mathcal{b}}_{1x})_{q } + 2\frac{\mu_{4}}{n_{4}}(\delta(\delta b_{1x}))_{q},\\ \frac{\partial l}{\partial ( b_{1y})_{q } } = 2\frac{\mu_{1}}{n_{1}}\left [ l_{12}(b_{0z})_{q}+ l_{13}(2b_{1y}+2b_{0y})_{q}\right ] \nonumber\\ + 2\frac{\mu_{2}}{n_{2}}\left [ l_{21}(2xb_{1y}+2xb_{0y})_{q}+l_{22}(2yb_{1y}+2yb_{0y})_{q}+ l_{23}(-xb_{0z})_{q } \right ] \nonumber\\ + 2\frac{\mu_{3}}{n_{3}}(b_{1y}-{\mathcal{b}}_{1y})_{q } + 2\frac{\mu_{4}}{n_{4}}(\delta(\delta b_{1y}))_{q}.\end{aligned}\ ] ] the procedure of the steepest descent is performed as follows .we start from an initial guess , _e.g. _ , the observed data , and march the solution in each iteration along the steepest descent direction ( _ i.e. _ , opposite to the gradient direction ) by it is important to choose a proper step size at each step to maximize the local descent .this can be determined by a bisection line - search algorithm to solve the one - dimensional optimization problem at every iteration the iteration is terminated if the maximum residual of the field {\max}$ ] , defined by {\max } = \lambda_{k } \max[\nabla l ( { { \mathbf b } } _ { 1}^{k } ) ] , \ ] ] is smaller than gauss ( g ) for 10 successive steps .a careful choice of optimal weighting factors is critical for a good performance of preprocessing . 
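The descent loop just outlined can be written generically. The sketch below takes the functional and its gradient as callables and uses a simple interval search for the step size (the paper's bisection line search may differ in detail), stopping once the maximum field residual stays below a small threshold for 10 successive steps.

```python
# Hedged sketch of the steepest-descent iteration with a 1D line search.
import numpy as np

def line_search(func, b, direction, lam_max=1.0, iters=40):
    """Approximately minimize  phi(lam) = func(b + lam*direction)  on [0, lam_max]."""
    lo, hi = 0.0, lam_max
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if func(b + m1 * direction) < func(b + m2 * direction):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def steepest_descent(func, grad, b0, tol=1e-2, max_iter=5000, patience=10):
    b, quiet = b0.copy(), 0
    for _ in range(max_iter):
        g = grad(b)
        lam = line_search(func, b, -g)
        b = b - lam * g
        residual = lam * np.max(np.abs(g))   # max change of the field (in G)
        quiet = quiet + 1 if residual < tol else 0
        if quiet >= patience:
            break
    return b
```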
using the qualities of force - freeness and smoothness of the numerical potential part as a reference , we guide the optimization of according to the following constraints : i ) the residual force and torque in the target magnetogram should be reduced to the same order as those in ; ii ) the smoothness of the target magnetogram should reach the same level as that of ( since , it means that the smoothness of and should match that of . this is reasonable since there is no preference for any component of the vector ) ; iii ) with the constraints i ) and ii ) fulfilled , the deviation between the target magnetogram and the observed data should be minimized . in the constraints the residual magnetic force and torque of the data are quantified by two parameters , and , defined as usual , and the smoothness of component ( ) is measured by /\sum_{\rm p}\left[(\overline{\delta } b_{m})^{2}\right] . besides the above constraints , the total number of iteration steps needed by the computation is also considered if the magnetogram 's resolution is very high , since the computing time of the preprocessing may be rather long .
[ figure : magnetograms of the two sampled active regions ; the vectors represent the transverse field and only the field stronger than a threshold ( in g ) is plotted ; the length unit is 0.5 arcsec . ]
[ figure : preprocessing results for different values of the weighting factors , plotted with different colors and line styles as denoted in the top left panel ; the dashed lines in the top right and middle right panels represent the parameter values of the potential part , which is used as the reference to choose the optimal weights ; the optimal weights are chosen such that the residual force , torque , and smoothness parameters are close to those of the potential part while the deviation from the data is minimized ; a companion figure shows the same but for ar 11429 . ]
[ table [ tab : quality ] : quality of the magnetograms ; the preprocessed results are produced with the optimal weighting factors ; one parameter is the total magnetic flux normalized by the total unsigned flux . ]
[ figure : comparison of the original and preprocessed magnetograms ; over the images the contour lines mark the zero values of the field components ; a companion figure shows the same but for ar 11429 . ]
in this section we apply the preprocessing code to several magnetograms taken by sdo / hmi , and search for the optimal values of the weighting factors . the _ helioseismic and magnetic imager _ ( hmi ) on board the _ solar dynamics observatory _ ( sdo ) provides photospheric vector magnetograms with high resolution both in space and time . it observes the full sun with a 4k ccd whose spatial sampling is 0.5 arcsec per pixel . raw filtergrams are obtained at six different wavelengths and six polarization states in the fe i 6173 absorption line , and are collected and converted to observable quantities ( like dopplergrams , continuum filtergrams , and line - of - sight and vector magnetograms ) on a rapid time cadence . for the vector magnetic data , each set of filtergrams takes 135 s to be completed . to obtain vector magnetograms , stokes parameters are first derived from filtergrams observed over a 12-min interval and then inverted through the very fast inversion of the stokes vector .
the 180 azimuthal ambiguity in the transverse fieldis resolved by an improved version of the `` minimum energy '' algorithm .regions of interest with strong magnetic field are automatically identified near real time .a detailed description on how the vector magnetograms are produced can be found on the website //http://jsoc.stanford.edu/ jsocwiki / vectorpaper .the raw magnetograms we use here were downloaded from http://jsoc.stanford.edu/jsocwiki/releasenotes2 , where the hmi vector magnetic field data series ` hmi.b_720s_e15w1332 ` are released for several active regions .there are two special formats , _i.e. _ , direct cutouts and remapped images .we use the remapped format which is more suitable for modeling in local cartesian coordinates , since the images are computed with a lambert cylindrical equal area projection centered on the tracked region . for our test , we select two active regions , ar 11283 and ar 11429 , both of which produced x - class flares and thus were very non - potential .figure [ fig : rawmaps ] shows the magnetograms for ar 11283 at 05:36 ut on 8 september 2011 and ar 11429 at 00:00 ut on 7 march 2012 .the size of the magnetograms are respectively and pixels . in figures [ fig : result11283 ] and [ fig : result11429 ] we show the preprocessing results with different sets of and .since for such large magnetograms it is nontrivial to perform plenty of tests with continuous sets of weighting factors to pick the optimal one , we only compute the results for several groups of and as shown in the figures , _i.e. _ , and .for each set of weighting factors , the normalized terms and the smoothness and the force - free quality and are plotted . by comparing the results with fixed but different , we can see that the force - free parameters , and are almost entirely determined by . when decreasing , _i.e. _ , allowing more freedom of modifying the raw data , and decrease very quickly ( their magnitude decreases quicker than that of ) , but the residual force parameters and reach a minimum and can not be reduced any further .this is because the potential part has a non - zero value of and ( due to numerical error of finite resolution ) , which is the minimum of that can be reached for the target magnetograms .the results for both tests show that is small enough which gives and nearly the same as those of , meaning that the force in the non - potential part is decreased near or under the level of numerical error .even smaller can not improve the quality of force - freeness , but deviates the target magnetograms farther away from the original data .thus we set the optimal value of . with a given value of , the values of smoothness are controlled by .obviously is a good choice which gives the values of and very similar to , meaning that the smoothness of the target magnetograms is consistent with their potential part .we believe the choice of weight for smoothness here is more physics - based than in other methods in which it is given more or less arbitrarily .the results of preprocessing the two magnetograms with the optimal weighting factors and are summarized in table [ tab : quality ] .figures [ fig : map11238 ] and [ fig : map11429 ] show a comparison of the original and preprocessed magnetograms . herethe color - map is designed to manifest both strong and weak fields .as shown , the map after preprocessing resamples the feature of the original data while small structures tend to be smoothed out . 
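The quantities compared in this section — the residual force and torque parameters and the per-component smoothness — can be computed as below. The definitions follow the commonly used forms ("defined as usual" above); the exact normalization of the smoothness measure is not recoverable from the stripped equations, so the Laplacian-based ratio here is an assumption. These numbers are evaluated for each trial pair of weights and compared against the same numbers for the sliced potential field.

```python
# Hedged sketch of the force-freeness and smoothness diagnostics used to pick
# the optimal weighting factors.
import numpy as np

def eps_force_torque(bx, by, bz):
    """Residual net-force and net-torque parameters of a vector magnetogram."""
    ny, nx = bz.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    b2 = bx**2 + by**2 + bz**2
    eps_f = (abs(np.sum(bx * bz)) + abs(np.sum(by * bz))
             + abs(np.sum(bz**2 - bx**2 - by**2))) / np.sum(b2)
    eps_t = (abs(np.sum(x * (bz**2 - bx**2 - by**2)))
             + abs(np.sum(y * (bz**2 - bx**2 - by**2)))
             + abs(np.sum(y * bx * bz - x * by * bz))) / np.sum(np.hypot(x, y) * b2)
    return eps_f, eps_t

def smoothness(b):
    """Laplacian roughness of one component, normalized by its strength."""
    lap = (b[:-2, 1:-1] + b[2:, 1:-1] + b[1:-1, :-2] + b[1:-1, 2:]
           - 4.0 * b[1:-1, 1:-1])
    return np.sum(lap**2) / max(np.sum(b[1:-1, 1:-1]**2), 1e-30)
```

A small grid over the observation and smoothing weights then selects the pair for which these diagnostics best match the potential-field reference while the deviation from the observed data stays smallest.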
in the bottom of the figures we plot the results for the vertical current which is calculated by taking finite differences of the transverse field since the numerical differences are very sensitive to noise , random noise in the dataexhibits more clearly in the map , particularly in weak field regions , and they are suppressed effectively by the smoothing .histograms are plotted in figure [ fig : dbhistrogram ] to show the distributions of the absolute differences between the raw and preprocessed magnetograms over all the pixels .apparently different approaches of modifying the data give different distributions , _ i.e. _ , the distribution for the -component is distinct from those for the and -components .this is because the modification for is determined solely by the potential model , while modifications for are additionally made by the optimization process .the change in the vertical field is less than those for the transverse field .this is consistent with the observation which measures the line - of - sight component much more precisely than the transverse field , and thus we have more freedom to modify the transverse field .still it should be noted that here the potential - field modeled may not approximate well the real chromospheric , which is preferred to be taken by direct measurements if available .we finally show in figure [ fig : iter ] the process of iteration in the optimization method . only the result for ar 11283is plotted as an example .as shown , the functional decreased very quickly . by only 200 iteration steps, it almost reaches its minimum , reduced by about two orders of magnitude from its initial value .the sub - functionals and have similar evolution , although with small oscillations , and the sub - functional keeps decreasing but very slowly after 200 steps .in this paper we have developed a new code of preprocessing the photospheric vector magnetograms for nlfff extrapolation .the method is based on a simple rule that any vector magnetograms can be split into a potential field part and a non - potential part and we deal with two parts separately .preprocessing of the potential part is simply performed by taking the data sliced at a plane about km above the photosphere from the 3d potential - field numerical solution , which is extrapolated from the observed vertical field .then the non - potential part is modified by an optimization method to fulfill the constraints of total magnetic force - freeness and torque - freeness . as for practical computation based on numerical discretization , a strict satisfaction of force - free constraintsis apparently not necessary .also the extent of the smoothing to be applied to the data need to be carefully determined , if we want to mimic the field expansion from the photosphere to some specific height above .we use the values of force - freeness and smoothness calculated from the preprocessed potential - field part as a reference to guide the preprocessing of the non - potential field part , _ i.e. _ , we require that the target magnetograms have the same level of force - freeness and smoothness as the reference data. these requirements can restrict well the values of the free parameters , _i.e. _ , the weighting factors in the optimization functional .the code is applied to the sdo / hmi data .preprocessed results show that the method can remove efficiently the force and noise , if we properly choose the weighting factors . 
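For reference, the vertical current maps discussed at the beginning of this section come from centred finite differences of the transverse field; a minimal version is given below. The pixel scale (about 3.6e5 m for the 0.5 arcsec HMI sampling) and the gauss-to-tesla conversion are written out explicitly because differencing in physical units is exactly where the noise amplification mentioned above shows up.

```python
# Hedged sketch of the vertical current density from the transverse field.
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability, SI

def vertical_current(bx_gauss, by_gauss, dx_m=3.6e5):
    """J_z = (dBy/dx - dBx/dy) / mu0 by centred differences, in A m^-2."""
    bx = 1.0e-4 * bx_gauss   # gauss -> tesla
    by = 1.0e-4 * by_gauss
    dby_dx = (by[1:-1, 2:] - by[1:-1, :-2]) / (2.0 * dx_m)
    dbx_dy = (bx[2:, 1:-1] - bx[:-2, 1:-1]) / (2.0 * dx_m)
    return (dby_dx - dbx_dy) / MU0
```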
for two sampled hmi magnetograms , we find that the optimal weights are and , with which the target magnetograms can be driven to be force - free and smooth at the same level as that of the numerical potential field at the bottom of the force - free domain . the preprocessing code here is developed as a sub - program for a project of applying our extrapolation code cese nlfff to observed data . we have applied cese mhd nlfff to sdo / hmi data with both raw and preprocessed magnetograms . by a careful comparison of the results , we find that the quality of extrapolation is indeed improved using the preprocessed magnetograms , including the force - freeness of the results ( _ e.g. _ , measured by a current - weighted mean angle between the magnetic field and electric current ) and the free energy contents . for example , in the extrapolation of ar 11283 , the mean angle between and for the entire extrapolation box of pixels is reduced from to and the free energy is increased from erg to erg . this work is jointly supported by the 973 program under grant 2012cb825601 , the chinese academy of sciences ( kzzd - ew-01 - 4 ) , the national natural science foundation of china ( 41204126 , 41274192 , 41031066 , and 41074122 ) , and the specialized research fund for state key laboratories . data are courtesy of nasa / sdo and the hmi science teams . the authors thank the anonymous referee for invaluable comments .
|
numerical reconstruction / extrapolation of coronal nonlinear force - free magnetic field ( nlfff ) usually takes the photospheric vector magnetogram as input at the bottom boundary . magnetic field observed at the photosphere , however , contains force which is in conflict with the fundamental assumption of the force - free model , and measurement noise which is unfavorable for practical computation . preprocessing of the raw magnetogram has been proposed by to remove the force and noise and so provide better input for nlfff modeling . in this paper we develop a new code of magnetogram preprocessing which is consistent with our extrapolation method cese mhd nlfff . based on a magnetic - splitting rule that a magnetic field can be split into a potential field part and a non - potential part , we split the magnetogram and deal with the two parts separately . preprocessing of the magnetogram 's potential part is based on a numerical potential field model , and the non - potential part is preprocessed using an optimization method similar to that of . the code is applied to the sdo / hmi data and the results show that the method can efficiently remove the force and noise and improve the quality of extrapolation .
|
our purpose is to build an intensity - based modeling framework that can be used in trading and calibrating across the credit and equity markets .the same company has stocks , stock options , bonds , credit default swaps on these bonds , and several other derivatives .when this company defaults , the payoffs of all of these instruments are affected ; therefore , their prices all contain information about the default risk of the company .we build a model that can be jointly calibrated to corporate bond prices and stock options , and can be used to price more exotic derivatives . in our frameworkwe use the vasicek model for the interest rate , and use doubly stochastic poisson process to model the default of a given company .we assume that the bonds have recovery of market value and that stocks become valueless at the time of default . using the multi - scale modeling approach of obtain explicit bond pricing equation with three free parameters which we calibrate to the corporate bond term structure . on the other hand ,stock option pricing formula contain seven parameters , three of which are common with the bond option pricing formula .( the common parameters are multiplied with the loss rate in the bond pricing formula . )we calibrate the remaining set of parameters to the stock option prices .this hybrid model , therefore , is able to account for the default risk premium in the implied volatility surface .the calibration results reveal that our model is able to produce implied volatility surfaces that match the data closely .we compare the implied volatility surfaces that our model produces to those of .we see that even for longer maturities our model has a prominent skew : compare figures [ fig : imp7p ] and [ fig : impvfouque ] .even when we ignore the stochastic volatility effects , our model fits the implied volatility of the ford motor company well and performs better than the model of ; see figure [ fig : impvolford ] .this points to the importance of accounting for the default risk for companies with low ratings .once the model parameters are calibrated , the model can be used to compute the prices of more exotic options . to test whether our model produces correct prices we use the cds spread data and show that the model implied cds spread matches the out of sample " cds data . to compute the cds spread , under our assumption on the recovery , one needs to reconstruct the term structure of the treasury and the corporate bonds .moreover , one needs to separate the loss rate from the other parameters in the bond pricing formula ( see or for the cds spread formula ) .this separation is possible since we calibrate our model to corporate bond data and stock option data jointly as described above .the model - implied cds spread time series matches the observed cds spread time series of ford motor company for over a long period of time ; see figures [ fig : cds3yr ] and [ fig : cds5yr ] .this is an interesting observation since we did not make use of the cds spread data in our calibration .this observation also shows that one can use our model to trade across different markets that contain information about the default risk of a given firm .our model has three building blocks : ( 1 ) we model the default event using the multi - scale stochastic intensity model of .we also model the interest rate using an ornstein - uhlenbeck process ( vasicek model ) . 
as it was demonstrated in , these modeling assumptions are effective in capturing the corporate yield curve ; ( 2 ) we take the stock price process to follow a stochastic volatility model which jumps to zero when the company defaults .this stock price model was considered in .our model specification for the stock price differs from the jump to default models for the stock price considered by and , which take the volatility and the default intensity to be functions of the stock price ; ( 3 ) we also account for the stochastic volatility in the modeling of the stocks since even the index options ( when there is no risk of default ) possess implied volatility skew .we model the volatility using the fast scale stochastic volatility model of .we demonstrate on index options ( when there is no risk of default ) that ( see section [ sec : demnst ] ) , we match the performance of the two time scale volatility model .the latter model extends by including a slow factor in the volatility to get a better fit to longer maturity option .we see from section [ sec : demnst ] that when one assumes the interest rate to be stochastic , the calibration performance of the stochastic volatility model with only the fast factor is as good as the two scale stochastic volatility model , which is why we choose the volatility to be driven by only the fast factor . even though the interest rate is stochastic in our model , we are able to obtain explicit asymptotic pricing formulas for stock options .thanks to these explicit pricing formulas the inverse problem that we face in calibrating to the corporate bond and stock data can be solved with considerable ease .our modeling framework can be thought of as a hybrid of the models of , which only considers pricing options in a stochastic volatility model with constant interest rate , and , which only considers a framework for pricing derivatives on bonds .neither of these models has the means to transfer information from the equity markets to bond market or vice versa , which we are set to do in this paper .we should also note that our model also takes input from the treasury yield curve , historical stock prices , and historical spot rate data to estimate some of its parameters ( see section [ sec : calibration ] ) .our model extends by taking the interest rate process to be stochastic , which leads to a richer theory and more calibration parameters , and therefore , better fit to data : ( i ) when the interest rate is deterministic the corporate bond pricing formula turns out to be very crude and does not fit the bond term structure well ( compare ( 2.57 ) in and ) ; ( ii ) with deterministic interest rates the bond pricing and the stock option pricing formulas share only one common term , `` the average intensity of default '' ( this parameter is multiplied by the loss rate in the bond pricing equation , under our loss assumptions ) .therefore , the default premium in the implied volatility surface is not accounted for as much as it should be . andour calibration analysis demonstrates that this has a significant impact .when the volatility is taken to be constant , both our new model and the model in have three free parameters .the model in produces a below par fit to the implied volatility surface ( see e.g. 
figure 5 in that paper ) , whereas our model produces an excellent fit ( see section [ sec : fordsimplied ] and figure [ fig : impvolford ] ) ; ( iii ) to calculate the cds spread , in the constant interest rate model , one needs to separate the loss rate and the average intensity of default .this is again established calibrating the model to the bond term structure data and the stock option implied volatility surface .the estimates for the average intensity and the loss rate are not as accurate in as it is in our model because of ( i ) and ( ii ) .this crude estimation leads to a poor out of sample match to the cds spread time series .the other defaultable stock models are those of , and , which assume that the interest rate is deterministic . , take the volatility and the intensity to be functions of the stock price and obtain a one - dimensional diffusion for the pre - default stock price evolution .using the fact that the resolvents of particular markov processes can be computed explicitly , they obtain pricing formulas for stock option prices . on the otherhand uses a cir stochastic volatility model and also models the intensity to be a function of the volatility and another endogenous cir factor .the option prices in this framework are computed numerically using inverse the fourier transform .we , on the other hand , use asymptotic expansions to provide explicit pricing formulas for stock options in a framework that combines a ) the vasicek interest rate model , b ) fast - mean reverting stochastic volatility model , c ) defaultable stock price model , d ) multi - scale stochastic intensity model .our calibration exercise differs from that of since they perform a time series analysis to obtain the parameters of the underlying factors ( from the the stock option prices and credit default swap spread time series ) , whereas we calibrate our pricing parameters to the daily implied volatility surface and bond term structure data .our purpose is to find a risk neutral model that matches a set of observed market prices .this risk neutral model can then be used to price more exotic , illiquid or over - the - counter derivatives . for further discussion of this calibration methodologywe refer to ( see chapter 13 ) , , and .we also provide daily prediction of the cds spread only using the data from the bond term structure and implied volatility surface of the options .the rest of the paper is organized as follows : in section 2 , we introduce our modeling framework and describe the credit and equity derivatives we will consider and obtain an expression for the cds spread under the assumption that the recovery rate of a bond that defaults is a constant fraction of its predefault value . in section 3 , we introduce the asymptotic expansion method .we obtain explicit ( asymptotic ) prices for bonds and equity options in section 3.3 . in section 4 , we describe the calibration of our parameters and discuss our empirical results . 
figures , which show our calibration results , are located after the references .let be a complete probability space supporting ( i ) correlated standard brownian motions , , with =\rho_i t , \quad { \mathbb{e}}[w^i_t , w^j_t]=\rho_{ij } t,\quad i , j \\in \{1,2,3,4\ } , \ ; t \geq 0,\ ] ] for some constants , and ( ii ) a poisson process independent of .let us introduce the cox process ( time - changed poisson process ) , , where in which are ( small ) positive constants and is a strictly positive , bounded , smooth function .we also assume that the functions and satisfy lipschitz continuity and growth conditions so that the diffusion process for has a unique strong solution .we model the time of default as we also take interest rate to be stochastic and model it as an ornstein - uhlenbeck process for positive constants , , and .we model the stock price as the solution of the stochastic differential equation where the volatility is stochastic and is defined through here , is a smooth , bounded function of one variable which represents the market price of volatility risk .the function is also a bounded , smooth function .note that the discounted stock price is a martingale under the measure , and at the time of default , the stock price jumps down to zero .the pre - banktruptcy stock price coincides with the solution of it will be useful to keep track of different flows of information .let be the natural filtration of .denote the default indicator process by , , and let be the filtration generated by .finally , let be an enlargement of such that , . since we will take and to be small positive constants , the processes and are fast mean reverting , and evolves on a slower time scale .see for an exposition and motivation of multi - scale modeling in the context of stochastic volatility models .we note that our specification of the intensity of default coincides with that of , who considered only a framework for pricing credit derivatives .our stock price specification is similar to that of and who considered a framework for only pricing equity options on defaultable stocks .our volatility specification , on the other hand , is in the spirit of . considered a similar modeling framework to the one considered here , but the interest rate was taken to be deterministic . in this paper , by extending this modeling framework to incorporate stochastic interest rates , we are able to consistently price credit and equity derivatives and produce more realistic yield curve and implied volatility surfaces .we are also able to take the equity option surface and the yield curve data as given and predict the credit default swap spread on a given day . testingour model prediction against real data demonstrates the power of our pricing framework . in our framework, we will price european options , bonds , and credit default swaps of the same company in a consistent way .the price of a european call option with maturity and strike price is given by \\&=1_{\{\tau > t\ } } \mathbb{e}\left[\exp\left(-\int_t^t ( r_s+\lambda_s ) ds\right)(x_t - k)^+\bigg| { \mathcal{f}}_t\right ] , \end{split}\ ] ] in which the equality follows from lemma 5.1.2 of .( this lemma , which lets us write a conditional expectation with respect to in terms of conditional expectations with respect to , will be used in developing several identities below ) . also , see and for a similar computation . 
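A hedged Monte Carlo sketch of the call-price identity above can make the structure concrete. The ingredients below (a Vasicek short rate, a fast mean-reverting OU volatility factor, an intensity driven by another OU factor through an exponential link) follow the spirit of the model, but the link functions, parameter values, and the omission of correlations are illustrative assumptions, not the paper's calibrated specification.

```python
# Hedged sketch of  C_0 = E[ exp(-int_0^T (r_s + lambda_s) ds) (X_T - K)^+ ].
# The pre-default drift r + lambda compensates the jump to zero so that the
# discounted defaultable stock is a martingale, as stated above.
import numpy as np

def call_price_mc(x0=100.0, k=100.0, t=1.0, r0=0.05, y0=-1.5, z0=-3.5,
                  n_paths=100_000, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # Vasicek rate, fast OU volatility factor Y (rate 1/eps), OU intensity factor Z
    a, b, sig_r = 1.0, 0.05, 0.01
    eps, m_y, nu_y = 0.02, -1.5, 0.5
    kap_z, m_z, nu_z = 0.5, -3.5, 0.5
    r = np.full(n_paths, r0)
    y = np.full(n_paths, y0)
    z = np.full(n_paths, z0)
    x = np.full(n_paths, x0)
    int_rl = np.zeros(n_paths)
    for _ in range(n_steps):
        lam = np.exp(z)            # default intensity (assumed exponential link)
        sig = np.exp(y)            # stochastic volatility (assumed exponential link)
        int_rl += (r + lam) * dt
        dw = rng.standard_normal((4, n_paths)) * np.sqrt(dt)
        x *= np.exp((r + lam - 0.5 * sig**2) * dt + sig * dw[0])
        r += a * (b - r) * dt + sig_r * dw[1]
        y += (m_y - y) / eps * dt + nu_y * np.sqrt(2.0 / eps) * dw[2]
        z += kap_z * (m_z - z) * dt + nu_z * dw[3]
    payoff = np.exp(-int_rl) * np.maximum(x - k, 0.0)
    return payoff.mean(), payoff.std() / np.sqrt(n_paths)

# price, stderr = call_price_mc()
```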
on the other hand , the price of a put option with the same maturity and strike price is +\mathbb{e}\left[\exp\left(-\int_t^t r_sds\right)k1_{\{\tau\leq t\}}\big| { \mathcal{g}}_t\right]\\ & = 1_{\{\tau > t\}}\bigg(\mathbb{e}\left[\exp\left(-\int_t^t ( r_s+\lambda_s ) ds\right)(k - x_t)^+\bigg| { \mathcal{f}}_t\right]\\&+k\mathbb{e}\left[\exp\left(-\int_t^t r_sds\right)\bigg|{\mathcal{f}}_t\right]-k\mathbb{e}\left[\exp\left(-\int_t^t ( r_s+\lambda_s)ds\right)\bigg|{\mathcal{f}}_t\right]\bigg ) .\end{split}\ ] ] * 2 . *consider a defaultable bond with maturity and par value of 1 dollar .we assume the recovery of the market value , introduced by . in this model , if the issuer company defaults prior to maturity , the holder of the bond recovers a constant fraction of the pre - default value , with ] and choose the point that minimizes ( [ eq : bond - lls ] ) .here , we take guided by the results of . * step 2 .estimation of from the equity option data * : these parameters are calibrated from the stock options data by a least - squares fit to the observed implied volatility .we choose the parameters to minimize in which and are observed black - scholes implied volatility and model black - scholes implied volatility , respectively .the right hand side of ( [ eq : impliedvolfit ] ) is from , page 439 . here , is the market price of a european option ( a put or a call ) that matures at time and with strike price and is the corresponding model price which is obtained from ( [ eq : apprx ] ) .as in , is the market implied black - scholes vega .let be either of ( [ eq : call - opt ] ) and ( [ eq : put - opt ] ) with and .let us introduce the greeks , , \end{split}\ ] ] in which each term can be explicitly evaluated ( see appendix ) .now from ( [ eq : apprx ] ) and the results of section [ sec : explct ] ( with ) , we can write first , let us fix the value of . then, from step 1 , we can infer the values of .now the fitting problem in ( [ eq : impliedvolfit ] ) is a linear least squares problem for .next , we vary $ ] and choose so that ( [ eq : impliedvolfit ] ) is minimized .let denote the approximation for the price at time of a defaultable bond that matures at time , and has loss rate ( see ( [ eq : apprx - bndprice ] ) ) .let be the price of a risk - free bond .then , the model implied cds spead with maturity is recall that we have already estimated all of the model parameters in section [ sec : parest ] using both corporate term structure data and the stock option implied volatility surface . therefore , using ( [ eq : imcdssp ] ) we can plot the model implied cds spread over time and compare it with the cds spread data available in the market .this is precisely what we do in figures [ fig : cds3yr ] and [ fig : cds5yr ] .we look at the time series and and compare them to the cds spread time series of the ford motor company .the match seems to be extremely good , which attests to the power of our modeling framework . 
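since the price approximation is linear in the correction parameters , the vega - weighted implied - volatility fit of step 2 above reduces to a weighted linear least - squares problem . the sketch below illustrates that reduction ; every array in it is a hypothetical placeholder rather than output of the expansion .

```python
# sketch of the second calibration step as a weighted linear least-squares fit;
# all inputs below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_quotes, n_params = 40, 3                       # e.g. three correction parameters

p_obs = rng.uniform(1.0, 10.0, n_quotes)         # observed option prices
p0 = p_obs + rng.normal(0.0, 0.2, n_quotes)      # leading-order model prices
greeks = rng.normal(size=(n_quotes, n_params))   # greek columns multiplying each parameter
vega = rng.uniform(5.0, 30.0, n_quotes)          # black-scholes vegas of the quotes

# implied-vol residual ~ (p_obs - p_model)/vega with p_model = p0 + greeks @ beta,
# so minimise || (p_obs - p0 - greeks @ beta) / vega ||^2 over beta:
a = greeks / vega[:, None]
b = (p_obs - p0) / vega
beta, *_ = np.linalg.lstsq(a, b, rcond=None)
print("fitted correction parameters:", beta)
```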
by varying in ( [ eq : imcdssp ] )we can obtain the model implied term structure of the cds spread .figure [ fig : cdssamples ] shows the range of shapes we can produce .we will compare how well our model fits the implied volatility against the model of , which does not account for the default risk and for the randomness of the interest rates .although , we only calibrate seven parameters ( hence we refer to our model as the 7-parameter model ) to the option prices ( see the second step of the estimation in section [ sec : parest ] ) , we have many more parameters than the model of , which only has four parameters ( we refer to this model as the 4-parameter model ) . therefore ,for a fair comparison , we also consider a model in which the volatility is a constant . in this case , as we shall see below , there are only three parameters to calibrate to the option prices , therefore we call it the 3-parameter model .* constant volatility model * in this case , we take in the expression for in corollary [ cor : p0 ] .the expression for remains the same as before .however , simplifies to this model has only three parameters , that need to be calibrated to the options prices , as opposed to the 4-parameter model of .as it can be seen from figure [ fig : impvolford ] as expected our 7-parameter model outperforms the 4-parameter model of as expected and fits the implied volatility data well .but , what is surprising is that the 3-parameter model , which does not account for the volatility but accounts for the default risk and stochastic interest rate , has almost the same performance as the 7-parameter model .the 7-parameter model has a very rich implied volatility surface structure , the surface has more curvature than that of the 4-parameter model of , whose volatility surface is more flat ; see figures [ fig : imp7p ] and [ fig : impvfouque ] .( the parameters to draw these figures are obtained by calibrating the models to the data implied volatility surface on june 8 2007 . ) the 7-parameter model has a recognizable skew even for longer maturities and has a much sharper skew for shorter maturities .the purpose of this section is to show the importance of accounting for stochastic interest rates in fitting the implied volatility surface .interest rate changes should , indeed , be accounted for in pricing long maturity options .when we price index options , we set and our approximation in ( [ eq : apprx ] ) simplifies to in which is given by corollary [ cor : p0 ] after settiing , and note that the difference of ( [ eq : indx ] ) with the model of is that the latter allows for a slow evolving volatility factor to better match the implied volatility at the longer maturities .this was an improvement on the model of , which only has a fast scale component in the volatility model .we , on the other hand , by accounting for stochastic interest rates , capture the same performance by using only a fast scale volatility model . from figure[ fig : spx ] , we see that both ( [ eq : indx ] ) and outperform the model of , especially at the longer maturities ( 9 months , 1 year , 1.5 years and 2 years ) , and their performance is very similar . this observation emphasizes the importance of accounting for stochastic interest rates for long maturity contracts .when , we can explicitly express the greeks in ( [ eq : greeks ] ) in terms of as .\end{aligned}\ ] ]
|
we propose a model which can be jointly calibrated to the corporate bond term structure and equity option volatility surface of the same company . our purpose is to obtain explicit bond and equity option pricing formulas that can be calibrated to find a risk neutral model that matches a set of observed market prices . this risk neutral model can then be used to price more exotic , illiquid or over - the - counter derivatives . we observe that the model implied credit default swap ( cds ) spread matches the market cds spread and that our model produces a very desirable cds spread term structure . this observation is worth noting since , without calibrating any parameter to the cds spread data , the market spread is matched by the cds spread that our model generates using the available information from the equity options and corporate bond markets . we also observe that our model matches the equity option implied volatility surface well since we properly account for the default risk premium in the implied volatility surface . we demonstrate the importance of accounting for the default risk and stochastic interest rate in equity option pricing by comparing our results to , which only accounts for stochastic volatility . * keywords : * credit default swap , defaultable bond , defaultable stock , equity options , stochastic interest rate , implied volatility , multiscale perturbation method .
|
we consider a linear system with endogenous and exogenous variables . with a sample of observations from this system ,we denote by and the observed values of endogenous and exogenous variables , respectively .the interactions between endogenous variables and direct causal effects by exogenous variables can be described by a system of structural equations , where the matrix has zero diagonal elements and contains regulatory effects , the matrix contains causal effects , and is an matrix of error terms .we assume that and are independent of each other , and each component of is independently distributed as normal with zero mean while rows of are identically distributed . with gene expression levels and genotypic values as endogenous and exogenous variables , respectively , model ( [ eqn - fullinfo ] ) has been used to represent a gene regulatory network with each equation modeling the regulatory effects of other genes and causal effects of cis - eqtl ( i.e. , expression quantitative trait loci located within the region of their target gene ) on a given gene , see , , , and , among others .genetical genomics experiments have been widely undertaken to obtain genome - wide gene expressions and genotypic values . however , fitting a system of structural equations in ( [ eqn - fullinfo ] ) to genetical genomics data for the purpose of revealing a whole - genome gene regulatory network is still hindered by lack of an effective statistical method which addresses issues brought by large numbers of endogenous and exogenous variables .several efforts have been put to construct the system ( [ eqn - fullinfo ] ) with genetic genomics data . proposed to use a genetic algorithm to search for genetic networks which minimize the akaike information criterion ( aic ; ) , and instead proposed to minimize the bayesian information criterion ( bic ; ) and its modification for the optimal genetic networks .both aic and bic are applicable to inferring networks for only a small number of endogenous variables . for a large system with many endogenous and exogenous variables, proposed to maximize a penalized likelihood to construct a sparse system .however , it is computationally prohibitive to fit a large system based on the likelihood function of the complete model . instead proposed to apply the adaptive lasso to fitting each structural equation separately , then recover the network relying on additional assumption on unique exogenous variables .however , demonstrated its inferior performance via simulation studies , which is consistent with our conclusion . 
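as a concrete illustration of the model , the sketch below simulates data from one common parameterization of the system , y = y gamma + x b + e with an acyclic gamma ( so that y = ( x b + e )( i - gamma )^{-1} ) , one cis - eqtl per gene with unit effect , and genotypes coded 0/1/2 as in an f2 cross ; all dimensions and effect sizes are illustrative only .

```python
# small simulation sketch of a structural equation system for a gene network;
# dimensions, sparsity and effect sizes are illustrative choices.
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 30                        # observations, endogenous variables (genes)

# sparse upper-triangular gamma (zero diagonal) -> acyclic regulatory effects
gamma = np.triu(rng.uniform(0.5, 1.0, (p, p)) * (rng.random((p, p)) < 0.05), k=1)

# one cis-eqtl per gene with unit effect (unique exogenous variable per equation)
x = rng.choice([0, 1, 2], size=(n, p), p=[0.25, 0.5, 0.25]).astype(float)
b = np.eye(p)

e = rng.normal(0.0, 0.5, (n, p))
y = (x @ b + e) @ np.linalg.inv(np.eye(p) - gamma)
print("simulated expression matrix:", y.shape)
```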
instead of the full information model specified in ( [ eqn - fullinfo ] ) ,we here seek to establish the large system via constructing a large number of limited information models , each for one endogenous variable .for example , when -th endogenous variable is concerned , we can focus on the -th structural equation in ( [ eqn - fullinfo ] ) which models the regulatory effects of other endogenous variables and direct causal effects of exogenous variables , however , the system structures contained in other structural equations are skipped , leading to the following limited - information model , here refers to excluding the -th column , refers to the -th column of excluding the diagonal zero , and and refer to the -th columns of and respectively .the second part of the model ( [ eqn - limitedinfo ] ) is from the following reduced model by excluding the -th equation , with and , in a classical low - dimensional setting , it is well known that a two - stage least squares ( 2sls ) method can produce consistent estimates of the parameters when the system is identifiable .however , as in a typical genetical genomics experiment , we here are interested in constructing a large system with the number of endogenous variables much larger than the sample size .such a high - dimensional and small sample size data set makes it infeasible to directly apply 2sls method .indeed , results in perfect fits of reduced form equations at the first stage , which implies to regress against the observed values of endogenous variables at the second stage and therefore obtain ordinary least squares estimates of the parameters .it is well known that such ordinary least squares estimates are inconsistent .furthermore , constructing a large system demands , at the second stage , selecting regulatory endogenous variables among massive candidates , i.e. , variable selection in fitting high - dimensional linear models . herewe propose a two - stage penalized least squares ( 2spls ) method to address the challenges in establishing system ( [ eqn - fullinfo ] ) in the case .the method fits one regularized linear model for each endogenous variable at each stage . at the first stage ,the penalty is employed to obtain consistent estimates of a set of well - defined surrogate variables which allow to separately investigate individual structural models and consistently estimate all regulatory effects for each endogenous variable . at the second stage ,each endogenous variable is regressed against the estimates of surrogate variables , and the penalty is employed to identify regulatory variables among massive candidates .the use of regularization techniques helps avoid overfitting at the first stage and allows to exploit sparse structure of the system at the second stage .we show that the resultant estimates of regulatory effects enjoy the oracle properties .the proposed method addresses three challenging issues in constructing a large system of structural equations , i.e. 
, memory capacity , computational time , and statistical power .first , the limited information models are employed to develop the algorithm so as to avoid managing the full information models which may consist of many subnetworks and involve a massive number of endogenous variables .second , allowing to fit one linear model for each endogenous variable at each stage makes the algorithm computationally fast .it also makes it feasible to parallel the large number of model fittings at each stage .third , the oracle properties of the resultant estimates show that the proposed method can achieve optimal power in identifying and estimating regulatory effects .furthermore , the efficient computation makes it feasible to use the bootstrap method to evaluate the significance of regulatory effects for small data sets .the rest of this paper is organized as follows .we first state an identifiable model in the next section .provided in section [ sec-2sls ] is a new view on the classical 2sls method , which motivates our development of the 2spls method in section [ sec-2spls ] . in section[ sec - theory ] , we show that the estimates from 2spls have the oracle properties with the proof included in the appendix .simulation studies are carried out in section [ sec - simu ] to evaluate the performance of 2spls .an application to a real data set to infer a yeast gene regulatory network is presented in section [ sec - rdata ] .we conclude this paper with a discussion in section [ sec - disc ] .we follow the practice of constructing system ( [ eqn - fullinfo ] ) in analyzing genetic genomics data , and assume that each endogenous variable is affected by a unique set of exogenous variables .that is , the structural equation in ( [ eqn - limitedinfo ] ) has known zero elements of .explicitly , we use to denote the set of row indices of known nonzero elements in . then we have known sets , which dissect the set .we explicitly state this assumption in the below .: : * assumption a. * for , but as long as .the above assumption indeed satisfies the rank condition , which is a sufficient condition for model identification .since each has a set of known zero components , hereafter we ignore them and simply rewrite the structural equation in model ( [ eqn - limitedinfo ] ) as , and are correlated , fitting solely model ( [ eqn - ykstructural ] ) results in biased estimates of and . however , we notice that the following two sets of variables are independent , = \mathbf{x}\boldsymbol{\pi}_{-k},\\ \boldsymbol{\varepsilon}_k = \boldsymbol{\epsilon}_k+\boldsymbol{\xi}_{-k}\boldsymbol{\gamma}_k . \end{array}\right.\end{aligned}\ ] ] consequently , consistent estimates of and can be obtained by applying least squares method to the following model , when regulatory effects are considered , = \mathbf{x}\boldsymbol{\pi}_j : j=1 , 2 , \cdots , p\}$ ] serves as a set of surrogate variables which can help estimate both and in model ( [ eqn - fullinfo ] ) . in practice, is unknown as it involves unknown .suppose we instead have a consistent estimate of , i.e. , let , and further take the following assumption .: : * assumption b. 
* , where is a positive definite matrix .it is easy to see that when replacing with in model ( [ eqn - idealmodel ] ) , we obtain the following least squares estimators of and , certainly , the properties in ( [ eqn - asymz ] ) imply that the above estimators approach to the least squares estimators of fitting model ( [ eqn - idealmodel ] ) , which are also consistent .[ thm-2sls ] suppose assumptions a and b are satisfied for the system ( [ eqn - fullinfo ] ) with fixed and .when there exists a consistent estimator of , the ordinary least squares estimators of obtained by regressing against are also consistent .when a -consistent least squares estimator of is obtained by fitting each equation in ( [ eqn - reducedform ] ) for , the resultant estimators of and are exactly the 2sls estimators by and . in the following ,we consider to construct the system ( [ eqn - fullinfo ] ) in the case that .such a high - dimensional and small sample size data set makes it infeasible to directly apply the 2sls method .to construct the limited - information model ( [ eqn - limitedinfo ] ) , we can obtain consistent estimates of the surrogate variables by fitting high - dimensional linear models , and then conduct a high - dimensional variable selection following our view on model ( [ eqn - idealmodel ] ) . hence we propose a two - stage penalized least squares ( 2spls ) procedure to construct each model in ( [ eqn - limitedinfo ] ) so as to establish the large system ( [ eqn - fullinfo ] ) . at the first stage, we use the ridge regression to fit each reduced - form model in ( [ eqn - reducedform ] ) to obtain consistent estimates of the surrogate variables .that is , for each , we obtain the ridge regression estimator of by minimizing the following penalized sum of squares where is a tuning parameter that controls the strength of the penalty .the solution to the minimization problem is , which leads to a consistent estimate of , where . with a proper choice of ,ridge regression has very good prediction performance as shown in the next section . at the second stage, we replace with in model ( [ eqn - idealmodel ] ) to derive estimates of and . specifically , we minimize the following penalized error squares to obtain estimates of and , where implies to componentwisely take absolute values of , is a known weight vector , and is a tuning parameter . minimizing for in ( [ est - gammapsi ] ) leads to where is usually of low dimension , and the above least squares estimator of is easy to obtain .plugging into ( [ est - gammapsi ] ) , we can solve the following minimization problem to obtain an estimate of , this is equivalent to a variable selection problem in regressing against high - dimensional .we will resort to adaptive lasso to select nonzero components of and estimate them . specifically , picking up a and obtaining as a -consistent estimate of , we calculate the weight vector with components inversely proportional to components of . the above minimization problem ( [ est - gamma ] ) is a convex optimization problem which is computationally efficient . in this method, we need to select tuning parameters at each stage . 
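before turning to the choice of the tuning parameters , the sketch below gives a minimal version of the two stages for a single endogenous variable ; the penalty levels are fixed ad hoc , the adaptive - lasso weights come from a ridge pilot fit rather than a particular consistent estimator , and off - the - shelf sklearn solvers stand in for the exact penalized criteria above .

```python
# minimal sketch of the two-stage penalized least squares fit for one
# endogenous variable y_k; penalty levels tau and lam are ad hoc here.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

def two_stage_pls(y, x, k, cis_cols, tau=1.0, lam=0.05):
    n, p = y.shape
    # stage 1: ridge fit of each reduced-form equation, giving surrogates y_hat = x pi_hat
    y_hat = np.column_stack([
        Ridge(alpha=tau, fit_intercept=False).fit(x, y[:, j]).predict(x)
        for j in range(p)
    ])
    z = np.delete(y_hat, k, axis=1)          # surrogate regressors y_hat_{-k}
    xk = x[:, cis_cols]                      # known exogenous variables for variable k
    # profile out xk from the response and from the surrogates
    proj = np.eye(n) - xk @ np.linalg.pinv(xk)
    yk_t, z_t = proj @ y[:, k], proj @ z
    # stage 2: adaptive lasso via column rescaling with ridge-pilot weights
    pilot = Ridge(alpha=tau, fit_intercept=False).fit(z_t, yk_t).coef_
    w = 1.0 / (np.abs(pilot) + 1e-8)
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=50000).fit(z_t / w, yk_t)
    gamma_k = fit.coef_ / w
    # plug gamma_k back in to recover the exogenous effects by least squares
    psi_k = np.linalg.pinv(xk) @ (y[:, k] - z @ gamma_k)
    return gamma_k, psi_k
```

with the y and x simulated in the earlier sketch , two_stage_pls(y, x, k=0, cis_cols=[0]) returns the estimated regulatory effects of the remaining endogenous variables on the first one together with its exogenous effect .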
at the first stage, we propose to choose each in ( [ est - pi ] ) by the method of generalized cross - validation ( gcv ; ) , that is , it is a rotation - invariant version of ordinary cross - validation , and leads to an approximately optimal estimate of the surrogate variable .at the second stage , the tuning parameter in ( [ est - gamma ] ) is obtained via -fold cross validation .as an extension of the classical 2sls method to high dimensions , the proposed 2spls method also has some good theoretical properties . in this section , we will show that the 2spls estimates enjoy the oracle properties . as the second - stage estimation replies on the ridge estimates obtained from the first stage , we first discuss the theoretical properties on , which provide guarantee for the oracle properties of our proposed estimates . as aforementioned , each in ( [ est - pi ] )is obtained by the method of generalized cross - validation .interestingly , as stated by , obtained by gcv is closely related to the one minimizing indeed , the following result follows theorem 2 of .[ thm - gcv ] suppose that all components of are i.i.d .with mean zero and variance , then \right ] = \arg\min_{\tau>0 } e\left[e[t_j(\tau)|\boldsymbol{\pi}_j]\right ] = \frac{\sigma_{\boldsymbol{\xi}_j}^2}{\sigma_{\boldsymbol{\pi}}^2},\ ] ] where is the variance component of in model ( [ eqn - limitedinfo ] ) .this theorem implies that the gcv estimate is approximately the optimal estimate of the surrogate variable .furthermore , as the optimal tuning parameter approximates a constant determined by the variance components ratio , hereafter we take the following assumption on . : : * assumption c. * as , for . denote , we then have the following properties on .[ thm - ridgeest ] for , let where each is a submatrix of identified with row indices in and column indices in ( the dot implies all rows or columns ) . then , under assumptions a , b , and c , 1 . , as ; 2 . , as . since , theorem [ thm - ridgeest].a states that is a good approximation to . on the other hand , is the error term in regressing against , and theorem [ thm - ridgeest].b implies that .thus results in regression errors with good properties , i.e. , the error effects on the 2spls estimators will vanish when the sample size gets sufficiently large . in summary, the above theorem indicates that behaves the same way as asymptotically , which makes it feasible to replace with at the second stage .the crucial properties of in theorem [ thm - ridgeest ] , together with the good theoretical properties of adaptive lasso , will lead to the oracle properties of our proposed estimates .we denote the -th elements of and as and , respectively .then , with a proper choice of , the proposed method enjoys the following oracle properties .( oracle properties ) [ thm - oracle ] let , , and be the submatrix of identified with both row and column indices in . suppose that and .then , under assumptions a , b , and c , the estimates from the proposed 2spls method satisfy the following properties , 1 . 
consistency in variable selection : ; 2 .asymptotic normality : , as .it is worthwhile to mention that theorem [ thm - ridgeest ] plays an essential role in establishing the oracle properties of 2spls .in fact , as long as the properties in theorem [ thm - ridgeest ] hold true for the first - stage estimates of , we can generalize the second - stage regularization to a wide class of regularization methods , all the theoretical properties of which can be inherited by our proposed two - stage method .we conducted simulation studies to compare 2spls with the adaptive lasso based algorithm ( al ) by , and the sparsity - aware maximum likelihood algorithm ( sml ) by . both acyclic networks and cyclic networks were simulated , each involving endogenous variables .each endogenous variable was simulated to have , on average , one regulatory effect for sparse networks , or three regulatory effects for dense networks .the regulatory effects were independently simulated from a uniform distribution over . to allow the use of al and sml , every endogenous variable in the same network was simulated to have the same number ( either one or three ) of known causal effects by the exogenous variables , with all effects equal to one .each exogenous variable was simulated to take values 0 , 1 and 2 with probabilities 0.25 , 0.5 and 0.25 , respectively , emulating genotypes of an f2 cross in a genetic genomics experiment .all error terms were independently simulated from , and the sample size varied from to . for each network setup, we simulated 100 data sets and applied all three algorithms to calculate the power and false discovery rate ( fdr ) . for inferring acyclic networks ,the power and fdr of the three different algorithms are plotted in figure [ figure - acycliclarge ] . in the casethat each endogenous variable has only one known exogenous effect ( ee ) , 2spls has the greatest power to infer both sparse and dense acyclic networks from data sets with different sample sizes . in the case of three ees available for each endogenous variable ,2spls still has greater power than the other two algorithms when the sample size is small or moderate . when the sample size is large , 2spls and sml are comparable for constructing both sparse and dense acyclic networks . in any case ,2spls and sml provide much greater power than al .indeed , al provides power as low as under when the sample size is not large , and its power is still under even when the sample size increases to . on the other hand ,2spls provides power over for small sample sizes , and over for moderate to large sample sizes .power of sparse networks \b .fdr of sparse networks \c .power of dense networks \d .fdr of dense networks as shown in figure [ figure - acycliclarge ] , 2spls controls the fdr under except the case with three available ees and very small sample sizes ( ) . while it controls the fdr as low as under for sparse acyclic networks when the sample sizes are large , sml reports large fdrs when the sample sizes are not large .indeed , when the sample sizes are under 200 , sml reports fdr over for dense acyclic networks . in general , both 2spls and sml outperform al in terms of fdr though al reports fdr lower than 2spls when inferring sparse acyclic networks with one available ee from data sets of very large sample sizes .plotted in figure [ figure - cycliclarge ] are the power and fdr of the three different algorithms when inferring cyclic networks . 
similar to the results on acyclic networks , 2spls has greater power than the other two algorithms across all sample sizes and has lower fdr when the sample size is not large . for dense cyclic networks, al has power mostly under and fdr over . while it improves the fdr for sparse cyclic networks with large sample sizes, al has power as low as under .sml provides power competitive to 2spls for sparse cyclic networks , but its power is much lower than that of 2spls for dense cyclic networks .similar to the case of acyclic networks , sml reports much higher fdr for inferring dense networks from data sets with small sample sizes though it reports very small fdr when the sample sizes are large .we also conducted simulation studies on both acyclic and cyclic networks with small to moderate number of endogenous variables ( e.g. , 10 to 50 endogenous variables ) .the performance of 2spls is better than al and comparable to sml in those scenarios ( results are not shown ) .indeed , the power of 2spls exceeds while maintaining low fdr in most of the scenarios .power of sparse networks \b .fdr of sparse networks \c .power of dense networks \d .fdr of dense networks while it generally reports higher power and more robust fdr than sml , 2spls significantly reduces the computation time in comparison to sml as it assembles the network by investigating limited - information models . to demonstrate such advantage of 2spls over sml , we recorded the computing time of all algorithms in inferring the same networks from small data sets ( ) .each algorithm analyzed the same data set using only one cpu in a server with quad - core amd opteron processor 8380 .reported in table [ table - runtime ] are the running times of the three algorithms for inferring different networks .apparently , al is the fastest , and the running time of 2spls usually doubles or triples that of al .the slowest algorithm is sml which generally takes more than 40 times longer than 2spls to infer different networks . in particular , sml is almost 200 times slower than 2spls when inferring acyclic sparse networks ..the running time ( in seconds ) of inferring networks from a data set with . [ cols="^,^,^,^,^,^,^,^,^ " , ]we analyzed a yeast data set with 112 segregants from a cross between two strains by4716 and rm11-la .a total of 5,727 genes were measured for their expression values , and 2,956 markers were genotyped . each marker within a genetic region ( including 1 kb upstream and downstream regions )was evaluated for its association with the corresponding gene expression , yielding 722 genes with marginally significant cis - eqtl ( -value ) .the set of cis - eqtl for each gene was filtered to control the pairwise correlation under , and then further filtered to keep up to three cis - eqtl which have the strongest association with the corresponding gene expression . with 112 observations of 722 endogenous variables and 732 exogenous variables, we applied 2spls to infer the gene regulatory network in yeast .the constructed network includes 7,300 regulatory effects in total . to evaluate the reliability of constructed gene regulations, we generated 10,000 bootstrap data sets ( each with ) by randomly sampling the original data with replacement , and applied 2spls to each data set to infer the gene regulatory network . among the 7,300 regulatory effects ,323 effects were repeatedly identified in more than 80% of the 10,000 data sets , and figure [ figure - subnetwork ] shows the three largest subnetworks formed by these 323 effects . 
specifically , the largest subnetwork consists of 22 endogenous variables and 26 regulatory effects , the second largest one includes 14 endogenous variables and 18 regulatory effects , and the third largest one has 11 endogenous variables and 16 regulatory effects .a. b. c. a gene - enrichment analysis with david showed that the three subnetworks are enriched in different gene clusters ( controlling -values from fisher s exact tests under ) .a total of six gene clusters are enriched with genes from the first subnetwork , and four of them are related to either methylation or methyltransferase .six of 22 genes in the first subnetwork are found in a gene cluster which is related to none - coding rna processing .the second subnetwork is enriched in nine gene clusters .while three of them are related to electron , one cluster includes half of the genes from the second subnetwork and is related to oxidation reduction .the third subnetwork is also enriched in nine different gene clusters , with seven clusters related to proteasome .a total of 18 regulations were constructed from each of the 10,000 bootstrap data sets , and are shown in figure [ figure - yeastedge ] .there are seven pairs of genes which regulate each other .it is interesting to observe that all regulatory genes up regulate the target genes except two genes , namely , ycl018w and yel021w . the yeast gene regulatory subnetworks constructed in each of 10,000 bootstrap data sets ( with arrow- and bar - headed lines implying up and down regulations , respectively).,width=480,height=168 ]in a classical setting with small numbers of endogenous / exogenous variables , constructing a system of structural equations has been well studied since . first proposed to estimate the parameters of a single structural equation with the limited information maximum likelihood estimator .later on , and independently developed the 2sls estimator , which is the simplest and most common estimation method for fitting a system of structural equations .however , the genetical genomics experiments usually collect data in which both the number of endogenous variables and the number of exogenous variables can be very large , invalidating the classical methods for building gene regulatory networks . replacing the ordinary least squares at the two stages with ridge regression and adaptive lasso respectively , the proposed 2spls method can consistently identify and further estimate the regulatory effects of the endogenous variables , even with a large number of endogenous variables . as a high - dimensional extension of the classical 2sls method , the 2spls method is also computationally fast and easy to implement . as shown in constructing a genome - wide gene regulatory network of yeast , the high computational efficiency of 2spls allows us to employ the bootstrap method to calculate the -values of regulatory effects .meanwhile , each of the two steps , especially the second one , may be further improved by incorporating recent progresses in high - dimensional variable selection , see , for example , , , and .\a . since for any , the different choice of for each does not affect the following asymptotic property involving , without loss of generality , we assume . then . we will consider the asymptotic property of each of the above four terms .first , implies that the above result and ( [ asy - xxtau ] ) easily lead to the following result , the other three terms approaching to zero directly follows that .thus , . 
\b .since , we have in the following , we will prove that the second term approaches to zero , and the first term asymptotically approaches to the required distribution , i.e. , we notice that following ( [ asy - mk ] ) , we have because of ( [ asy - xhx ] ) and we have since , we can apply slutsky s theorem and obtain that pooling the above result and ( [ asy - ehpxpi ] ) leads to the asymptotic distribution in ( [ asy - ehpx ] ) . to prove that the second term asymptotically approaches to zero , we further partition it as follows , it suffices to prove each of these four parts asymptotically approaches to zero .first , notice that we have which follows ( [ asy - xhx ] ) and that as . because , we have which implies that since var is proportional to an identity matrix , the above result leads to that which implies that similarly, we can prove that , for each , which implies that note that since we have therefore , which , together with , leads to that pooling ( [ asy - xiphpxpi ] ) , ( [ asy - gxiphpxpi ] ) , ( [ asy - gpixiphpx ] ) and ( [ asy - gxiphpx ] ) , we have proved that , which concludes our proof .let . let , then or . note that , where denote the -th elements of and as and , respectively . if , then and . by slutsky s theorem, we have . if , then and , where .thus , hence , following theorem [ thm - ridgeest ] and slutsky s theorem , we see that for every , where is convex , and the unique minimizer of is . following the epi - convergence results of and ,we have since , we indeed have proved the asymptotic normality .now we show the consistency in variable selection . for , the asymptotic normality indicates that , thus .then it suffices to show that , . when , by the kkt normality conditions , we know that .note that , whereas .following theorem [ thm - ridgeest ] and the asymptotic normality , asymptotically follows a normal distribution .thus , . then we have proved the consistency in variable selection .akaike , h. ( 1974 ) , a new look at the statistical model identification , " , 19 , 716 - 723 .schadt , e. e. , monks , s. a. , drake , t. a. , lusis , a. j. , che , n. , colinayo , v. , ruff , t. g. , milligan , s. b. , lamb , j. r. , cavet , g. , linsley , p. s. , mao , m. , stoughton , r. b. , and friend , s. h. ( 2003 ) , genetics of gene expression surveyed in maize , mouse and man , " , 422 , 297 - 302 .
|
linear systems of structural equations have been recently investigated to reveal the structures of genome - wide gene interactions in biological systems . however , building such a system usually involves a huge number of endogenous variables and even more exogenous variables , and hence demands a powerful statistical method which limits memory consumption and avoids intensive computation . we propose a two - stage penalized least squares method to build large systems of structural equations . fitting one linear model for each endogenous variable at each stage , the method employs the penalty at the first stage to obtain consistent estimation of a set of well - defined surrogate variables , and the penalty at the second stage to consistently select regulatory endogenous variables among massive candidates . without fitting a full information model , the method is computationally fast and allows for parallel implementation . the resultant estimates of the regulatory effects enjoy the oracle properties , that is , they perform as well as if the true regulatory endogenous variables were known . we also demonstrate the effectiveness of the method through simulation studies , showing its improvements over other methods . our method was applied to construct a yeast gene regulatory network with genetical genomics data . * two - stage penalized least squares method for constructing large systems of structural equations * _ keywords : _ graphical model ; high - dimensional data ; reciprocal graphical model ; simultaneous equation model ; structural equation model .
|
let be a sample of independent , identically distributed random variables with an absolutely continuous distribution function ( cdf ) and density .the kernel estimator of ( kdfe ) at a point is where is the kernel , with being symmetric about the origin and integrating to unity , and is the bandwidth sequence which determines the degree of smoothing . the empirical distribution function ( edf ) can be obtained as a special case of with , viz .hereinafter kdfe will refer to with to distinguish it from the edf .it is well known that under very mild conditions is a uniformly strongly consistent and asymptotically normal estimator of .relative to edf , kdfe is an asymptotically more efficient estimator in the mean integrated squared error and hodges - lehmann sense .smoothness of the kernel estimates and the reduction in mise are the two main reasons to prefer kdfe .it is also reasonable to expect that replacing the edf with the kdfe will improve performance of the resultant estimators and test statistics .examples include quantile estimation and smoothed cdf - based goodness - of - fit and symmetry tests , to mention just a few scenarios wherein there are benefits to using kdfe rather than the edf or the corresponding ( kernel ) density estimators .the mean integrated squared error , =\operatorname{e}[\int_{-\infty}^{\infty}\{\widehat{f}(x;h)-f(x)\}^{2}dx] ] over . since and hence ] and = \int_{-\infty}^{\infty}\operatorname{var}[\widehat{f}_{2r}(x;h)]dx ] .let denote the odd factorial , i.e. for , , , , and for odd , .[ thm : exact.mise ] ( exact mise ) + let be a random sample from a normal mixture distribution , and be the -order gaussian - based kernel .then for , , = -\sum_{s=0}^{r-1}\sum_{t=0}^{r-1}\frac{(-1)^{s+t}}{2^{s+t}s!t!}v(h;s+t,2 ) + 2\sum_{s=0}^{r-1}\frac{(-1)^{s}}{2^{s}s!}v(h;s,1 ) - v(h;0,0),\ ] ] = -\frac{h}{n\sqrt{\pi}}c(r ) + \frac{1}{n}\sum_{s=0}^{r-1}\sum_{t=0}^{r-1}\frac{(-1)^{s+t}}{2^{s+t}s!t!}v(h;s+t,2),\ ] ] where , and where denotes the regularized incomplete beta function and it is understood that the sum over in is zero when . the proof of the theorem , given in appendix [ app : proofs ] , is based on the convolution formulae in which apply after a judiciously chosen change of coordinates .alternative , computationally convenient expressions for and ] follows from theorem 1 in ) .calculations reported in section [ sec : analysis.of.mise ] were performed by first obtaining the optimal bandwidth for the second order kernel and then using it as the starting value for , and so on .a special case of theorem [ thm : exact.mise ] worth stating separately is the second order gaussian kernel which is commonly used in practice .since , with the expressions simplify to = -u(h;2 ) + 2u(h;1 ) - u(h;0) ] , where .\ ] ] expressions in theorem [ thm : exact.mise ] are also valid with , which corresponds to the edf . in this case , where = \int_{-\infty}^{\infty}f(x)\left[1-f(x)\right]dx,\ ] ] and we recover the well - known result that = 0 ] .the constant depends on the kernel only via the quantity .it is evident from that for gaussian kernels for all , and this property implies that asymptotically the kdfe provides a second order improvement in mise relative to edf ( via the second term in ) . as , ( appendix [ app : proofs ] ) , and in particular , as ( cf .second term in ) .let be a general symmetric order kernel , i.e. satisfies , for , and , where . 
then , under the standard smoothness and integrability conditions on which are satisfied by nm distributions , as , = \frac{1}{n}v_{0}-\frac{h}{n}\psi_{1}(k)+\frac{\mu_{2r}(k)^{2}}{(2r)!^{2}}r(f^{(2r)})h^{4r}+o(hn^{-1}+h^{4r}),\ ] ] where .for the nm distribution and gaussian - based kernel , , and .thus , the asymptotically optimal bandwidth is exact mise of a kdfe with the sinc kernel has been derived in and . for the nm distributionthe absolute square of the characteristic function is \mathrm{e}^{-(\sigma_{i}^{2}+\sigma_{j}^{2})t^{2}/2} ] .otherwise , numerical integration techniques such as the gauss - kronrod quadrature can be used to evaluate .the optimal bandwidth solves . for the normal distributionthe solution is . in general , however , there does not appear to be a way of obtaining a closed form solution for , and it has to be found using numerical techniques with the caveat that the solution may not be unique ( existence of a global minimiser of has been established in ( * ? ? ?* thm.3 ) ) ; see also related discussion in .this section provides a brief analysis of mise using the fifteen nm distributions shown in figure [ fig : mwnmdens1 - 15 ] as examples ( see table 1 of mw for the definitions of these mixtures ) .since mise itself is not a unitless quantity , it is natural to perform comparisons relative to the mise of the edf ; hereinafter the relative mise , in percentages .indeed , if mise of a kernel estimator is larger than that of the edf , kdfe loses much of its appeal , even though a case can still be made for the benefits offered by smoothness alone .on the other hand , if an estimator achieves ( or is reasonably close to ) the infeasible minimum mise , ^{-1}dt\ ] ] ( * ? ? ? * proposition 2 ) , one can be satisfied that no further improvements are possible ( or are of practical interest ) .relative is shown as dashed lines in figure [ fig : reloptmise.1 ] ( left vertical axes ) .one immediate observation to be made is that for some distributions the best achievable reduction in mise is quite small ; e.g. for distributions # 3&4 and sample sizes more than about one thousand , no more than 2 - 3% reduction is possible . nonetheless , for the small sample sizes the available improvement in mise is substantial .of course , any such improvement comes from a decrease in variance at the cost of introducing a non - zero bias .[ cols="^,^,^ " , ] the ( solid line , right vertical axes ) and ( solid line , left vertical axes ) in figure [ fig : reloptmise.1 ] show the optimal and the resultant relative mise for the class of gaussian - based kernels of order .optimisation was performed over for a given and then over for a sufficiently large pre - specified . the kernel order necessary to achieve the best mise generally increases with the sample size , butnot necessarily in a monotone fashion . for the relatively ` uninteresting ' , i.e. close to normal distributions ( # 15 ) , the gaussian - based kernels offer performance remarkably close to the best achievable ( infeasible ) mise ; the largest difference ( shown by dimension lines ) is less than 1% for the gaussian and skewed unimodal distributions , and between 2.7 and 4.5% for the strongly skewed , kurtotic unimodal and outlier distributions . 
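to make the estimator concrete , the sketch below evaluates the kdfe with a gaussian - based kernel of order 2r . it assumes the wand - schucany form of the kernel , k_{2r}(u) = sum_{s=0}^{r-1} (-1)^s/(2^s s!) phi^{(2s)}(u) , whose antiderivative is phi(u) minus a finite hermite - polynomial correction ; the bandwidth in the example is arbitrary rather than mise - optimal .

```python
# sketch of the order-2r gaussian-based kernel cdf G_{2r} and the kdfe, assuming
# the wand-schucany kernel form; its antiderivative is
# Phi(u) - phi(u) * sum_{s=1}^{r-1} (-1)^s/(2^s s!) He_{2s-1}(u),
# with He_m the probabilists' hermite polynomials.
import numpy as np
from math import factorial
from scipy.stats import norm
from numpy.polynomial.hermite_e import hermeval

def g2r(u, r=2):
    out = norm.cdf(u)
    for s in range(1, r):
        he = hermeval(u, [0.0] * (2 * s - 1) + [1.0])   # He_{2s-1}(u)
        out -= (-1.0) ** s / (2.0 ** s * factorial(s)) * he * norm.pdf(u)
    return out

def kdfe(x, sample, h, r=2):
    """smoothed distribution function estimate at the points x."""
    x = np.atleast_1d(x)
    return g2r((x[:, None] - sample[None, :]) / h, r=r).mean(axis=1)

# quick check on a simulated normal sample (bandwidth chosen arbitrarily here)
rng = np.random.default_rng(3)
sample = rng.standard_normal(200)
print(kdfe([-1.0, 0.0, 1.0], sample, h=0.4, r=2))
```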
for distributions with more complicated features ( # 615 )the differences can be as large as 10 - 20% at the sample sizes of practical interest .@>c@@>c@@>c@ # 1 : gaussian & # 2 : skewed unimodal & # 3 : strongly skewed + & & + # 4 : kurtotic unimodal & # 5 : outlier & # 6 : bimodal + & & + legend : horizontal axes : common logarithm of the sample size , .+ right vertical axes : , optimal , integer .+ left vertical axes , % : , , and relative with 2^nd^ ( ) , infinite , and optimal ( ) order kernels , respectively ; achievable relative mise ( infeasible ) .@>c@@>c@@>c@ # 7 : separated bimodal & # 8 : skewed bimodal & # 9 : trimodal + & & + # 10 : claw & # 11 : double claw & # 12 : asymmetric claw + & & + # 13 : asymmetric double claw & # 14 : smooth comb & # 15 : discrete comb + & & + there is little surprising about the performance of the second ( ) and infinite ( ) order kernels .the former performs well for small , but as the bandwidth converges to zero at the fastest rate , the mise of the kdfe quickly approaches that of the edf . in contradistinction , the sinc kernel is expected to deliver best results as the sample size approaches infinity , but underperforms for finite thus rendering its practical usefulness questionable unless the sample size is very large .importantly , for distributions # 1,2,5 , which are close to normal , the benefits of using higher order kernels are realized for sample sizes as small as 10 observations ( for the gaussian distribution the order kernel becomes optimal when ) .the benefits are still clear for distributions # 3,4,69 , albeit higher order kernels become optimal at sample sizes of around 1000 . for the remaining distributions ( # 1015 )the picture is less clear .while the optimal is bigger than one over a range of sample sizes , the reduction in mise it confers is either too small to matter in practice or occurs over a limited range of sample sizes ( which , of course , would not be known a priori ) .finally , the optimal if one were to use the asymptotically optimal bandwidth instead of the exact mise - minimising bandwidth also increases with the sample size , but monotone and much slower ( results omitted from figure [ fig : reloptmise.1 ] for better readability ) .the corresponding relative mise is always bigger than mise with and , and usually much bigger in small and medium size samples illustrating the fact that the choice of the asymptotic bandwidth may lead to poor performance .interesting exceptions occur with distributions # 1,2,5 ( and less so with # 6 and 8) , where in sample sizes less than about 20 the asymptotically optimal bandwidth delivers results almost as good as the exact mise - minimising bandwidth .exact mise results for nm distributions lend themselves naturally to the possibility of estimating the optimal bandwidth _ and _ kernel order by parametrically fitting a finite normal mixture distribution to the data and plugging this preliminary estimate into the exact mise expression which can then be minimised over and .the main competitor of the proposed nm plug - in approach is the cross - validation ( cv ) bandwidth of which directly minimises an estimate of mise .( cv criterion is an unbiased estimator of mise for sample size , up to an additive constant which does not depend on ) .cv approach performs well in simulations and has the advantage that it does not require any preliminary estimates .however , it does not provide a way to select the optimal kernel order . 
in contradistinction ,the nm plug - in approach yields estimates of and , and thus can be expected to outcompete cv in cases where higher order kernels provide a substantial improvement in mise and the underlying distribution can be well approximated by a finite normal mixture .the presence of the approximation error and the need to fit a mixture distribution , including determining the number of components , are the main drawbacks of the proposed procedure . in the remainder of this section the performance of the nm plug - in approachis assessed via a simulation study using the same fifteen nm distributions as in section [ sec : analysis.of.mise ] ( figure [ fig : mwnmdens1 - 15 ] ) and three non - nm distributions : a gamma(2,1 ) distribution ( cf .* ) and student s distributions with and degrees of freedom . at a preliminary stage ,an -component normal mixture distribution is fitted to the data using the expectation maximization ( em ) algorithm ( see e.g. * ? ? ?* ) with the number of components chosen by the akaike ( aic ) and bayesian ( bic ) information criteria .the resultant estimates of are denoted by and , respectively .there is considerable evidence that supports the use of bic to select the number of components . in particular, showed that if the goal is to estimate a density by a univariate normal mixture , choosing by bic yields a consistent estimator of ; see also ( * ? ? ?* sec.6.9.3 ) , , and references therein . for the nm distributions , the true number of components , , is also included for comparison . at the main stage , the preliminary estimate is treatedas if it were the known true nm distribution to find the optimal bandwidth , , and kernel order , , by minimising the exact mise expression . as in section[ sec : analysis.of.mise ] , optimisation was performed over , where guided by the results presented in figure [ fig : reloptmise.1 ] , was set to 8 , 9 , 10 , and 13 for sample sizes 50 , 100 , 200 , and 400 , respectively .performance of the resultant kdfe , , is evaluated by its integrated squared error , =\int_{-\infty}^{\infty}\{\widehat{f}(x;\hat{h})-f(x)\}^{2}dx ] , can be expressed as & = \int_{-\infty}^{\infty}dz \int_{-\infty}^{0}dx\int_{-\infty}^{0 } \left[\psi(x+z)\psi(y+z ) - 2\psi(x+z)f(y+z ) + f(x+z)f(y+z)\right]dy \\ \label{a.t1.p.isb } & = \frac{1}{2}\int_{-\infty}^{0}dv\int_{v}^{-v } \xi(u ) du = -\xi^{(-2)}(0),\end{aligned}\ ] ] where \\\notag & = \sum_{i=1}^{m}\sum_{j=1}^{m}w_{i}w_{j}\sum_{s=0}^{r-1}\sum_{t=0}^{r-1}\frac{(-1)^{s+t}h^{2s+2t}}{2^{s+t}s!t!}\phi^{(2s+2t)}(u;\mu_{i}-\mu_{j},\sigma_{ij,2}^{2 } ) \\\label{a.t1.p.isb.xi } & \qquad -2\sum_{i=1}^{m}\sum_{j=1}^{m}w_{i } w_{j}\sum_{s=0}^{r-1}\frac{(-1)^{s}h^{2s}}{2^{s}s ! } \phi^{(2s)}(u;\mu_{i}-\mu_{j},\sigma_{ij,1}^{2 } ) + \sum_{i=1}^{m}\sum_{j=1}^{m}w_{i}w_{j}\phi(u;\mu_{i}-\mu_{j},\sigma_{ij,0}^{2}).\end{aligned}\ ] ] the second equality in follows by rotating about the -axis counterclockwise by and stretching the resultant and axes by . integrating over using yields the expression for in .the final equality in then follows because and ( for , by ampw corollary 3.3 , and the terms involving cancel out ) , and hence .equation is a rearrangement of using , .* iv*. 
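a minimal sketch of the plug - in idea follows : a normal mixture is fitted by em with the number of components chosen by bic , and the bandwidth is then chosen by minimising a numerically evaluated mise for the fitted mixture . for brevity the kernel order is fixed at r = 1 ( plain gaussian kernel ) and grid quadrature replaces the closed - form expressions of theorem [ thm : exact.mise ] , so this illustrates the procedure rather than the implementation used in the paper .

```python
# sketch of the nm plug-in bandwidth: fit a gaussian mixture (bic-selected),
# then minimise a numerically evaluated mise(h) for the second-order kernel.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_mixture_bic(sample, max_comp=4):
    fits = [GaussianMixture(m, random_state=0).fit(sample[:, None])
            for m in range(1, max_comp + 1)]
    best = min(fits, key=lambda g: g.bic(sample[:, None]))
    return best.weights_, best.means_.ravel(), np.sqrt(best.covariances_.ravel())

def mise_numeric(h, w, mu, sd, n, grid):
    dx = grid[1] - grid[0]
    f = sum(wj * norm.cdf(grid, mj, sj) for wj, mj, sj in zip(w, mu, sd))
    dens = sum(wj * norm.pdf(grid, mj, sj) for wj, mj, sj in zip(w, mu, sd))
    # mean of the estimator: E Phi((x-X)/h) = sum_j w_j Phi((x-mu_j)/sqrt(sd_j^2+h^2))
    m = sum(wj * norm.cdf((grid - mj) / np.sqrt(sj ** 2 + h ** 2))
            for wj, mj, sj in zip(w, mu, sd))
    # E Phi((x-X)/h)^2 by quadrature over the fitted mixture density
    m2 = (norm.cdf((grid[:, None] - grid[None, :]) / h) ** 2 * dens[None, :]).sum(axis=1) * dx
    return (((m2 - m ** 2) / n + (m - f) ** 2).sum()) * dx

rng = np.random.default_rng(4)
sample = np.concatenate([rng.normal(-1.0, 0.6, 100), rng.normal(1.5, 0.8, 100)])
w, mu, sd = fit_mixture_bic(sample)
grid = np.linspace(sample.min() - 4.0, sample.max() + 4.0, 400)
hs = np.linspace(0.01, 1.5, 60)
h_hat = hs[np.argmin([mise_numeric(h, w, mu, sd, len(sample), grid) for h in hs])]
print("plug-in bandwidth (sketch):", round(float(h_hat), 3))
```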
the derivation of the integrated variance of follows the same steps as the derivation of isb .using and the expression for above we obtain & = n^{-1}\int_{-\infty}^{\infty}\left\{\operatorname{e}[g_{2r}(h^{-1}(z - x_{1}))^{2 } ] -\left(\operatorname{e}[g_{2r}(h^{-1}(z - x_{1}))]\right)^{2}\right\}dz\\ \label{a.t1.p.ivar } & = n^{-1}\frac{1}{2}\int_{-\infty}^{0}dv\int_{v}^{-v } \zeta(u)du = -n^{-1}\zeta^{(-2)}(0),\end{aligned}\ ] ] where , using , and the final equality in follows because and .equation obtains since for , ( ampw eq.2.13 , and by verification for ) , which gives the first equality in , , can be obtained using ampw corollary 6.2.2 , viz . for , , verifying by direct integration that holds with , we obtain from and , which equals since the last double sum is zero . expression for can be derived from by changing the summation over rows to summation over the diagonals and using to obtain where is the beta function . using the integral representation of ( dlmf http://dlmf.nist.gov/5.12.e1[5.12.1 ] )it is easy to see that .finally , substituting ( dlmf http://dlmf.nist.gov/8.17.e4[8.17.4 ] , http://dlmf.nist.gov/8.17.e5[8.17.5 ] ) gives .* as *. the second term in can be approximated for large by applying the euler - maclaurin sum formula ( dlmf http://dlmf.nist.gov/2.10.e1[2.10.1 ] ) , approximating by ] ( dlmf http://dlmf.nist.gov/5.11.e13[5.11.13 ] ) .this gives \phi\left(\frac{s-2r+1}{\sqrt{s}}\right)ds+o(r^{-3/2}).\ ] ] integrating by parts , changing the variables as , , and expanding into a taylor series around gives the leading term as , and thus .for computational reasons , especially when is large , it is convenient to express the kernels and the exact mise formulae in theorem [ thm : exact.mise ] using the kummer confluent hypergeometric function , . specifically , since for , ( dlmf http://dlmf.nist.gov/13.6.e17[13.6.17 ] , http://dlmf.nist.gov/13.2.e39[13.2.39 ] ) , can be written as similarly , using , ( dlmf http://dlmf.nist.gov/13.6.e16[13.6.16 ] ) , and changing the double summation over in and to summation over diagonals , the exact mise can be evaluated as = \frac{1}{\sqrt{2\pi}}\left(\frac{n-1}{2n}a_{2 } - a_{1}\right ) -\frac{h}{n\sqrt{\pi}}c(r ) -v_{0},\ ] ] where is defined in , and with defined in , , and , ,\\\label{thm1.mise.2.a2 } a_{2}&= -2\sqrt{2\pi}u(h;2 ) + \sum_{i=1}^{m}\sum_{j=1}^{m}w_{i}w_{j}\sigma_{ij,2 } \left[\sum_{s=1}^{2r-2}r_{s}\omega_{r , s}\left(\frac{h^{2}}{\sigma_{ij,2}^{2}/2}\right)^{s } { \prescript{}{1}{f}^{}_{1}}\left(s-\frac{1}{2},\frac{1}{2};-\frac{1}{2}\frac{(\mu_{j}-\mu_{i})^{2}}{\sigma_{ij,2}^{2}}\right)\right].\end{aligned}\ ] ] for the normal distribution ( ) , expressions - simplify to , the ratios of gamma functions can be evaluated either recursively , or as to avoid overflows with large positive .the kummer confluent hypergeometric function can be evaluated recursively in ( dlmf http://dlmf.nist.gov/13.3.e1[13.3.1 ] ) .thus , using the recurrence dlmf http://dlmf.nist.gov/8.17.e17[8.17.17 ] for the incomplete beta function appearing in , the quantities and in can be computed recursively in .a matlab ( http://www.mathworks.com/[www.mathworks.com ] ) implementation is available from the author upon request .results presented in this paper were computed with advanpix multiprecision computing toolbox for matlab ( http://www.advanpix.com/[www.advanpix.com ] ) . in the multi - precision implementation, a backward recursion is used to compute starting with and . 
in the standard double precision version, it is better to use the matlab built - in incomplete beta function instead , as the errors accumulate fast .same applies to computation of in and in .in practice it is common to choose the bandwidth by simple plug - in or reference rules , such as the normal reference rule ( nrr ) . for example , with the second order gaussian kernel , the asymptotically optimal bandwidth for the normal distribution with variance is .an exact mise nrr bandwidth can be defined in a similar fashion . for the normal distribution ,the exact mise - minimising bandwidth is of the form , where is the bandwidth optimal for the standard normal distribution , which is straightforward to compute .following , let , where is the interquartile range of the distribution and is the gaussian quantile function ; .then the version of the nrr ( or silverman s rule of thumb ) bandwidth based on the exact mise can be defined as , where is the standard deviation of .analogous definition for the infinite order kernel is simply .figure [ fig : nrrmise ] shows the relative mise achievable with the second order kernel and the nrr bandwidth . using the rather than either or alone turns out to better in virtually all examples and sample sizes considered . as expected , the nrr bandwidth performs well for the moderately skewed unimodal distribution # 2 ( it coincides with the optimal bandwidth for the normal distribution ) .surprisingly , it also performs well for the outlier , bimodal , skewed bimodal , and trimodal distributions , as well as generally for very small sample sizes .kdfe with the nrr bandwidth will also level off with the edf in terms of mise asymptotically .however , as is clearly seen for the strongly skewed and comb - like distributions , performance in samples as large as a million observations can be extremely poor .performance of the nrr bandwidth with higher order kernels and/or asymptotic nrr bandwidth is generally much worse and is therefore not shown .@>c@@>c@@>c@ # 1 : gaussian & # 2 : skewed unimodal & # 3 : strongly skewed + & & + # 4 : kurtotic unimodal & # 5 : outlier & # 6 : bimodal + & & + # 7 : separated bimodal & # 8 : skewed bimodal & # 9 : trimodal + & & + # 10 : claw & # 11 : double claw & # 12 : asymmetric claw + & & + # 13 : asymmetric double claw & # 14 : smooth comb & # 15 : discrete comb + & & + legend : horizontal axes : . vertical axes , % : minimum relative mise with the second order kernel and ( i ) the optimal bandwidth ( ) , ( ii ) the nrr bandwidth ( the smaller of and in parentheses ) . for reference ,grey dashed and dash - dot lines show the and the minimum relative with the optimal order kernel as in figure [ fig : reloptmise.1 ] .the author would like to thank anonymous referees for helpful comments .abdous , b. 1993 , ` note on the minimum mean integrated squared error of kernel estimates of a distribution function and its derivatives ' , _ communications in statistics - theory and methods _ * 22*(2 ) , 603609 .doi : http://dx.doi.org/10.1080/03610929308831040[10.1080/03610929308831040 ] aldershof , b. , marron , j. s. , park , b. u. wand , m. p. 1995, ` facts about the gaussian probability density function ' , _ applicable analysis _ * 59*(1 ) , 289306 .doi : http://dx.doi.org/10.1080/00036819508840406[10.1080/00036819508840406 ] altman , n. lger , c. 
1995 , ` bandwidth selection for kernel distribution function estimation ' , _ journal of statistical planning and inference _ * 46*(2 ) , 195214 .doi : http://dx.doi.org/10.1016/0378-3758(94)00102-2[10.1016/0378-3758(94)00102-2 ] azzalini , a. 1981 , ` a note on the estimation of a distribution function and quantiles by a kernel method ' , _ biometrika _ * 68*(1 ) , 326328 .doi : http://dx.doi.org/10.1093/biomet/68.1.326[10.1093/biomet/68.1.326 ] bowman , a. , hall , p. prvan , t. 1998 , ` bandwidth selection for the smoothing of distribution functions ' , _ biometrika _ * 85*(4 ) , 799808 .doi : http://dx.doi.org/10.1093/biomet/85.4.799[10.1093/biomet/85.4.799 ] butorina , y. o. nikitin , y. y. 2011 , ` on large deviations of smoothed kolmogorov - smirnov s statistics ' , _ vestnik st .petersburg university : mathematics _ * 44*(2 ) , 97102 .doi : http://dx.doi.org/10.3103/s106345411102004x[10.3103/s106345411102004x ] chacn , j. e. , monfort , p. tenreiro , c. 2014 , ` fourier methods for smooth distribution function estimation ' , _ statistics and probability letters _ * 84 * , 223230 .doi : http://dx.doi.org/10.1016/j.spl.2013.10.010[10.1016/j.spl.2013.10.010 ] chernozhukov , v. , fernndez - val , i. galichon , a. 2009 , ` improving point and interval estimators of monotone functions by rearrangement ' , _ biometrika _ * 96*(3 ) , 559575 .doi : http://dx.doi.org/10.1093/biomet/asp030[10.1093/biomet/asp030 ] falk , m. 1983 , ` relative efficiency and deficiency of kernel type estimators of smooth distribution functions ' , _ statistica neerlandica _ * 37*(2 ) , 7383 .doi : http://dx.doi.org/10.1111/j.1467-9574.1983.tb00802.x[10.1111/j.1467-9574.1983.tb00802.x ] fraley , c. raftery , a. e. 2002 , ` model - based clustering , discriminant analysis , and density estimation ' , _ journal of the american statistical association _ * 97*(458 ) , 611631 .doi : http://dx.doi.org/10.1198/016214502760047131[10.1198/016214502760047131 ] fryer , m. j. 1976 , ` some errors associated with the non - parametric estimation of density functions ' , _ journal of the institute of mathematics and its applications _ * 18*(3 ) , 371380 .doi : http://dx.doi.org/10.1093/imamat/18.3.371[10.1093/imamat/18.3.371 ] glad , i. k. , hjort , n. l. ushakov , n. g. 2003 , ` correction of density estimators that are not densities ' , _ scandinavian journal of statistics _ * 30*(2 ) , 415427 .doi : http://dx.doi.org/10.1111/1467-9469.00339[10.1111/1467-9469.00339 ] glad , i. k. , hjort , n. l. ushakov , n. g. 2007 , density estimation using the sinc kernel , preprint statistics 2/2007 , norwegian university of science and technology .url : https://www.math.ntnu.no/preprint/statistics/2007/s2-2007.pdf hansen , b. e. 2005 , ` exact mean integrated squared error of higher order kernel estimators ' , _ econometric theory _ * 21*(6 ) , 10311057 .doi : http://dx.doi.org/10.1017/s0266466605050528[10.1017/s0266466605050528 ] marron , j. s. wand , m. p. 1992 , ` exact mean integrated squared error ' , _ annals of statistics _ *20*(2 ) , 712736 .doi : http://dx.doi.org/10.1214/aos/1176348653[10.1214/aos/1176348653 ] mclachlan , g. peel , d. 2000 , _ finite mixture models _ , john wiley & sons .nadaraya , e. a. 1964 , ` some new estimates for distribution functions ' , _ theory of probability and its applications _ * 9*(3 ) , 497500 .doi : http://dx.doi.org/10.1137/1109069[10.1137/1109069 ] olver , f. w. j. , lozier , d. w. , boisvert , r. f. clark , c. w. 
, eds 2010 , _ nist handbook of mathematical functions _ ,cambridge university press , new york , ny . polansky , a. m. baker , e. r. 2000 , ` multistage plug - in bandwidth selection for kernel distribution function estimates ' , _ journal of statistical computation and simulation _ * 65*(1 - 4 ) , 6380 .doi : http://dx.doi.org/10.1080/00949650008811990[10.1080/00949650008811990 ] reiss , r. 1981 , ` nonparametric estimation of smooth distribution functions ' , _ scandinavian journal of statistics _ * 8*(2 ) , 116119 .url : http://www.jstor.org/stable/4615820 roeder , k. wasserman , l. 1997 , ` practical bayesian density estimation using mixtures of normals ' , _ journal of the american statistical association _ * 92*(439 ) , 894902 .doi : http://dx.doi.org/10.1080/01621459.1997.10474044[10.1080/01621459.1997.10474044 ] silverman , b. w. 1986 , _ density estimation _ , chapman and hall .swanepoel , j. w. h. 1988 , ` mean intergrated squared error properties and optimal kernels when estimating a distribution function ' , _ communications in statistics theory and methods _ * 17*(11 ) , 37853799 .doi : http://dx.doi.org/10.1080/03610928808829835[10.1080/03610928808829835 ] tenreiro , c. 2006 , ` asymptotic behaviour of multistage plug - in bandwidth selections for kernel distribution function estimators ' , _ journal of nonparametric statistics _ * 18*(1 ) , 101116 .doi : http://dx.doi.org/10.1080/10485250600578334[10.1080/10485250600578334 ] wand , m. p. schucany , w. r. 1990 , ` gaussian - based kernels ' , _ canadian journal of statistics _ * 18*(3 ) , 197204 .doi : http://dx.doi.org/10.2307/3315450[10.2307/3315450 ] watson , g. s. leadbetter , m. r. 1964 , ` hazard analysis ii ' , _ sankhy : the indian journal of statistics , series a _ * 26*(1 ) , 101116 .url : http://www.jstor.org/stable/25049316
|
an exact , closed form , and easy to compute expression for the mean integrated squared error ( mise ) of a kernel estimator of a normal mixture cumulative distribution function is derived for the class of arbitrary order gaussian - based kernels , extending the results of ( ) , ` exact mise ' , _ ann . stat . _ 20(2 ) , 712736 , to estimation of distribution functions . comparisons are made with mise of the empirical distribution function and the infeasible minimum mise of kernel estimators . the analysis also offers a guide on when to use higher order kernels in distribution function estimation . a simple plug - in method of simultaneously selecting the optimal bandwidth and kernel order is proposed , wherein a normal mixture distribution is fitted to the data at a preliminary stage and the resultant estimate is plugged into the exact mise formula . a simulation study conducted to evaluate performance of this method in finite samples suggests that it works well , although it does not always outperform existing bandwidth selection procedures . * keywords * : smoothing , normal mixture , gaussian - based kernel , plug - in rule , finite samples . * ams subject classification * : 62g05
|
in the design and analysis of wireless communication networks , the signal - to - interference ratio ( sir ) , defined in the next section , is a key quantity . in this paper , we consider the probability distribution of the sir in the spatial stochastic models of downlink cellular networks , where the wireless base stations ( bss ) are deployed according to spatial point processes on the euclidian plane ( see , e.g. , ) .the sir distribution in these cellular network models can be provided as a closed - form or a numerically computable form for some restricted cases such as the bss are deployed according to homogeneous poisson point processes or kinds of determinantal point processes with specific propagation effects of fading and shadowing ( see , e.g. , ) .however , such special cases can not always represent the actual bs deployments and propagation effects in the real cellular networks , so that some approximation and/or asymptotic approaches have been proposed to cope with more general models ( see , e.g. , ) . in the current paper, we focus on the tail asymptotics of the sir distribution in the single - tier network models , where the bss are deployed according to general stationary point processes with general propagation effects .recently , ganti & haenggi consider the same general cellular network models and investigate the asymptotics of the sir distribution both at the origin and at infinity . in , they derive the tail asymptotic result which is just our concern , but they do not mention any conditions under which the result holds . in their proof ,a technique of changing the order of the limit and integrals is used , which generally requires a kind of uniform integrability condition .this paper then compensates for the lack of the uniform integrability condition and exposes a sufficient condition for the order change of the limit and integrals .we further give some examples satisfying such a sufficient condition as well as a counterexample violating it .the rest of the paper is organized as follows : first , we describe the spatial stochastic model of single - tier downlink cellular networks and define the sir for the typical user in the next section .section [ sec : asymptotic_result ] states the main result , where we give a sufficient condition under which the tail asymptotics of the sir distribution is properly obtained . in section [ sec : examples ] , we illustrate some examples satisfying the condition and indicate the asymptotic results for the corresponding models of the examples .we further give a simple counterexample violating the sufficient condition .let denote a point process on , which is assumed to be ordered such that .each point , , represents the location of a bs of the cellular network and we refer to the bs located at as bs .we assume that the point process is simple and locally finite almost surely in ( -a.s . ) and also stationary with intensity .assuming further that all bss transmit at the same power level and each user is associated with the nearest bs , we focus on a typical user located at the origin .let , , denote the random propagation effect representing the fading and shadowing from bs to the typical user , where , , are mutually independent and identically distributed ( i.i.d . 
) , and independent of the point process as well .the path - loss function representing the attenuation of signals with distance is given by , , for .the downlink sir for the typical user is then given by where we should recall that is the nearest point of from the origin .our concern in the current paper is the tail asymptotics of the sir distribution ; that is , the asymptotic behavior of as .in this and later sections , and denote respectively the palm probability and the corresponding expectation with respect to the marked point process ( see , e.g. , ( * ? ? ?note that while .note also that , due to the mutual independence of and , for . when we consider under the palm distribution , we use the index for the point at the origin ; that is , . to give the main theorem which is a refinement of theorem 4 of , we first define the typical voronoi cell and its circumscribed radius . for a point process on and a point of , the voronoi cell of with respect to defined as the set ; that is , the set of points in whose distance to is not greater than that to any other points of .the typical voronoi cell is then under the palm distribution and its circumscribed radius , denoted by , is the radius of the smallest disk centered at the origin and containing under .[ thm : general ] we suppose the following . a. [ conda ] for the point process , and for any .b. [ condb ] for the sequence of the propagation effects , and there exist and such that the laplace transform of , , satisfies for .it then holds that .\end{aligned}\ ] ] one can show that the right - hand side of does not depend on the intensity of the point process ( see the remark of definition 4 in ) .the right - hand side of in theorem [ thm : general ] is identical to in theorem 4 of ; that is , that theorem and our theorem [ thm : general ] assert the same result .the difference between the two theorems is that we provide a set of conditions ( [ conda ] ) and ( [ condb ] ) , the role of which is discussed in the proof and the remarks thereafter .let denote the distribution function of , , and let . by and , , the tail probability of the downlink sir for the typical useris expressed as applying the palm inversion formula ( see , e.g. , ( * ? ? ?4.2 ) ) to the right - hand side above , \ , { \mathrm{d}}x \\ & = \theta^{-1/\beta}\,\lambda \int_{{\mathbb{r}}^2 } { \mathsf{e}}^0\biggl [ \bar{f_h}\biggl ( |y|^{2\,\beta } \sum_{i\in{\mathbb{n } } } \frac{h_i } { |x_i-\theta^{-1/(2\,\beta)}\,y|^{2\,\beta } }\biggr ) \\ & \qquad\qquad\qquad\qquad\mbox{}\times { \boldsymbol{1}_{{\mathcal{c}}(o)}}(\theta^{-1/(2\,\beta)}\,y ) \biggr]\ , { \mathrm{d}}y,\end{aligned}\ ] ] where the second equality follows by substituting . here ,if we can find a random function satisfying the dominated convergence theorem leads to we leave finding such an and approve for a moment . substituting to the last integral in yields \int_{{\mathbb{r}}^2}\bar{f_h}(|z|^{2\,\beta})\,{\mathrm{d}}z,\end{aligned}\ ] ] andfurthermore , hence , applying and to , we obtain .it remains to find a function satisfying and . since is nonincreasing and -a.s . 
for , we can set the function satisfying as then , substituting and applying again , we have .\end{aligned}\ ] ] for the second expectation on the right - hand side above , applying the identity yields \\ & = \frac{1}{\gamma(1/\beta ) } \int_0^\infty s^{-1 + 1/\beta}\\ & \qquad\qquad\mbox{}\times { \mathsf{e}}^0\biggl [ \exp\biggl ( -s \sum_{i\in{\mathbb{n } } } \frac{h_i}{\bigl(|x_i| + r(o)\bigr)^{2\,\beta } } \biggr ) \biggr]\ , { \mathrm{d}}s \\ & = \frac{1}{\gamma(1/\beta ) } \int_0^\infty s^{-1 + 1/\beta}\\ & \qquad\qquad\mbox{}\times { \mathsf{e}}^0\biggl [ \prod_{i\in{\mathbb{n } } } { \mathcal{l}}_h\biggl ( \frac{s}{\bigl(|x_i| + r(o)\bigr)^{2\,\beta } } \biggr ) \biggr]\ , { \mathrm{d}}s,\end{aligned}\ ] ] where denotes euler s gamma function . recall that , , are ordered such that .thus , by taking such that , and applying for from condition ( [ condb ] ) , we have \ , { \mathrm{d}}s \nonumber\\ & \le \int_0^\infty s^{-1 + 1/\beta}\ , { \mathsf{e}}^0\biggl [ { \mathcal{l}}_h\biggl ( \frac{s}{\bigl(|x_k| + r(o)\bigr)^{2\,\beta } } \biggr)^k \biggr]\ , { \mathrm{d}}s \nonumber\\ & \le { \mathsf{e}}^0\biggl [ \int_0^{(|x_k| + r(o))^{2\,\beta } } s^{-1 + 1/\beta}\ , { \mathrm{d}}s \biggr ] \nonumber\\ & \quad\mbox { } + { c_h}^k\ , { \mathsf{e}}^0\biggl [ \bigl(|x_k| + r(o)\bigr)^{2\,\alpha\,\beta\,k } \nonumber\\ & \qquad\quad\qquad\mbox{}\times \int_{(|x_k| + r(o))^{2\,\beta}}^\infty s^{-1 + 1/\beta-\alpha\,k}\ , { \mathrm{d}}s \biggr ] \nonumber\\ & = \beta\ , \bigl ( 1 + \frac{{c_h}^k}{\alpha\,\beta\,k-1 } \bigr)\ , { \mathsf{e}}^0\bigl [ \bigl(|x_k| + r(o)\bigr)^2 \bigr].\end{aligned}\ ] ] hence , the inequality ensures under condition ( [ conda ] ) of the theorem .the differences between the proof of and ours are as follows .the first and less essential one is that , in , they modify the right - hand side of into an appropriate form and then apply the campbell - mecke formula . on the other hand , we apply the palm inversion formula directly .second , does not specify any condition under which the result holds .however , equality requires some kind of uniform integrability condition to change the order of the limit and integrals .our set of conditions ( [ conda ] ) and ( [ condb ] ) in theorem [ thm : general ] gives a sufficient condition for this order change to be valid .the condition ( [ condb ] ) claims that the laplace transform of , , decays faster than or equal to the power law .though this condition excludes the distribution with a mass at the origin , it covers many practical distributions .for example , the gamma distribution , , , has the laplace transform and we can take . the asymptotic constant of in theorem [ thm : general ] depends on the point process and the distribution of the propagation effects .the following proposition gives the impact of the propagation effects to the asymptotic constant by comparing with the case without propagation effects .[ thm : inequality ] let denote the limit on the right - hand side of , specifying the dependence on the value of and the propagation effect distribution . when , we have where denotes the dirac measure with the mass at the unity .the result immediately follows from jensen s inequality conditioned on . 
on the right hand - side of , since is convex for , \ge { \mathsf{e}}^0\biggl [ \biggl ( \sum_{i\in{\mathbb{n}}}\frac{{\mathsf{e}}h_1}{|x_i|^{2\,\beta } } \biggr)^{-1/\beta } \biggr ] \\ & = \frac{1}{({\mathsf{e}}h_1)^{1/\beta}}\ , { \mathsf{e}}^0\biggl [ \biggl ( \sum_{i\in{\mathbb{n } } } \frac{1}{|x_i|^{2\,\beta } } \biggr)^{-1/\beta } \biggr],\end{aligned}\ ] ] and holds . when ( rayleigh fading without shadowing ) , the result of proposition [ thm : inequality ] coincides with the second part of theorem 2 in . in the inequality ,it is easy to see ( by jensen s inequality ) that the coefficient is smaller than or equal to the unity . now , suppose that . then , the dominated convergence theorem ( due to a.s . ) leads to as both and , which implies that might be true when the value of is close to the unity or sufficiently large .in this section , we consider the homogeneous poisson point process with finite and nonzero intensity .we first confirm that satisfies the condition ( [ conda ] ) of theorem [ thm : general ] .[ lem : poi_a ] let denote the homogeneous poisson point process with intensity , where , , are ordered such that .then , for , this lemma ensures that both , , and have any order of moments .let denote the disk centered at the origin with radius .by slivnyak s theorem ( see , e.g. , ( * ? ? ?1.4 ) , ) , therefore , exploiting for , and holds , where and denote respectively the regularized lower and upper incomplete gamma functions . on the other hand , for the circumscribed radius of the typical voronoi cell of , calka ( ** theorem 3 ) shows that so that , we have and holds , where denotes the upper incomplete gamma function and .we now apply theorem [ thm : general ] and obtain the following .[ cor : poisson ] suppose that is the homogeneous poisson point process .when the propagation effect sequence satisfies the condition ( [ condb ] ) of theorem [ thm : general ] , the right - hand side of reduces to .since the conditions of theorem [ thm : general ] are fulfilled , the result follows from the proof of lemma 6 in .the asymptotic result from corollary [ cor : poisson ] agrees with that in ( * ? ? ?* remark 4 ) , where the rayleigh fading is considered .corollary [ cor : poisson ] states that the downlink coverage probability in the poisson cellular network is asymptotically insensitive to the distribution of the propagation effects as far as it satisfies the condition ( [ condb ] ) of theorem [ thm : general ] . in this section, we consider a general stationary and isotropic determinantal point process on with intensity .let : denote the continuous kernel of with respect to the lebesgue measure . then , the joint intensities ( correlation functions ) , , with respect to the lebesgue measure are given by for , , , .note that , due to the stationarity and isotropy , it holds that and that . in order for the point process to be well - defined, we assume that ( i ) the kernel is hermitian in the sense that for , , where denotes the complex conjugate of , and ( ii ) the integral operator on corresponding to is of locally trace class with the spectrum in ] for each ( see , e.g. , ( * ? ? ? * chap . 4 ) ) . concerning the condition ( [ conda ] ) of theorem [ thm : general ] , we show the following . [lem : determinantal ] a. let , , denote the points of such that .then , there exist and such that , for any , we can take a sufficiently large satisfying b. let denote the circumscribed radius of the typical voronoi cell of . 
then , there exist and such that , for , by lemma [ lem : determinantal ] , it is easy to confirm , similar to lemma [ lem : poi_a ] , that both , , and have any order of moments under .to prove lemma [ lem : determinantal ] , we use the following supplementary lemma , the proof of which is in the appendix : [ lem : kernel ] the kernel of satisfies let denote the reduced palm probability with respect to the marked point process and let denote a bounded set in .since the ( reduced ) palm version of a determinantal point process is also determinantal ( see , e.g. , ) , under has the same distribution as with some kind of mutually independent bernoulli random variables , ( see , e.g. , ( * ? ? ? * sec .thus , the chernoff - hoeffding bound for an infinite sum with finite mean ( see , e.g. , for a finite sum ) implies that , for any , there exists a such that where denotes the expectation with respect to . on the other hand ,the kernel of the palm version of is given by ( see ) therefore , the intensity function ( -correlation ) of under reduces to so that , lemma [ lem : kernel ] leads to where denotes the lebesgue measure on .[ [ proof - of - i ] ] _ proof of ( i ) _+ + + + + + + + + + + + + + note that . since from , applying this to yields hence , for any and , we can take satisfying , which implies . [[ proof - of - ii ] ] _ proof of ( ii ) _+ + + + + + + + + + + + + + + we here derive an upper bound of by using foss & zuyev s seven petals , which are considered to obtain an upper bound of the tail probability for the circumscribed radius of the typical poisson - voronoi cell .consider a collection of seven disks of common radii centered at the points , , in polar coordinates .the petal is given as the intersection of two circles centered at , and the angular domain between the rays and .the petal is the rotation copy of petal by angle for ( see figure [ fig : petals ] ) .let , , denote the set formed by petal on the complex plane . 
then , according to the discussion in the proof of lemma 1 of , where the second inequality follows from the isotropy of the palm version of .now , we can apply the bound with and we have hence , holds since and .( figure [ fig : petals ] : the seven petals of common radius , one per angular sector , used in the bound . ) we can take in equal to the unity since determinantal point processes are weakly sub - poisson ( in particular , due to the -weakly sub - poisson property ) ( see for details ) .when the kernel of the determinantal point process is explicitly specified , it may be possible to obtain a tighter upper bound on the tail probability of the circumscribed radius of the typical voronoi cell .for example , the case of the ginibre point process is given by the following proposition .[ prp : ginibre - radius ] for the ginibre point process ( with intensity ) , the circumscribed radius for the typical voronoi cell satisfies where the kernel of the ginibre point process is given by with respect to the lebesgue measure ( see , e.g. , ) .thus , the intensity function of the ( reduced ) palm version reduces to now , we obtain two lower bounds of as follows .let denote the circular sector centered at the origin with radius and angular domain between and .when we take and , we have .therefore , applying , we have the first lower bound ; the second lower bound is given by hence , we have from and with . indeed , there is such that for and for .we are now ready to give the tail asymptotics of the sir distribution when the bss are deployed according to the ginibre point process .[ cor : ginibre ] suppose that is the ginibre point process .when the propagation effect sequence satisfies the condition ( [ condb ] ) of theorem [ thm : general ] , we have furthermore , when ( nakagami- fading without shadowing ) , where denotes the beta function . for the proof of corollary [ cor : ginibre ] , we use the following proposition which is a consequence of and . [ prp : kostlan ] let , , denote the points of the reduced palm version of the ginibre point process .then , the set has the same distribution as , where , , are mutually independent and for each .
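As a purely numerical illustration of the representation in the proposition above, the following Python sketch estimates the SIR tail probability of the typical user by Monte Carlo, using the fact that the squared moduli of the Ginibre points can be sampled as independent gamma variables. The Rayleigh fading (exponential propagation effects), the path-loss exponent and the truncation of the process to finitely many points are assumptions made for the example only; they are not parameter choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_tail_ginibre(theta_db, beta=2.0, n_points=200, n_mc=10000):
    """Monte Carlo estimate of P(SIR > theta) for the typical user when the
    base stations form a Ginibre point process (unit-scale kernel, intensity
    1/pi) with i.i.d. exponential fading and path loss ell(r) = r^(-2*beta).
    The squared distances to the points are sampled as independent Gamma(i, 1)
    variables; truncating at n_points is an approximation (far points
    contribute negligibly to the interference)."""
    theta = 10.0 ** (theta_db / 10.0)
    hits = 0
    for _ in range(n_mc):
        d2 = rng.gamma(shape=np.arange(1, n_points + 1), scale=1.0)
        h = rng.exponential(scale=1.0, size=n_points)   # fading marks
        rx = h * d2 ** (-beta)                          # received powers
        b = int(np.argmin(d2))                          # the nearest BS serves
        sir = rx[b] / (rx.sum() - rx[b])
        hits += sir > theta
    return hits / n_mc

for t_db in (0.0, 10.0, 20.0):
    print(t_db, "dB:", sir_tail_ginibre(t_db))
```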
for the ginibre point process , we can see from lemma [ lem : determinantal ] ( or proposition [ prp : ginibre - radius ] ) that , , and have any order of moments with respect to the palm probability ; that is , the condition ( [ conda ] ) of theorem [ thm : general ] is fulfilled .thus , applying the identity to the right - hand side of , we have \\ & = \frac{1}{\gamma(1/\beta ) } \int_0^\infty s^{-1 + 1/\beta}\ , { \mathsf{e}}^0\biggl [ \exp\biggl ( -s \sum_{i\in{\mathbb{n } } } \frac{h_i}{|x_i|^{2\,\beta } }\biggr ) \biggr]\ , { \mathrm{d}}s \\ & = \frac{1}{\gamma(1/\beta ) } \int_0^\infty s^{-1 + 1/\beta}\ , { \mathsf{e}}^0\biggl [ \prod_{i\in{\mathbb{n } } } { \mathcal{l}}_h\biggl ( \frac{s}{|x_i|^{2\,\beta } } \biggr ) \biggr]\ , { \mathrm{d}}s \\ & = \frac{1}{\gamma(1 + 1/\beta ) } \int_0^\infty { \mathsf{e}}^0\biggl [ \prod_{i\in{\mathbb{n } } } { \mathcal{l}}_h\biggl ( \biggl(\frac{t}{|x_i|^2}\biggr)^\beta \biggr ) \biggr]\ , { \mathrm{d}}t,\end{aligned}\ ] ] where the last equality follows by substituting . here, proposition [ prp : kostlan ] states that under with , so that , applying the density function of , , we have .when , then and .thus , applying these to the right - hand side of , finally , substituting and applying , we obtain . when , reduces to the result of ( * ?* theorem 2 ) , which considers the rayleigh fading .finally , we give a simple counterexample that violates the condition ( [ conda ] ) of theorem [ thm : general ] .let denote a random variable with density function , , for .given a sample of , we consider the mixed and randomly shifted lattice , where denotes a uniformly distributed random variable on \times[0,t] ] .hence , applying to the above , we have other examples will be investigated in the extended version of the paper .the first author wishes to thank radha krishna ganti for directing his interest to the sir asymptotics in cellular network models based on general stationary point processes .the first author s work was supported by the japan society for the promotion of science ( jsps ) grant - in - aid for scientific research ( c ) 25330023 .the second author s work was supported by jsps grant - in - aid for scientific research ( b ) 26287019 . for a compact set , let denote the restriction of on .let also and , , denote respectively the eigenvalues of and the corresponding orthonormal eigenfunctions ; that is , then mercer s theorem states that the following spectral expansion holds ( see , e.g. , ) ; note that ] , , and .hence , letting , we obtain .r. k. ganti and m. haenggi , `` asymptotics and approximation of the sir distribution in general cellular networks , '' _ ieee trans .wireless commun ._ , vol . 15 , pp .21302143 , 2016 . n. miyoshi and t. shirai , `` downlink coverage probability in a cellular network with ginibre deployed base stations and nakagami- fading channels , '' _ wiopt 2015 _ , pp .483489 , 2015 .
|
we consider the spatial stochastic model of single - tier downlink cellular networks , where the wireless base stations are deployed according to a general stationary point process on the euclidean plane with general i.i.d . propagation effects . recently , ganti & haenggi ( 2016 ) consider the same general cellular network model and , as one of many significant results , derive the tail asymptotics of the signal - to - interference ratio ( sir ) distribution . however , they do not mention any conditions under which the result holds . in this paper , we compensate their result for the lack of the condition and expose a sufficient condition for the asymptotic result to be valid . we further illustrate some examples satisfying such a sufficient condition and indicate the corresponding asymptotic results for the example models . we give also a simple counterexample violating the sufficient condition .
|
the characteristic set method of wu has freed ritt s decomposition from polynomial factorization , opening the door to a variety of discoveries in polynomial system solving . in the past two decadesthe work of wu has been extended to more powerful decomposition algorithms and applied to different types of polynomial systems or decompositions : differential systems , difference systems , real parametric systems , primary decomposition , cylindrical algebraic decomposition . today, triangular decomposition algorithms provide back - engines for computer algebra system front - end solvers , such as maple s solve command. algorithms computing triangular decompositions of polynomial systems can be classified in several ways .one can first consider the relation between the input system and the output triangular systems . from that perspective ,two types of decomposition are essentially different : those for which encode all the points of the zero set ( over the algebraic closure of the coefficient field of ) and those for which represent only the `` generic zeros '' of the irreducible components of .one can also classify triangular decomposition algorithms by the algorithmic principles on which they rely . from this other angle ,two types of algorithms are essentially different : those which proceed _ by variable elimination _ , that is , by reducing the solving of a system in unknowns to that of a system in unknowns and those which proceed _ incrementally _ , that is , by reducing the solving of a system in equations to that of a system in equations .the characteristic set method and the algorithm in belong to the first type in each classification .algorithm , which is an elimination method solving in the sense of the `` generic zeros '' , has brought efficient techniques , based on the concept of a _regular chain_. other works on triangular decomposition algorithms focus on incremental solving .this principle is quite attractive , since it allows to control the properties and size of the intermediate computed objects .it is used in other areas of polynomial system solving such as the probabilistic algorithm of lecerf based on lifting fibers and the numerical method of sommese , verschelde , wample based on diagonal homotopy .incremental algorithms for triangular decomposition rely on a procedure for computing the intersection of an hypersurface and the quasi - component of a regular chain .thus , the input of this operation can be regarded as well - behaved geometrical objects .however , known algorithms , namely the one of lazard and the one of the second author are quite involved and difficult to analyze and optimize . in this paper, we revisit this intersection operation .let ] , the function call returns regular chains (t) ] where is the saturated ideal of the regular chain . a modular algorithm for computing these gcds appears in : if is known to be radical , the performance ( both in theory and practice ) of this algorithm are very satisfactory whereas if is not radical , the complexity of the algorithm increases substantially w.r.t . the radical case . in this paper ,the ring will be of the form /\sqrt{{\mbox{{\rm sat}}}} ] denotes the ring of polynomials over , with ordered variables . 
let ] .if is not constant , then we denote by and the pseudo - remainder and the pseudo - quotient of by as univariate polynomials in .we say that is less than q and write if either and or both are non - constant polynomials such that holds , or and both hold .we write if neither nor hold .* notations for polynomial sets .* let . ] .for an ideal ] the _ saturated ideal _ of w.r.t . , denoted by , is the ideal \mid \exists m\in\mathbb{n}\text { s.t . }h^mq\in { \cal i}\} ] be a _ triangular set _, that is , a set of non - constant polynomials with pairwise distinct main variables .the set of main variables and the set of ranks of the polynomials in are denoted by and , respectively .a variable in is called _ algebraic _ w.r.t . it belongs to , otherwise it is said _ free _ w.r.t . . for ,denote by the polynomial in with main variable . for ,we denote by ( resp . ) the set of polynomials such that ( resp . ) holds .let be the product of the initials of the polynomials in .we denote by the _ saturated ideal _ of defined as follows : if is empty then is the trivial ideal , otherwise it is the ideal .the _ quasi - component _ of is defined as .denote as the zariski closure of . for ] be another triangular set .we say that has smaller rank than and we write if there exists such that holds and : either ; or and .we write if .* iterated resultant . *let ] .we define by induction :if , then ; otherwise let be greatest variable appearing in , then . * regular chain . * a triangular set ] be finite .let be a finite set of regular chains of ] . then , holds if and only if holds .[ proposition : equal - dim ] let and be two regular chains of ] such that is regular w.r.t . .then is also regular w.r.t . .[ proposition : radsatprem ] let ] be a regular chain .let and such that holds .then , we have . [corollary : sqrtsat ] let and be two regular chains of ] with such that is regular w.r.t .both and .assume that holds .then we also have .[ proposition : equivalence ] let ] be a regular chain .then the following statements are equivalent : * the polynomial is regular w.r.t . , * for each prime ideal associated with , we have , * the iterated resultant is not zero . [corollary : regular ] let ] be a regular chain .let and .we have : * the polynomial is regular w.r.t . if and only if is regular w.r.t . ; * if and is regular w.r.t . , then is regular w.r.t . .as mentioned before , definition [ definition : regular - gcd ] was introduced in as part of a formal framework for algorithms manipulating regular chains . in the present paper ,the ring will always be of the form /\sqrt{{\mbox{{\rm sat}}}} ] is also called a regular gcd of modulo [ definition : regular - gcd ] let be a commutative ring with unity .let ] is a _ regular gcd _ of if : * the leading coefficient of in is a regular element ; * belongs to the ideal generated by and in ] be a regular chain , possibly empty .let ] , and thus we have . since and are regular modulo , the same property holds for .together with ( [ eqs : tgq ] ) , we obtain .therefore .the inclusion relation in follows from ( [ eqs : tgq0 ] ) .we prove .assume . with ([ eqs : tgq0 ] ) and ( [ eqs : tgq ] ) , this hypothesis implies that is a regular chain and holds .since also holds , is contained in .conversely , for any , there exists an integer and ] , for .let , .assume that .let .let be a regular chain of ] and .let be the subresulant polynomials of and w.r.t . 
in ] .[ theorem : regulargcd ] let be an integer , with , such that is a regular element of and such that for any , we have in . then is a regular gcd of and in ] and let be the fraction field of the integral domain .let be the homomorphism from to . by theorem [ theorem : gcd ] of appendix[ app : subresultantchain ] , we know that is a gcd of and in ] such that in ] such that in ] , which implies that in ] is a pair , where ] is a regular chain . the process is also written as for short . given two processes and , let and be respectively the greatest variable appearing in and .we say if : either ; or and ; or , and ; or , , and .we write if neither nor hold . clearly any sequence of processes which is strictly decreasing w.r.t . is finite .[ definition : split ] let , , be regular chains of ] .we call a _ regular split _ of whenever we have * * ( or equivalently ) * we write as .observe that the above three conditions are equivalent to the following relation . geometrically , this means that we may compute a little more than ; however , is a `` sharp '' approximation of the intersection of and . next we list the specifications of our triangular decomposition algorithm and its subroutines .we denote by the polynomial ring ] such that for each : is a regular chain ; ; if , then otherwise is regular modulo ; moreover we have . * * input : * , a variable of ; and , polynomials of , whose main variables are both . * * output : * a list of polynomials , where , such that is the -th subresultant of and w.r.t . . * * input : * , a variable of , * * , a regular chain of such that , * * and , polynomials of with the same main variable such that : is regular modulo ; belongs to , * * , the subresultant chain of and w.r.t . . ** output : * a set of pairs ,\ldots,[g_e , t_e]\} ] . * * output : * a set of regular chains such that . * * input : * , a polynomial of with main variable , * * , a regular chain of , where , * * , the subresultant chain of and w.r.t . , * * , a regular chain of ] such that . * * output : * if , return ; otherwise return a set of regular chains such that is regular modulo each , and . * * input : * , is a regular chain of ] * if * * then * return let with greatest main variable ; * if * * or * * then * return * if * * then * return choose a polynomial with maximal rank[ figure : algorithm ] all the algorithms in fig . 1 terminate .the key observation is that the flow graph of fig .1 can be transformed into an equivalent flow graph satisfying the following properties : ( 1 ) the algorithms intersect and regularize only call each other or themselves ; ( 2 ) all the other algorithms only call either intersect or regularize .therefore , it suffices to show that intersect and regularize terminate .note that the input of both functions is a process , say .one can check that , while executing a call with as input , any subsequent call to either functions intersect or regularize will take a process as input such that holds . since a descending chain of processes is necessarily finite , both algorithms terminate . since all algorithms terminate , and following the flow graph of fig .1 , each call to one of our algorithms unfold to a finite dynamic acyclic graph ( dag ) where each vertex is a call to one of our algorithms . therefore, proving the correctness of these algorithms reduces to prove the following two points . *_ base : _ each algorithm call , which makes no subsequent calls to another algorithm or to itself , is correct . 
* _ induction : _ each algorithm call , which makes subsequent calls to another algorithm or to itself , is correct , as soon as all subsequent calls are themselves correct . for all algorithms in fig . 1, proving the base cases is straightforward .hence we focus on the induction steps .[ proposition : intersectfree ] satisfies its specification .we have the following two key observations : * , where are the regular chains in the output of .* then it is not hard to conclude that .[ proposition : intersectalgebraic ] is correct .we need to prove : .let us prove now , that is , for each regular chain in the output , we have .first by the specifications of the called functions , we have , thus , by corollary [ corollary : sqrtsat ] , since is regular modulo both and . secondly ,since is a regular gcd of and modulo , we have by corollaries [ corollary : sqrtsat ] and proposition [ proposition : regular - gcd ] .next we prove .it is enough to prove that holds .since is a regular gcd of and modulo , the conclusion follows from point of proposition [ proposition : regular - gcd ] .finally we prove , that is .let be the regular chains returned from algorithm .we have , which implies .next since is a regular gcd of and modulo , the conclusion follows from point of proposition [ proposition : regular - gcd ] .[ proposition : intersect ] satisfies its specification . the first while loop can be seen as a projection process .we claim that it produces a nonempty triangular set such that .the claim holds before staring the while loop . for each iteration , let be the set of polynomials obtained at the previous iteration .we then compute a polynomial , which is the resultant of a polynomial in and a polynomial in .so .by induction , we have .so the claim holds .next , we claim that the elements in satisfy the following invariants : at the beginning of the -th iteration of the second while loop , we have * each is a regular chain ; if exists , then is regular modulo , * for each , we have , * for each , we have , * when , we then have , for each and so of definition [ definition : split ] all hold .this concludes the correctness of the algorithm .now we prove the above claims , , , by induction . the claims clearly hold when since and .now assume that the loop invariants hold at the beginning of the -th iteration .we need to prove that it still holds at the beginning of the -th iteration .let be an element picked up at the beginning of -th iteration and let be the set of the new elements of generated from .then for any , claim clearly holds by specification of .next we prove . * if , then . by induction and specifications of called functions, we have * if , by induction we have and is regular modulo both and . by corollary [ corollary : sqrtsat ]we have therefore holds .next we prove claim . by induction andthe specifications of called functions , we have secondly , we have . therefore , that is holds . finally , since , we have which implies that holds .this completes the proof . satisfies its specification. if , the conclusion follows directly from point of corollary [ corollary : regular ] . from nowon , assume . let be the set of pairs ] be a pair in the output of .if , the conclusion follows directly from point of corollary [ corollary : regular ] . otherwise , let ] be a pair returned by .when , since holds , we conclude by corollary [ corollary : sqrtsat ] .let \in { { \mbox{{\sf regulargcd}}}} ] , , be the output .first from the specification of regularize , we have . 
when , by proposition [ proposition : equal - dim ] and theorem [ theorem : regulargcd ] , is a regular gcd of and modulo .in this section , we adapt the algorithm triangularize ( algorithm [ algo : triangularize ] ) , in order to compute efficiently a kalkbrener triangular decomposition .the basic technique we rely on follows from krull s principle ideal theorem .let ] .then , for any minimal prime ideal associated with , the height of is less than or equal to .let be a kalkbrener triangular decomposition of .let be a regular chain of , the height of which is greater than .then is also a kalkbrener triangular decomposition of .based on this corollary , we prune the decomposition tree generated during the computation of a lazard - wu triangular decomposition and remove the computation branches in which the height of every generated regular chain is greater than the number of polynomials in .next we explain how to implement this tree pruning technique to the algorithms of section [ sec : incremental ] . inside triangularize , define and pass it to every call to intersect in order to signal intersect to output only regular chains with height no greater than .next , in the second while loop of intersect , for the -th iteration , we pass the height to cleanchain , intersectfree and intersectalgebraic . in intersectfree , we pass its input height to every function call .besides , lines to are executed only if the height of is strictly less than , since otherwise we would obtain regular chains of height greater than . in other algorithms, we apply similar strategies as in intersect and intersectfree .part of the algorithms presented in this paper are implemented in maple14 while all of them are present in the current development version of maple .tables 1 and 2 report on our comparison between triangularize and other maple solvers .the notations used in these tables are defined below .* notation for triangularize .* we denote by tk and tl the latest implementation of triangularize for computing , respectively , kalkbrener and lazard - wu decompositions , in the current version of maple .denote by tk14 and tl14 the corresponding implementation in maple14 .denote by tk13 , tl13 the implementation based on the algorithm of in maple13 .finally , stk and stl are versions of tk and tl respectively , enforcing that all computed regular chains are squarefree , by means of the algorithms in appendix [ sec : squarefree ] .* notation for the other solvers . * denote by gl , gs , gd , respectively the function groebner :- basis ( plex order ) , groebner :- solve , groebner :- basis ( tdeg order ) in current beta version of maple .denote by ws the function wsolve of the package wsolve , which decomposes a variety as a union of quasi - components of wu characteristic sets .the tests were launched on a machine with intel core 2 quad cpu ( 2.40ghz ) and 3.0 gb total memory .the time - out is set as seconds .the memory usage is limited to of total memory . in both table 1 and 2, the symbol `` - '' means either time or memory exceeds the limit we set .the examples are mainly in positive dimension since other triangular decomposition algorithms are specialized to dimension zero .all examples are in characteristic zero . in table 1, we provide characteristics of the input systems and the sizes of the output obtained by different solvers . 
for each polynomial system ] , called the determinant polynomial of , given by let be a set of polynomials of ] with positive degrees and .let .for any , let be the matrix of the polynomials .we define the subresultant of and , denoted by , as note that is a polynomial in ] to $ ] .let and .[ lemma : sk ] for any integer , if , then and does not vanish at the same time .moreover , we have both and .observe that therefore there exists such that and .the conclusion follows .[ lemma : smn ] assume that .then , if , we have * if and , then * if and , then symmetrically , if , we have * if and , then * if and , then we prove and , whose correctness implies and by symmetry .let , then we have therefore so from and , we conclude that . on the other hand , if and , then .now let consider .we have from , we conclude that . from , we conclude that .so on so forth , finally , if and , we deduce that , for all , which implies that ; if and , we deduce that , which implies that .[ lemma : dpol ] let be an integer such that .assume that .if , then we have if , then .therefore we have done .[ theorem : gcd ] we have the following relations between the subresultants and the gcd of and : 1 .let , , be an integer such that and for any , , . then .2 . assume that for all .we have the following cases 1 . if and , then ; symmetrically , if and , then we have 2 . if and but , then we have ; symmetrically , if and but , then we have 3 .if , then let us first prove .w.l.o.g , we assume . from lemma [ lemma :sk ] , we know that . therefore for , we have . by lemma [ lemma : dpol ] , if , we have .if , since , we have so for all , we have . if , we have .so . if , we have . therefore .next we prove . by symmetry , we prove it when . if , it follows directy from lemma [ lemma : smn ] .otherwise , we have . by lemma [ lemma : dpol ] , for all we have that is .since , we deduce that .finally follows directly from lemma [ lemma : smn ] and is obviouly true .throughout this section , we assume that the coefficient field is of characteristic zero .we propose two strategies for computing a squarefree triangular decomposition .the first one is a post - processing which applies algorithm [ algo : squarefree ] to every regular chain returned by algorithm [ algo : triangularize ] .the second consists of ensuring that , each output or intermediate regular chain generared during the execution of algorithm [ algo : triangularize ] is squarefree . to implement the second strategy ,we add an _ squarefree option _ to algorithm [ algo : triangularize ] and each of its subalgorithms . if the option is set to _ true _ , this option requires that each output regular chain is squarefree .this is achieved by using algorithm [ algo : squarefree - poly ] whenever we need to construct new regular chains from a previous regular chain and a polynomial such that is known to be a regular chain .
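As a concrete illustration of the subresultant machinery summarized in this appendix, the small SymPy sketch below computes a pseudo-remainder, the subresultant sequence of two polynomials regarded as univariate in their main variable, and an iterated resultant against a toy triangular set. It is only an illustrative stand-in: the algorithms of this paper are implemented with the RegularChains library in Maple, and the polynomials used here are invented for the example.

```python
from sympy import symbols, prem, resultant, subresultants

x1, x2, x3 = symbols('x1 x2 x3')

# A small triangular set T = {t2, t3} with main variables x2 and x3.
t2 = x2**2 - x1
t3 = x3**2 - x2*x3 + x1
p  = x3**2 + x1*x2*x3 + x2          # a polynomial with main variable x3

# Pseudo-remainder of p by t3, both viewed as univariate in x3.
print(prem(p, t3, x3))

# Subresultant polynomial remainder sequence of p and t3 w.r.t. x3:
# its last nonzero members are the candidates for a regular GCD.
for s in subresultants(p, t3, x3):
    print(s)

# Iterated resultant of p w.r.t. the triangular set (eliminate x3, then x2);
# it is nonzero exactly when p is regular modulo sat(T), cf. the text.
r = resultant(p, t3, x3)
r = resultant(r, t2, x2)
print(r.expand())
```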
|
we propose new algorithms for computing triangular decompositions of polynomial systems incrementally . with respect to previous works , our improvements are based on a _ weakened _ notion of a polynomial gcd modulo a regular chain , which permits to greatly simplify and optimize the sub - algorithms . extracting common work from similar expensive computations is also a key feature of our algorithms . in our experimental results the implementation of our new algorithms , realized with the regularchains library in maple , outperforms solvers with similar specifications by several orders of magnitude on sufficiently difficult problems .
|
the current paradigm of large scale structure formation in the universe is hierarchical , meaning that structures are formed by merger aggregation . the dominant driver for this merger dynamic is thought to be the non - baryonic and non - collisional cold dark matter .therefore , research in large scale structure formation is principally done through the study of dark matter structures , which on large scales are thought to serve as scaffolding for baryonic structures .the dark matter merger process has been usually represented by a tree , keeping the analogy with the genealogical tree of an individual .great effort has been put into the methods of construction of merger trees , as they are a way to understand dark matter aggregation and are a necessary input for codes of semi - analytic galaxy formation .the methods range from the analytical approach of monte - carlo methods ( for a review ) passing through hybrid approaches that mix numerical realizations of a density field with analytical approximations for its evolution to the fully numerical approach that identifies the dark matter halos from different snapshots in a -body simulation to construct the merger history .usually , the validity of the trees constructed in the analytical and hybrid way is stated from comparisons with trees constructed from numerical simulations .unfortunately , the quantities used to compare trees from two approaches usually sacrifice the complexity inherent to the tree structure in the sake of simple tests . the most common simplification is to select one branch of the tree ( the most massive ) to make the analysis .another approach , measures the abundance of structures of a given mass among the halos in all the merger tree .in all these cases the geometrical information is suppressed , mostly because of the lack of simple structures to describe that kind of information . in this paperwe present a new way , in the astrophysical context , to translate the geometrical information from a merger tree into a 1-dimensional structure .the translation is based on the encoding of the tree information into its _contour walk_. we apply this description to the merger trees from a large dark matter numerical simulation , the millennium run .we use its public database to select halos in different bin masses at redshift to extract its merger histories and build the contour walk .we analyze each tree in terms of two simple statistics extracted from these walks .we have also performed this kind of analysis on merger trees obtained with the algorithm described by , using the source code they kindly made public . with this codewe have constructed trees with two resolutions for the minimum mass halo , one mimicking the millennium run , and the other with nearly times lower resolution .this paper is divided as follows . in section 2we explain the construction of a contour walk from a merger tree . in section 3we present the simulation that produced the public data we used in this paper and the monte carlo code for the construction of markovian merger trees . in section 4we perform an immediate implementation of these concepts to the available merger trees .we calculate global statistics from the walks , and discuss its possible physical meaning . in the last section we discuss our results and suggest how the tool we have proposed can be used to tackle more complex questions about merger trees .the merger trees in the current dark matter paradigm are simple , as only merging is allowed . 
from the mathematical point of viewthis tree structure corresponds to a galton - watson tree .galton - watson trees are genealogical trees of a population that evolves according to very simple rules . at generation , the population starts with one individual , called the ancestor .then each individual in the population has , independently of the others , a random number of children according to an offspring probability distribution . in the structure formation contextwe identify the ancestor with the halo at redshift zero , and the offsprings with the parents of the halo .these trees can be coded by a discrete path around the contour of the tree .this _ contour process _ is easy to visualize .we imagine the displacement of a particle that starts at time zero from the root of the tree ( a halo at redshift zero ) and then visit the first not yet visited parent halo , if any , and if none , the successor of the halo .we do so until all the members in the tree have been explored and the particle has come back to the origin .this process is illustrated in the fig.[contorno ] . where the different values of correspond to the discrete snapshots for which we have the halo information , and is the imaginary time giving the pace of the particle around the tree .we point out that one has to define some order between the halos at a given point in the tree in order to define a unique way to walk the tree , a criteria to decide to which parent should jump the particle .this ordering criterium in our case is naturally imposed by the masses of the halos , as we always visit first the most massive progenitor .we can express this as visiting all the halos in the tree in a depth - first fashion , visiting first the most massive branch at every time .( horizontal lines ) , time flowing from top to bottom .we start from the last halo at the bottom , and visit the halos in the tree at unit timesteps .the most massive progenitor is always visited first .we go around until we go back to the initial point .the bottom part of the figure shows the contour walk , its values correspond to the time where the particle was located at time .the tree structure can be recovered completely from the contour walk as indicated by the solid lines in the lower panel . ]the millennium run ( mr hereafter ) was carried out by the virgo consortium in 2004 at the max planck s society s supercomputer center in garching .it is a dark matter only simulation containing particles , it was evolved from to the present - day inside a cubic region of mpc h on a side , the individual particle mass is m .it adopted , for the current densities of dark matter and baryons in a lambda cold dark matter cosmological model , furthermore it used for the present dimensionless value of the hubble constant , for the _ rms _ linear mass fluctuation in a sphere of radius extrapolated to , and for the slope of the primordial fluctuation spectrum .the simulation was run using the tree - pm n - body code gadget2 , 64 outputs were stored at times spaced approximately equally in the logarithm of the expansion factor at early times and at approximately myr intervals after . 
at run timeall collapsed halos with at least particles ( m ) were identified using a friends - of - friends ( fof ) group - finder with linking length .post processing with the substructure algorithm subfind allowed a detection and measurement of the resolved sub - halos .this in turn allowed trees to be built with detailed assembly histories for every object and its substructure .the postprocessed data have been publicly disseminated through an interactive database : http://www.g-vo.org/millennium/mydb .the particular structure of the database design which allows efficient querying for merger trees was implemented by .conveniently enough , the structure for merger trees in the database is based on a depth - first ordering , making that the output of a tree query is an incomplete version of the contour walk in terms of the lower panel in fig.[contorno ] , only the upward arrows offspring - progenitor exist .the authors recently proposed and algorithm for the construction of merger trees based on a markovian approach .this approach to merger trees means that any halo of a given mass at time has a progenitor probability distribution depending only on and .this scheme is explicitly independent of the large scale environment that could be defined for a halo .their merger trees are parameterized by a time variable and a mass variable , where and is the cosmological linear growth rate and is the variance of the initial density fluctuation field , linearly extrapolated to and smoothed using a window function that corresponds to a mass .these two variables are also the natural variables in the extended press - schechter formalism .their approach is the following .first , from the mr data they find the conditional probabilities for the masses of the main progenitors at a past time as a function of halo mass at a present time . with thisconditional probabilities they build the main progenitor history which by construction reproduce to a good extent the mr data .then , they extended this approach to the construction of a full merger tree .the extension included some additional heuristic rules based again on the premise of a fair match to the total progenitor number density from the millennium run data .nevertheless , the lack of a true physical motivation can not ensure the reconstruction of the correct full joint distribution of progenitors .even if n - body merger trees are not markovian , the proposed algorithm manages to reproduce some tree properties , specially those related with the main progenitor but also the total mass distribution in all the progenitors .the detected inaccuracies come from the fact that the estimation of the average mass for the second progenitor is not reliable . in spite of that, we have decided to use these author s public available code given its explicit effort to reproduce the mr data .we make use of two kind of trees .trees extracted from the mr public data base and trees constructed with the public code implementing the markovian approach . in both cases we concentrate on halos with masses greater than . 
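As a concrete illustration of the contour-walk construction of section 2, the following Python sketch builds the walk for a hand-made toy merger tree before we turn to the actual halo samples. The node snapshots and masses are invented for the example and are not Millennium data; the only convention borrowed from the Millennium numbering is that snapshot numbers increase toward redshift zero. Progenitors are visited depth-first, most massive first, and the snapshot of the halo the particle sits on is recorded at every unit step.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Halo:
    snap: int                       # snapshot number; larger means later time
    mass: float
    progenitors: List["Halo"] = field(default_factory=list)

def contour_walk(root):
    """Contour walk of a merger tree as a list of snapshot numbers: start at
    the root (the z=0 halo), always climb to the most massive not-yet-visited
    progenitor first, record the snapshot at every unit step, and come back."""
    walk = [root.snap]
    for prog in sorted(root.progenitors, key=lambda h: h.mass, reverse=True):
        walk.extend(contour_walk(prog))
        walk.append(root.snap)      # step back down to the current halo
    return walk

# A toy tree: the final halo (snapshot 63) built from two progenitors, one of
# which itself has two progenitors.  All numbers are made up.
tree = Halo(63, 10.0, [
    Halo(62, 7.0, [Halo(61, 4.0), Halo(61, 2.5)]),
    Halo(62, 2.0),
])

w = contour_walk(tree)
print(w)                            # [63, 62, 61, 62, 61, 62, 63, 62, 63]
print("walk length:", len(w))
```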
for the mr trees we selected all the halos in the simulation box with a given mass m at redshift , for different values of in bins of width dex, where is the halo mass measured within the radius where the halo has an overdensity times the critical density of the simulation. the bins are spaced by dex; the least massive bin corresponds to and has dark matter halos, and the most massive bin corresponds to and has dark matter halos. we also performed measurements for the most massive bins with , which provided us with nearly five times more halos per bin. the results we obtained in that case are basically the same as in the case . in this paper we only report and discuss the results for the selection with the smaller sample. for the markovian trees we made two different runs changing the minimal halo mass. the first one mimicked the mr and its minimal mass was m . the second run used a higher minimal mass of m . we will refer to these runs as the high resolution and low resolution runs, respectively. in both cases we also constructed merger trees for halo mass bins spaced by dex. the mass bins for the high resolution run are the same as in the mr case; for the low resolution run we can only describe trees from the bin up to . for each bin we construct trees, each one with steps in with , which is equivalent to having trees described from redshifts . the snapshots in the dark matter simulation will be labeled by , where ranges from to . the snapshot corresponds to . we select a halo in the snapshot to extract its merger tree. the contour walk, as described in section [sec:trees], can be visualized as a dynamical process of a particle going around the tree at unit time-steps, stopping at each node in the tree and recording the snapshot to which it belongs. the discrete variable counting the imaginary time of the particle walking the tree is denoted . we write the walk as a sequence of discrete values , corresponding to different values of the discrete intermediary times ; is the total length of the walk, and every can take values from the possible snapshots. in the case of the markovian trees the values the walk can take vary from to , consistent with the fact that we have described these trees at points equally spaced by . for each merger tree we compute two statistics: its length and action. the length corresponds to the number of points in the walk. the action, which will be defined later, is a statistic based on the first derivatives of the walk. fig.[examples] offers a feel for the merger walks of halos in four different mass bins. figure [longhalo] shows the walk length for halo merger trees as a function of halo mass. at low masses, , where the growth should be dominated by mass accretion and not by mergers; at high masses, , where the growth starts to be controlled by mergers. the transitional scale corresponds to halos for a halo mass . this transitional scale should depend on the resolution of the dark matter simulation. when smaller halos are resolved, the merger trees will be populated with more branches of these lower mass halos, and one could start to see that all the growth is done through mergers. nevertheless, if we intend to study galaxy formation using merger trees, this suggested transitional scale may have some significance. in the favored paradigm of galaxy formation, not every halo can harbor galaxies at a cosmic epoch.
only halos that can cool the gas efficiently may hold star forming galaxies. a lower limit is imposed by the uv background from star formation, which sets a low baryonic fraction in halos of masses below , which is roughly the minimal resolution for halo detection in the millennium run. if one aims to study galaxy formation, the uv background naturally sets a minimal mass for the progenitors that should be included in the merger history of a dark matter halo. in any case, we decided to explore the influence of the resolution using the results from the monte-carlo code. in the case of merger trees constructed with a resolution mimicking the mr we find again a transitional scale (fig.[longhalo_markov]) around . the biggest difference in comparison with the -body results is in the exponents describing the length of the walk as a function of halo mass. in the monte-carlo case both exponents are closer to . this result would favor a view in which there is still room left for a better description of smooth accretion in the markov description. [caption of fig.[longhalo]: _upper panel_: walk length for halo merger trees extracted from the millennium run as a function of the logarithm of the final halo mass. the solid lines show two different power law trends for the relation, and . the intersection of the two lines is located at . the dashed line simply passes through the two extreme mass points in the plot. _lower panel_: the same results as in the upper panel, but this time the measured values are normalized to the functional dependence of the dashed line in the upper panel. this enhances the features in the curve and allows an easier determination of the transitional mass-scale.] [caption of fig.[longhalo_markov]: _upper panel_: same as fig.[longhalo] for merger trees constructed with the markovian algorithm. black symbols show the high resolution run, and white symbols the low resolution run. the lines show again two different power law trends for the relation, and ; this is steeper than the found in the millennium merger trees. _lower panel_: the transitional mass-scale is roughly located at , with a weak dependence on the run's resolution. the dispersion for the measurements from the markovian trees is noticeably larger than for the measurements coming from millennium run data.] we extend the statistical description of the contour walk by quantifying its waviness. we borrow from statistical mechanics the concept of action, which is used in the numerical calculation of path integrals. the action can be defined as the potential energy invested in bending the path, compared with a characteristic temperature. if there is a walk defined by points , the action is usually defined as , where plays the role of a temperature or imaginary time depending on the context. if we apply that definition to our case: as we have unit steps, is equal to , making the action equal to the walk length. therefore, we decide to define a normalized action only from the extreme points in the contour walk. we take these extreme points, , as a sample of the contour walk where the derivative at point , defined as , is equal to zero. we denote the set of times corresponding to the vanishing points of the derivative as (as in ). we now define the action on the sampled walk as , which makes the action equal to the walk length divided by the number of extreme points. in the context of merger trees, each peak in the walk corresponds to a branch in the tree, making the contribution to the action roughly proportional to the length of that branch. therefore, the normalized action can be loosely interpreted as a proxy for the mean length of one branch in the tree. in fig.[actionhalo] we show the normalized action as a function of halo mass for the mr and the monte-carlo code. for mr trees we find a distinction between the behavior of the action for low and high mass halos. this time the two mass extremes share the same action values, meaning that we can identify the same mean branch length for the two extreme masses. for intermediate masses the action increases. in the case of the mr trees, the action achieves a maximum for a mass scale m . for monte-carlo trees the maximum takes the form of a broader plateau ranging over almost two orders of magnitude between m and m . perhaps the most distinguishing factor between the mr and the markovian approach is that the dispersion in the latter is much higher than in the former, a fact that can be interpreted as a higher variability in the geometry of markovian trees. we introduced from the mathematical literature the contour process of a tree, and we applied this concept to the description of merger trees. we used a large dark matter simulation (the millennium run) and a monte-carlo code (implementing a markovian approach) to obtain merger trees in these two approximations. furthermore, the markovian trees were obtained with two different values for the minimal mass of a parent halo. one resolution mimicked exactly the mr, and the other resolution had a times more massive minimal halo mass. we refer to these markovian runs as the high resolution and low resolution runs respectively. we extracted simple statistics from these walks: the length (proportional to the total number of halos in the tree) and the action (which can be loosely interpreted as the mean length of a branch). we report our results for walk length and action emphasizing their evolution as a function of the physical halo mass, and not their absolute values.
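for concreteness, the two statistics can be computed from a walk as in the following minimal sketch (our own illustration, not the analysis code behind the measurements); it assumes the derivative at an interior point is taken as the central difference of the walk, whose zeros mark the peaks and troughs.

```python
import numpy as np

def walk_statistics(walk):
    """length and normalized action of a contour walk (a 1-d sequence of snapshot indices)."""
    w = np.asarray(walk)
    length = len(w)
    central_diff = w[2:] - w[:-2]                  # derivative at interior points
    n_extreme = np.count_nonzero(central_diff == 0)
    action = length / n_extreme if n_extreme else float('inf')
    return length, action

# toy walk from the earlier sketch: 7 points and 3 extreme points
print(walk_statistics([63, 62, 61, 62, 63, 62, 63]))   # -> (7, 2.333...)
```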
from the length and the action, we found in the millennium run a transitional mass scale at m . in the case of the walk length , this transitional mass-scale marks the change between a dependence for low halo masses and for high mass halos, where is the mass of the final dark matter halo in the tree. with the action, the scale marks the highest value of the action as a function of the halo mass. for the markovian trees the dependence of the walk length on the halo mass is almost the same everywhere, although much steeper than the dependence found in the millennium run. nonetheless, we found the same transitional scale from the length statistics for the two resolutions. this does not prove that the transitional scale is independent of the resolution mass (in fact, it should be dependent) but suggests that the scale does not have a strong dependence on the minimal mass resolution used to describe the merger tree. the evolution of the action as a function of halo mass is completely different from the mr case. the markovian run mimicking the mr resolution does not show a sharp transitional scale; instead it shows a large plateau ranging over two orders of magnitude in mass. perhaps the biggest difference between the two ways of constructing the trees is that the markovian approach always shows a larger dispersion in its contour walk statistics, which seem best defined in the millennium run, judging from its low dispersion. this could in fact be interpreted as a higher geometrical variability in the markovian trees. we have shown how simple statistics from the contour walk can give a new handle on the description of merger trees, exploring the geometrical information of merger trees. even if the length of the walk could have been obtained without an intermediating contour process, the approach of extracting information from higher order statistics, as was the case for the normalized action, can be extended in complexity. for instance, it is relevant to point out that the information on the mass in each node is also encoded in the contour walk, since the mass information is used as an ordering criterion to walk the tree. one could define sections delimited by , i.e. the points where the walk touches the next to last snapshot, and the _length ratio_ of the sections delimited by these points would include information on the _mass ratio_ of the mergers at time . in general the study of crossing paths, defined as the sections of the contour walks within some boundary, might produce useful statistics to further classify the complex merger trees of massive halos. we thank jérémy blaizot (jb) and gerard lemson (gl) for early discussions, motivation and criticism around the ideas presented here. we thank jb for his suggestion to point the discussion towards a comparison with markovian merger trees. we thank again gl for his amazing day-to-day work on the public millennium database. we also thank the authors neistein and dekel for making public a well documented and easy to use code for merger tree construction. the necessary infrastructure to develop this work was provided in the framework of the horizon project (france). lemson g., springel v., 2006, in gabriel c., arviset c., ponz d., enrique s., eds, astronomical data analysis software and systems xv, vol. 351 of astronomical society of the pacific conference series, cosmological simulations in a relational database: modelling and storing merger trees, pp 212+ springel v., white s. d. m., jenkins a., frenk c. s., yoshida n., gao l., navarro j., thacker r., croton d., helly j., peacock j. a., cole s., thomas p., couchman h., evrard a., colberg j., pearce f., 2005, nature, 435, 629
|
we introduce the contour process to describe the geometrical properties of merger trees. the contour process produces a one-dimensional object, the contour walk, which is a translation of the merger tree. we portray the contour walk through its length and action. the length is proportional to the number of progenitors in the tree, and the action can be interpreted as a proxy of the mean length of a branch in a merger tree. we obtain the contour walk for merger trees extracted from the public database of the millennium run and also for merger trees constructed with a public monte-carlo code which implements a markovian algorithm. the trees correspond to halos of final masses between m and m . we study how the length and action of the walks evolve with the mass of the final halo. in all the cases, except for the action measured from markovian trees, we find a transitional scale around m . as a general trend the length and action measured from the markovian trees show a large scatter in comparison with the case of the millennium run trees. [ firstpage ] cosmology : theory , cosmology : dark matter
|
during the last years impressive progress was achieved in the understanding of the classical and quantum capacities of quantum communication channels ( see , in particular , the papers - , where the reader can also find further references ) .it appears that a quantum channel is characterized by a whole variety of different capacities depending both on the kind of the information transmitted and the specific protocol used .most of this literature studies the properties of systems and channels described in finite dimensional hilbert spaces .recently , however , there has been a burst of interest ( see e. g. ) in a new kind of systems , sometimes called `` continuous variable '' quantum systems , whose basic variables satisfy heisenberg s canonical commutation relations ( ccr ) .there are two reasons for this new interest . on the one hand , such systems play a central role in quantum optics , the canonical variables being the quadratures of the field .therefore some of the current experimental realizations of quantum information processing are carried out in such systems . in particular , the bosonic gaussian channels studied in this paper can be seen as basic building blocks of quantum optical communication systems , allowing to build up complex operations from `` easy , linear '' ones and a few basic `` expensive , non - linear '' operations , such as squeezers and parametric down converters .the other reason for the interest in these systems is that in spite of the infinite dimension of their underlying hilbert spaces they can be handled with techniques from finite dimensional linear algebra , much in analogy to the finite dimensional quantum systems on which the pioneering work on quantum information was done . roughly speaking this analogy replaces the density matrix by the covariance matrix of a gaussian state. then operations like the diagonalization of density matrices , the schmidt decomposition of pure states on composite systems , the purification of mixed states , the computation of entropies , the partial transpose operation on states and channels , which are familiar from the usual finite dimensional setup , can be expressed once again by operations on finite dimensional matrices in the continuous variable case .the basic framework for doing all this is not new , and goes under heading `` phase space quantum mechanics '' or , in the quantum field theory and statistical mechanics communities , `` quasi - free bose systems '' .both authors of this paper have participated in the development of this subject a long time ago . in this paper , continuing and , we make further contributions to the study of information properties of linear bosonic gaussian channels .we focus on the aspects essential for physical computations and leave aside a number of analytical subtleties related to infinite dimensionality and unboundedness unavoidably arising in connection with bosonic systems and gaussian states .the paper is organized as follows . 
in the sectionii we recapitulate some notions of capacity , which are currently under investigation in the literature , and what is known about them .naturally this can not be a full review , but will be limited to those quantities which we will evaluate or estimate in the subsequent sections .a new addition to the spectrum of capacity - like quantities is discussed in subsection ii.b : an upper bound on the quantum capacity ( even allowing finite errors ) , which is both simple to evaluate and remarkably close to maximized coherent information , a bound conjectured to be exact . in sectioniii we summarize the basic properties of gaussian states . although our main topic is channels , we need this to get an explicit handle on the purification operation , which is needed to compute the entropy exchange , and hence all entropy based capacities .bosonic gaussian channels are studied in section iv .here we introduce the techniques for determining the capacity quantities introduced in section i , deriving general formulas where possible . in the final sectionv we apply these techniques to the case of a single mode channel comprising attenuation / amplification and a classical noise .some technical points are treated in the appendices .consider a general quantum system in a hilbert space .its states are given by density operators on .a _ channel _ is a transformation ] on the observables , which is defined by the relation ={\rmtr}\rho t^*[x],\ ] ] and has to be completely positive and unit preserving ( cf . ) . it can be shown ( see e.g. ) that any channel in this sense arises from a unitary interaction of the system with an environment described by another hilbert space which is initially in some state , ={\rm tr}_{e}u\left ( \rho \otimes \rho_{e}\right ) u^*,\ ] ] where denotes partial trace with respect to , and vice versa .the representation is not unique , and the state can always be chosen pure , .the definition of the channel has obvious generalization to the case where input and output are described by different hilbert spaces .let us denote by the von neumann entropy of a density operator .we call the input state , and ] , and the entropy exchange .while the definition and the meaning of the first two entropies is clear , the third quantity is somewhat more sophisticated . to define it ,one introduces the _ reference system _ , described by the hilbert space , isomorphic to the hilbert space of the initial system .then according to , , there exists a _purification _ of the state , i.e. a unit vector such that the _ entropy exchange _ is then defined as \bigr),\ ] ] that is , as the entropy of the output state of the dilated channel applied to the input which is purification of the state .alternatively , where ] .in particular , for any channel .we also note another kind of capacity , in which a much weaker requirement is made on the errors , namely for all sufficiently large , and some fixed .we call the resulting capacity the _ -quantum capacity _ , and denote it by . of course , , and by analogy with the classical case ( strong converse of shannon s coding theorem ) one would conjecture that equality always holds .the unassisted classical capacity can be defined similarly with the sole difference that both the domain of encodings and the range of decodings should be restricted to the state space of the abelian subalgebra of operators diagonalizable in a fixed orthonormal basis . in that casethere is no need to use the cb - norm , as it coincides with the usual norm . 
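as a numerical illustration of the three entropies and of the coherent information just defined, here is a minimal sketch for a standard qubit depolarizing channel (chosen only because it is easy to write down; it is not one of the channels studied below). the kraus representation of the depolarizing channel and the purification of the maximally mixed state by a maximally entangled vector are standard facts, and all other names are our own.

```python
import numpy as np

def entropy(rho):
    """von neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

# qubit depolarizing channel with error probability p
p = 0.25
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1., -1.])
kraus = [np.sqrt(1 - p) * I2] + [np.sqrt(p / 3) * P for P in (X, Y, Z)]

rho = np.diag([0.5, 0.5])                          # input state (maximally mixed)

# entropy exchange: purify rho with a reference system, apply id (x) T, take the entropy
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)          # purification of rho on reference (x) system
rho_ref_sys = np.outer(psi, psi.conj())
rho_out_ext = apply_channel([np.kron(I2, K) for K in kraus], rho_ref_sys)

S_in, S_out, S_exch = entropy(rho), entropy(apply_channel(kraus, rho)), entropy(rho_out_ext)
print(S_in, S_out, S_exch, S_out - S_exch)         # the last number is the coherent information
```

the combination S_in + S_out - S_exch, evaluated the same way, gives the entanglement-assisted quantity discussed above.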
according to recently proven strong converse to the quantum coding theorem , , where is defined similarly to .the criterion we will formulate makes essential use of the transpose operation , which we will denote by the same letter in any system . for matrix algebras, can be taken as the usual transpose operation .however , it makes no difference to our considerations , if any other anti - unitarily implemented symmetry ( e.g. time - reversal ) is chosen . in an abstract c*-algebra setting best taken as the `` op '' operation , which maps every algebra to its `` opposite '' .this algebra has the same underlying vector space , but all products are replaced by their opposite .obviously , a commutative algebra is the same as its opposite , so on classical systems is the identity .although the transpose maps density operators to density operators , it is not an admissible quantum channel , because positivity is lost , when coupling the operation with the identity transformation on other systems , i.e. , is not _completely _ positive .a similar phenomenon happens for the norm of : we have unless the system is classical .in fact , where denotes the transposition on the -matrices .we note that since we do not distinguish the transpose on different systems in our notation , the observation that tensor products can be transposed factor by factor is expressed by the equation .moreover , although for a channel , the operator may fail to be completely positive , is again a channel , and , in particular , satisfies . the main result of this subsection is the estimate for any channel .the proof is quite simple .suppose , and encoding and decoding are as in the definition of . then by equation ( [ cbthetan ] ) we have where at the last inequality we have used that and are channels , and that the cb - norm is exactly tensor multiplicative , so .hence , by taking the logarithm and dividing by , we get if we take base logarithms , as is customary in information theory , we have .then in the last inequality we can go to the limit , obtaining , and equation ( [ cbn - bound ] ) follows by taking the supremum over all attainable rates .note that base logarithms are built into the above definition of capacity , because we are using the ideal qubit channel as the standard of comparison .this amounts only to a change of units .if another base is chosen for logarithms is chosen , this should also be done consistently in all entropy expressions , and equation ( [ cbn - bound ] ) holds once again without additional constants . the upper bound computed in this way has some remarkable properties , which make it a capacity - like quantity in its own right .for example , it is exactly additive : for any pair of channels , and satisfies the `` bottleneck inequality '' .moreover , it coincides with the quantum capacity on ideal channels : , and it vanishes whenever is completely positive .in particular , , whenever is _ separable _ in the sense that it can be decomposed as into a measurement and a subsequent preparation based on the measurement results .this follows immediately from the observation that on classical systems transposition is the identity .then is a channel , and so is .we note that is also closely related to the entanglement quantity , i.e. 
, the logarithm of the trace norm of the partial transpose of the density operator , which enjoys analogous properties .in this section we recapitulate some results from , , for the convenience of the reader .our approach to quantum gaussian states is based on the characteristic function of the state which closely parallels classical probability , , and is perhaps the simplest and most transparent analytically .an alternative approach can be based on the wigner `` distribution function '' .let be the canonical observables satisfying the heisenberg ccr =i\delta_{jk}\hbar i,\;\;[q_{j},q_{k}]=0,\;\;[p_{j},p_{k}]=0.\ ] ] we introduce the column vector of operators ^{t},\ ] ] the real column -vector ^{t} ] is a column vector and ] , respectively .we will make use of following key properties : \(a ) the dual of a linear bosonic channel transforms any polynomial in the operators into a polynomial in the of the same order , provided the function has derivatives of sufficiently high order .this property follows from the definition of moments by differentiating the relation ( [ linbos ] ) at the point .\(b ) a gaussian channel transforms gaussian states into gaussian states .this follows from the definition of gaussian state and the relation ( [ ce ] ) .\(c ) linear bosonic channels are covariant with respect to phase space translations .that is if is a shift of by , ] are ( nondegenerate ) gaussian density operators , hence their logarithms are quadratic polynomials in the corresponding canonical variables ( see appendix in ) . by property ( a ) the expression in curly brackets in ( [ h2 ] ) is again a quadratic polynomial in , that is a linear combination of the constraint operators in .therefore , the sufficient condition ( [ usl ] ) in the appendix is fulfilled and achieves its maximum at the point .this theorem implies that the maximum of over a set of density operators defined by arbitrary constraints on the first and second moments is also achieved on a gaussian density operator . in particular , for an arbitrary quadratic hamiltonian the maximum of over states with constrained mean energy achieved on a gaussian state .the energy constraint is linear in terms of the correlation matrix : where is the diagonal energy matrix ( see ) .when and are gaussian , the quantities , ),h(\rho , t) ] is given by formula ( [ abs ] ) with replaced by computed via ( [ trans ] ) , and where \nonumber\\ \beta&=&\delta \sqrt{-(\delta ^{-1}\alpha ) ^{2}-i/4 } \nonumber\end{aligned}\ ] ] is computed by inserting ( [ chan ] ) into where are the ( unchanged ) canonical observables of the reference system . alternatively , the entropy exchange can be calculated as the output entropy ) ] of states in the schrdinger picture both have the characteristic function &v(z)= { \rm tr}\rho v(kz ) \times\nonumber\\&&\quad\times \exp \left[-\frac{\hbar}{2}\bigl ( |k^{2}-1|/2+n_{c}\bigr)\ , |z|^{2}\right].\end{aligned}\ ] ] let the input state of the system be the elementary gaussian with characteristic function ( [ one - mode - cf ] ) .then the entropy of is . from ( [ atten ] )we find that the output state ] as functions of for is clear from figure 3 .for all the coherent information )-h(\rho , t) ] , and the supremum is taken over all _ gaussian _ input states . 
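as a rough numerical companion to the one-mode formulas, the following sketch assumes only two relations that can be checked against the expressions above: a one-mode gaussian (thermal-like) state with mean photon number n has entropy g(n) = (n+1) log2(n+1) - n log2(n), and the channel with attenuation/amplification k and classical noise n_c maps mean photon number n to k^2 n + max(0, k^2 - 1) + n_c. the numeric values are purely illustrative.

```python
import numpy as np

def g(n):
    """entropy (in bits) of a one-mode gaussian state with mean photon number n."""
    return (n + 1) * np.log2(n + 1) - n * np.log2(n) if n > 0 else 0.0

def output_photons(n, k, nc):
    """assumed output mean photon number: k^2 n + max(0, k^2 - 1) + nc."""
    return k**2 * n + max(0.0, k**2 - 1.0) + nc

n, k, nc = 5.0, 0.8, 0.1                  # illustrative input power, attenuation, classical noise
print(g(n), g(output_photons(n, k, nc)))  # input and output entropies

# for the pure classical-noise channel (k = 1) the gaussian one-shot capacity discussed in
# the text reduces to a difference of two such entropies (the value obtained with gaussian
# encodings):
print(g(n + nc) - g(nc))
```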
since the coherent information increases with the input power , we obtain which is in a good agreement with the upper bound ( [ 1modbound])(see figure 4 ) .there is a useful lemma in classical information theory which gives necessary and sufficient conditions for the global minimum of a convex function of probability distributions in terms of the first partial derivatives .the lemma is based on general kuhn - tucker conditions and can be generalized to functions depending on density operators rather than probability distributions .let be a convex function on the set of density operators , and a density operator .in order to achieve minimum on it is necessary and sufficient that for arbitrary density operator the convex function of the real variable achieves minimum at .for this , it is necessary and sufficient that where , and is the directional derivative of in the direction , assuming that the derivatives exist . if , then , where .therefore it is necessary and sufficient that ( [ nsc ] ) holds for pure .if for small negative , then we say that the direction is _ inner_. in that case ( [ nsc ] ) takes the form if is nondegenerate , then the direction is inner for arbitrary pure in the range of , and the necessary and sufficient condition for the minimum is that ( [ nsp ] ) holds for arbitrary such .let be a collection of selfadjoint _constraint operators_. assume that for some real constants it follows that the convex function achieves minimum at the point , hence the function achieves minimum at the point under the constraints .let us consider ccr with one degree of freedom described by one mode annihilation operator , and consider the transformation where is a complex random variable with zero mean and variance .this is a transformation of the type ( [ chan ] ) with , which describes quantum mode in classical gaussian environment .the action of the dual channel is = \int f(a+z,(a+z)^{\dagger } ) \mu_{n_{c}}(d^{2}z),\ ] ] where is now complex variable , and is complex gaussian probability measure with zero mean and variance , while the channel itself can be described by the formula =\int d(z)\rho d(z)^*\mu_{n_{c}}(d^{2}z ) , \label{spn}\ ] ] where is the displacement operator .the entanglement - assisted classical capacity of the channel ( [ spn ] ) was first studied in by using rather special way of purification and the computation of the entropy exchange . a general approach following the method of described in sections iv - v ; here we give an alternative solution based on the computation of the environment entropy . for thiswe need to extend the environment to a quantum system in a pure state .consider the environment hilbert space with the vector given by the function identically equal to 1 .the tensor product can be realized as the space of -square integrable functions with values in .define the unitary operator in by then ={\rm tr}_{{\cal h}_{e}}u\left ( \rho \otimes ={\rm tr}_{{\cal h}}u\left ( \rho \otimes |\psi_{0}\rangle\langle\psi_{0}|\right ) u^*.\ ] ] this means that ] is (z_{1},z_{2 } ) = \int \left .v(z_{1},z_{2})k(\check{z},z)\right|_{\check{z}=z } \mu_{n_{c}}(d^{2}z),\ ] ] where acts on as a function of the argument . 
evaluating the gaussian integral, we obtain that it is equal to , \ ] ] ( where now ) , which is gaussian characteristic function with the correlation matrix .\ ] ] thus .\ ] ] by using pauli matrix , we can write it as & & = \\ = \frac{1}{2}\left [ \begin{array}{cc } i & 0 \\ 0 & \sigma_{y } \end{array } \right ] & & \left [ \begin{array}{cc } -i\sigma_{y}n_{c } & \sigma_{y}\frac{d^{2}}{n_{c } } \\ -\sigma_{y}n_{c } & -i\sigma_{y}n_{c } \end{array } \right ] \left [ \begin{array}{cc } i & 0 \\ 0 & \sigma_{y } \end{array } \right ] , \end{aligned}\ ] ] hence the absolute values of the eigenvalues of are the same as that of the matrix , \ ] ] which coincide with ( [ eig ] ) in the case .99 c. h. bennett , p. w. shor , `` quantum information theory , '' ieee trans . on inform. theory , * it-44 * , n6 , pp .2724 - 2742 , 1998 .a. s. holevo , `` coding theorems for quantum channels , '' , no.4 , 1998 .lanl report no .quant - ph/9809023 . h. barnum , m. a. nielsen , b. schumacher , `` information transmission through noisy quantum channels , '' , vol . *a57 * , pp .4153 - 4175 , 1998 .lanl report no .quant - ph/9702049 .c. adami and n. j. cerf,``capacity of noisy quantum channels , '' , vol . * a56 * , pp . 3470 - 3485 , 1997; lanl report no .quant - ph/9609024 .h. barnum , e. knill , m. a. nielsen , `` on quantum fidelities and channel capacities , '' lanl report no .quant - ph/9809 . to appear in ieee trans .on inform . theory . c. h. bennett , p. w. shor , j. a. smolin , a. v. thapliyal , `` entanglement - assisted classical capacity of noisy quantum channel,''lanl report no .quant - ph/9904023 .s. l. braunstein , `` squeezing as an irreducible resource '' , lanl report no .quant - ph/9904002 .a. furusawa , j. srensen , s. l. braunstein , c. fuchs , h. j. kimble , e. s. polzik , science , vol.*282 * , 706 , 1998 .b. demoen , p. vanheuverzwijn , a. verbeure , `` completely positive quasi - free maps on the ccr algebra , '' _ rep ._ , vol.*15 * , pp .27 - 39 , 1979 .a. s. holevo , `` some statistical problems for quantum gaussian states , '' , vol .* it-21 * , no.5 , pp .533 - 543 , 1975 .a. s. holevo , , chapter 5 , north - holland , 1982 .r. f. werner , `` quantum harmonic analysis on phase space , '' , vol . *25 * , pp .1404 - 1411 , 1984 .a. s. holevo , m. sohma and o. hirota , `` the capacity of quantum gaussian channels , '' , vol .* 59 * , n3 , pp .1820 - 1828 , 1998 .a. s. holevo , `` sending quantum information with gaussian states , '' lanl report no .quant - ph/9809022 . to appear in proc .qcm-98 , ed . by m. dariano , o.hirota , p. kumar .a. s. holevo,``towards the mathematical theory of quantum communication channels , '' , vol .* 8 * , no.1 , pp .63 - 71 , 1972 .k. kraus , states , effects and operations , _ lect .notes phys ._ , vol . * 190 * , 1983 .g. lindblad , `` quantum entropy and quantum measurements , '' , vol . * 378 * , quantum aspects of optical communication , ed .by c. benjaballah , o. hirota , s. reynaud , pp.71 - 80 , 1991 .p. w. shor et al , in preparation .s. lloyd , `` the capacity of the noisy quantum channel , '' , vol . * 56 * , pp . 1613 , 1997 .d. p. divincenzo , p. w. shor , j. a. smolin , `` quantum - channel capacity of very noisy channels , '' , vol . * 57 * , pp.830 - 839 , 1998 , lanl report no .quant - ph/9706061 .v. i. paulsen , , longman scientific and technical 1986 d. aharonov , a. kitaev , and n. nisan , `` quantum circuits with mixed states , '' lanl report no .quant - ph/9806029 .t. ogawa , h. 
nagaoka , `` strong converse to the quantum channel coding theorem , '' lanl report no .quant - ph/9808063 to appear in ieee trans . on inform .theory .a. winter , `` coding theorems and strong converse for quantum channels , '' to appear in ieee trans . on inform . theory .r. simon , m. selvadoray , g. s. agarwal , `` gaussian states for finite number of bosonic degrees of freedom , '' to appear in . c. w. helstrom , , chapter 5 , academic press , 1976 .a. s. holevo,``generalized free states of the c-algebra of the ccr , '' , vol . *6 * , no.1 , pp . 3 - 20 , 1971 .werner,``the classical limit of quantum theory , '' lanl report no .quant - ph/9504016 .[ fig - gofk ] j = coherent information ( [ coh-1mode ] ) with ; + qg== bound maximized over gaussians ( [ qg ] ) ; + qt = bound from transposition ( [ 1modbound ] ) ; + z= zero at , common to all curves of type j.
|
we show how to compute or at least to estimate various capacity - related quantities for bosonic gaussian channels . among these are the coherent information , the entanglement assisted classical capacity , the one - shot classical capacity , and a new quantity involving the transpose operation , shown to be a general upper bound on the quantum capacity , even allowing for finite errors . all bounds are explicitly evaluated for the case of a one - mode channel with attenuation / amplification and classical noise .
|
fashions have changed, in fashion, old-fashioned are phrases frequently used in the street, on tv or in newspapers. but what fashion are we talking about? the fashion of ideas, artistic fashion or, more prosaically, dress fashion? when we talk about fashion, do we consider it to be the result of a creative process (either intellectual or industrial), or a method of communicating a certain way of life, which would correspond in economics to the level of information? in this paper, fashion should be understood as the way a creative process entails new behaviour in a set of consumers, independently of the information structure. we apply the model to the clothes fashion market. every time a new outlet appears on the market, it can, under certain specific conditions, invade the market. hence, the old outlet can be completely dominated and disappear. each creator competes for a finite population of agents. as is the case for the movie industry or the industry of industrial design, fashion activity depends on the level of creativity of the designers. it also depends on the level of public recognition of this creativity. nike certainly shares this assumption, paying michael jordan (a famous basketball player in the u.s.) 20 million dollars in 1992 for endorsing nike running shoes. fashion does not improve either efficiency or the marginal utility of consumption. clothes belong to a class of goods whose functional properties are fundamental in their consumption. it is easy to understand why i need to buy a dress, or a pair of jeans. it is harder to explain why i choose adidas shoes, or levi's jeans. moreover, it seems very difficult for an economist to explain why women ask for short skirts one year, long skirts the year after, and then short skirts again the year after that. some theoreticians may talk of erratic preferences, but the desire to be in fashion could justify janssen and jager's argument that ... in satisfying their need for identity, people may change their behaviour without changing their preferences. in markets in general and the fashion market in particular, people are interacting simultaneously, and constantly modifying their decisions. it is very reminiscent of the way atoms interact in inert matter. galam and moscovici built a model of group decision making to describe the dynamics of competing effects in updating individual opinions. de oliveira et al give some interesting applications of physics tools to the study of economic and social problems, and such applications have become more numerous. social interactions certainly play a major role in the understanding of the fashion diffusion process. by social interactions, we refer to the idea that the utility or payoff an individual receives from a given action depends directly on the choices of others in that individual's reference group, as opposed to the sort of dependence which occurs through the intermediation of markets (brock and durlauf 1995). hence, we postulate that a consumer maximises his utility when he is dressed like everybody else in his peer group. in this paper, we consider that fashion results from the desire both to conform and to differentiate oneself from others. we also assume that products are disembodied, thus any agent can switch from one brand to another without cost.
under these assumptions, we attempt to explain how , after a certain number of steps , a common unique behaviour can emerge from a finite population with heterogeneous behaviour .we believe that the structure and organisation of the market are strong factors in the diffusion of fashion . the fashion sector has changed since the 1950s .a lot of studies show that this sector was organised into a very simple system until the 1970s ( creation , production , distribution ) .since the beginning of the1980s , the fashion sector has developed into a quite complex and destructured system , with the emergence of different levels of quality ( fashion off the peg and luxury off the peg ) .the fashion industry now follows a rationale of filire , which implies a multiplication of brands ( with a tough policy of brand protection),and sales levels .this tough policy of brand protection actually coexists with a high level of market piracy in the design - based industry . for a long time the legal market fought hard against this illegal market .it seems that things have recently changed : an explanation could be in terms of of social welfare .some recent analyses show how if the counterfeit industry were to disappear it would leave the place open to criminal organization , which would clearly be dangerous for the economy and for society .according to us , allowing counterfeits can have positive externalities for the main brands .a high level of piracy can be interpreted as a signal that the product is fashionable . in this sense , the illegal market becomes an advertising tool .the more a brand is counterfeited , the more this brand will be seen by a wide public .multiplying the levels of quality can also be interpreted as an advertising tool .the consumer side can be seen as a lattice , in which people have strong cliquishness with their peers , in hierarchic social groups .hence , the fashion market seems to be extremely hierarchical both on the demand and on the offer side .proving the existence of a dominant strategy for the creators could help to explain why some fashion goods emerge , where others fail . in what follows , we assume that the aim of a new creator is to invade the fashion market . in other terms , his product must become the fashion . in this paper, we compare the different strategies a brand can use to insure the widest diffusion of its product with the minimum level of investment.we wish to explain why the fashion filire is currently so diversified and hierarchically organised , why counterfeits can have positive externalities and why , therefore , creators must influence social interactions by appropriate strategies and facilitate the diffusion of their creation . * in part 2 * , we focus on a market within which people are anonymous , with homogeneous characteristics.we apply the concept and techniques of real space renormalisation group from physics , in the line of ma to study the fashion diffusion process within hierarchical structures . in particular , we focus on the necessary conditions for a given brand to be sure to dress the majority of people in a society . 
in previous papers , galam found that majority rule voting produces a critical threshold to total power .the value of the critical threshold to power is a function of the voting structure , namely the size of voting groups and the number of hierarchical voting levels .we find here that when producers consider consumers as simply being organised into reference groups the presence of counterfeits can be helpful in invading the market .this result is demonstrated in * part 3 * where we show that in the case where consumers are anonymous , allowing counterfeits can be an efficient tool to diminish the level of investment . * in part 4 * , people are no longer anonymous .they are organised into small groups , which can be interpreted as classified social groups with heterogeneous characteristics . in this situation, people can recognise each other .we postulate that they are _ a priori _ leaders ( they make the fashion ) or followers .every time a follower imitates a leader , he becomes a leader himself .we show that invasive creators can avoid counterfeits and minimise their sunk costs if they can identify organised , hierarchic social groups within the society .* part 5 * deals with setting some quantitative conditions to invade a given market with an already existing set of references groups and leaders .* last part * concludes .when sufficient agents in a consumer s social network switch from one product to another , the preferences of this consumer are assumed to follow . but little is said about how this socialisation appears , although some authors as mccauley , rozin and schwartz speak about cultural transmission . in what follows ,we show how creators can generate social effects by making useful investments , then use these effects to invest less . at the beginning of the game ,all the consumers wear a specific brand , let us say a. in the next period , a new creator b emerges and decides to invade the market . to this end , he makes classical investments such as advertising campaigns , promotions , etc .now , people can choose between two brands ( a and b ) , representing two tendencies in the fashion .we assume that each individual wants to be in fashion .reference groups can be seen as cells that will produce common social behaviour .the investment made by the new creator will be efficient if it is sufficient to allow a wide diffusion process . at some stage , we consider all investments from both brands are stopped .there , we denote by the overall proportion of people wearing the b brand and the corresponding proportion wearing the a brand . from these initial conditions we study the internal dynamics of spreading or disappearence of the new brand b within the population .the underlying assumption is that each individual wants to be in fashion .we believe that it is not because they have erratic preferences that people will switch from one fashion good to another , but rather because the maximisation of their utility depends on their level of belongingness to a reference group . consuming the same product as their peers or friends increases this feeling of belongingness .to achieve this individual goal , we assume that each person goes through a hierarchical imitation process .each step of this process is related to a reference social group that produces common social behaviour . 
within this framework ,every reference group consists of the aggregation of other smaller reference groups .moreover , it is only when a local fashion is established at one reference group level that the higher reference group level becomes activated , as seen in figure ( 1 ) .each increase in the reference group size is referenced with a one time unit incrementation .= 3.4cm=3.4 cm = 3.4 cm we start from an anonymous situation in which the in - fashion model is driven solely by a simple individual motivation to be like the majority of the people around .there exists no leader , no intrinsic advantage to one brand over the other , only the majority effect counts .we assume the population dynamics of fashion awareness to be go within a successive set of reference groups level extending to some factor r where r is an integer which counts how many persons are within the first level of reference groups . then to simplify the equations we postulatethe rescaling of the reference group extension goes also by a factor r. it means people go in fashion first according to a local majority among r persons .once this step is completed , the in - fashion process goes up to include r groups of r persons each , i.e. , by majority among persons . and so on up to include the whole population .for instance for we have 100 persons by first reference group level and then it jumps to includes and reach , one million already after 3 levels . to formalize above schemewe calculate the probabiliy to have one initial reference group of r persons to have a majority of people wearing the b brand starting from a whole larger population with a proportion of b wearing .all configurations of r persons having from r persons wearing the b brand down to m ones where for odd r and for even r add to yield a b majority .the case of equality between the numbers of a and b wearing persons is attributed to the b new brand as a tip to novelty .accordingly having at leads to at with , where the function determines the renormalized proportion of b wearing persons .more generally , starting from one reference level with a b proportion leads to the new proportion at level with , simultaneously the a proportion varies according to . 
_for a creator who wishes to invade the market, an investment threshold exists below which investing is useless, no matter what the amount of investment; investing more than the threshold is also superfluous. the optimal strategy consists in setting the level of investment just above the threshold, to enable the full market invasion driven by the in-fashion process._ to follow the dynamics of change in the density of _b-dressed_ people we need to study the fashion function defined above in eq.(2). one shape is shown in figure (2) for the case . from the figure and equation (2) it is found that the renormalized in-fashion process produces a monotonic flow towards either one of the two stable fixed points and . the first one corresponds to the total disappearance of the b brand, with a preserving its initial monopoly; the other represents a total b invasion, with the a brand totally evicted. in between these two points there exists another one which is unstable, since it produces the monotonic flow towards either one of the two stable fixed points. it defines the critical density below which the repeated extension of the in-fashion process leads inexorably to the total disappearance of the b brand. for any odd reference group size it is located at , given by , which puts the threshold to b invasion at exactly . starting from the flow leads towards 0, while it leads to 1 for . for instance in the case of we get the following series starting from : , and ; in the case of the series is , and . therefore the repeated in-fashion process produces the self-elimination of any proportion of an initial b brand as long as . for total disappearance to be completed only two reference levels are required. getting closer to the unstable fixed point increases slightly the number of required reference levels. with the series are , , and , , for and respectively.
at this stage, the key issue to ensure full monopoly for the initial newcomer brand is a huge investment, to guarantee a starting in-fashion process with more than fifty percent of the population wearing its b brand. any value less makes the whole investment pure waste, with the total disappearance of the brand. such a condition puts the level of success at an almost impossible task. _when individuals on the consumer side are anonymous and simply organised into undifferentiated large size reference groups, producers cannot recognise them. in this situation, allowing counterfeits is a dominant strategy, which will implicitly share the minimum level of investment needed to invade the whole market._ however, reaching a level below the threshold is just a waste of money: to ignore the existence of a threshold level can yield a quite expensive strategy of penetration. on the other hand, to go well beyond the threshold is also a waste of money. moreover, in the real world the exact value of the threshold is unknown. this quite difficult situation in assessing the right level of investment leads one to view allowing counterfeits as a strategic attitude. it can indeed help in pushing the initial penetration above the critical threshold, for no extra cost. when individuals on the consumer side are anonymous and simply organised into undifferentiated reference groups, producers cannot recognise them. in this situation, the best strategy is to allow counterfeits, which will implicitly share the minimum level of investment needed to invade the whole market. therefore, allowing counterfeits can be a useful strategy when producers are facing an anonymous market, to avoid a total waste of the initial investment. for instance, an investment yielding some initial leads, as seen before, to zero. by contrast, allowing counterfeits so that some of the persons wear counterfeits produces an effective proportion of b wearing such that now . in that case the b brand will invade the market thanks to the counterfeits. although it does not hold one hundred percent of the full market, it has made a profitable investment. in this sense, counterfeits can be considered as a diffusion vector and thus a useful tool for lowering the threshold.
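the role of counterfeits can be illustrated with the same kind of iteration (all numbers here are hypothetical): an advertising investment that alone stays below one half is wasted, while the extra share contributed by counterfeit wearers can push the effective proportion above the threshold and trigger the invasion.

```python
def majority_step(p):
    """size-3 majority rule: probability that a group of three ends up wearing b."""
    return p**3 + 3 * p**2 * (1 - p)

def final_share(p, levels=20):
    for _ in range(levels):
        p = majority_step(p)
    return p

brand_only = 0.40        # hypothetical share obtained by advertising alone
counterfeit = 0.15       # hypothetical extra share of people wearing counterfeits
print(final_share(brand_only))                 # -> ~0: the whole investment is wasted
print(final_share(brand_only + counterfeit))   # -> ~1: the brand invades the market
```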
but this tool can also have dangerous effects. for example, the counterfeit producer can invade the market, creators never know exactly how far the counterfeits will go, and the presence of a high number of counterfeits can make the brand less attractive for a certain part of the population. we will now explore whether creators can use other tools to decrease the level of initial investment. we have already shown that invading a market can have a high cost, and/or be an inefficient strategy when the threshold is not reached. in this case, creators may be tempted to allow counterfeits in order to decrease their level of investment. however, such a strategy also has a high cost. on this basis we suggest another strategic tool, consisting in acting on the in-fashion frame itself. the idea is to modify the value of the unstable threshold, which is instrumental in determining the all or nothing investment to invade or disappear from a market. to shift the threshold from the value towards a lower value for the invader means to increase it for the brand already in the market. splitting the unstable threshold value into two different values implies introducing some asymmetry to favor the newly arriving brand. this can be done naturally, since we have already introduced a bias in favor of the new brand. indeed we assume that given a reference group, in case of a tie, with equal numbers of a and b wearing, the in-fashion process leads to choosing the new brand as the tip for novelty. however, for large reference group sizes the occurrence of a tie is very rare and does not have much effect on the overall dynamics. by contrast, smaller reference groups will exhibit tie situations more often. therefore, implementing our new strategy requires decreasing the reference group size. such a change can be obtained by producing clear social signs to make people able to recognize each other. we are suggesting taking the imitation process out of anonymity and turning it into personalization.
in other words, make people choose whom they wish to resemble. to make our task easier, we talk about neighbours.

for a creator who wishes to invade the market but avoid counterfeits, an optimal strategy consists in breaking down the size of reference groups by a personalization of the in-fashion process. the smaller the reference groups, the lower the investment level.

having found out where to act to produce a dissymmetry in favor of the incoming new brand, we now analyse our model in the case of maximum tie effect, which occurs for the smallest social group exhibiting a tie, a group of 4 persons. if at a given time 3 or 4 people in the group are _a-dressed_, then all the people end up _a-dressed_ at the next time step. however, if only 0 or 1 people are _a-dressed_, then all people end up _b-dressed_. in case of a tie, with 2 _a-dressed_ and 2 _b-dressed_, the bias in favour of novelty results in the new brand b being adopted by the whole group. accordingly, putting these rules in eq. (2) gives the in-fashion result for the first reference group level, expressed in terms of the initial proportion of b brand-wearing people (see the short numerical sketch below). the following three configurations {(2 a, 2 b), (1 a, 3 b), (0 a, 4 b)} lead to (0 a, 4 b). while the in-fashion function still has the two stable fixed points zero and one, the unstable fixed point is now located at about 0.23, which puts the threshold for b invasion much lower than the one half of the large-group case, as shown in figure (3). at the same time, to keep the whole market the a brand must always retain more than about 77 percent of the market. starting below the threshold leads towards 0, while the flow leads to 1 above it. the above people, all locally (in groups of four) wearing the same brand, constitute the first level of the hierarchy of reference groups, denoted level 1. then the same in-fashion process is repeated, but now within reference groups of 16 people each. again, each whole reference group of 16 people adopts the same brand, either a or b, according to a local majority rule. a tie situation still yields a b brand choice. in terms of density the same equation holds, with the three types of configurations (8 a, 8 b), (4 a, 12 b), (0 a, 16 b), which can be noted as follows: 4{(2 a, 2 b), (1 a, 3 b), (0 a, 4 b)}.
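to make the group-of-4 update rule concrete, the following minimal sketch iterates the corresponding one-level in-fashion map. the polynomial follows directly from the rule stated above (4 b, 3 b, or a 2-2 tie all yield an all-b group); the initial proportions used are illustrative choices, not values taken from the text.

```python
# Minimal sketch (not the paper's code): one-level in-fashion map for
# reference groups of 4, with ties (2A, 2B) resolved in favor of the new brand B.
def in_fashion_map(p):
    """Probability that a random group of 4 ends up all-B when each member
    independently wears B with probability p."""
    q = 1.0 - p
    return p**4 + 4 * p**3 * q + 6 * p**2 * q**2   # 4B, 3B, or 2-2 tie -> B

def iterate_levels(p0, n_levels):
    """Return the B density after successive reference-group levels."""
    p, history = p0, [p0]
    for _ in range(n_levels):
        p = in_fashion_map(p)
        history.append(p)
    return history

if __name__ == "__main__":
    # Illustrative initial B proportions, one below and one above the threshold.
    for p0 in (0.20, 0.30):
        print(p0, [round(p, 3) for p in iterate_levels(p0, 8)])
    # Locate the unstable fixed point (invasion threshold) by bisection on P(p) - p.
    lo, hi = 0.05, 0.45
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if in_fashion_map(mid) - mid < 0:
            lo = mid
        else:
            hi = mid
    print("unstable fixed point ~", round(0.5 * (lo + hi), 4))   # ~0.2324
```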
each additional increment of time increases the size of the reference group with a new in - fashion process leading to an homogenisation on brand a or b with the density at level given by where is the proportion of b brand - wearing people at level n with the following three configurations \{(2 a , 2 b ) , ( 1 a , 3b ) , ( 0 a , 4 b)}. to illustrate the quantitative changes produced by the smaller size effect , we first come back to the case of an intitial which goes down to zero within two reference group levels for and . now in the case not only the series does not go to zero but instead increases toward one , i.e. , the full market invasion with , , and .only 4 reference groups levels are enough to total invasion . at the fourth levelonly 96 people are involved instead of the one million after 3 levels in the case .but now , as the threshold is at we can follow what happens with an initial investment accounting for instance to a b wearing proportion .we have the series , , , , , , ,and .eight levels of reference groups are required for a total invasion of the market .an initial value yields , , , , and reducing the number of reference groups required to six . one illustration of the different outcome resulting from breaking down the neighborhood is shown in figures ( 4 , 5 , 6 , 7 ) for a total population of 64 persons with 21 dressing an x item and 43 a o item , x being the new one .figure ( 4 ) shows the in - fashion process at work for one reference group embodying at once the whole population of the 64 persons .the result is the winning of the item o initially chosen by the majority .figures ( 5 , 6 , 7 ) shows the same initial population with now the existence of 3 discrete reference groups with respectively 4 , 16 and 64 people .figure ( 5 ) shows the introduction of the first level of neighboring reference group with four persons each .x and o wearing are present on the left side .common behavior is completed on the right side with now 24 x and 60 o. in figure ( 6 ) new larger reference groups of 16 people each are activated as shown on the left side . on the right side common behavior has been completed with an equality between x and o , each item being chosen by 32 persons .. figure ( 7 ) exhibits the last reference group which embodies the whole population as shown on the left side . on the right sideeveryone is wearing an x item .the opposite outcome of figure ( 5 showing how the creation of intermediate reference group levels has been able to reverse the outcome of the in - fashion process .we can now see why it is important for a brand to multiply quality levels and forms of marketing in its filire .this strategy could be seen as a classical process of differentiation .the wider the scale of products , the larger the set of potential consumers . 
but each level can be considered as a specific means of being seen by one or more members of a reference group.

for a creator who wishes to invade the market, an investment threshold exists: below it, investing is useless whatever the amount, and investing much above it is superfluous. the optimal strategy consists in hierarchically organizing the filière.

at this stage, the key issue is to determine the number of reference levels needed to ensure full monopoly for the initial newcomer brand in the case of small group sizes. for large group sizes, the threshold is at around one half and the number of levels is very small, but a huge number of people is involved. it is also worth noticing that increasing the group size increases the threshold value toward that large-size limit; the threshold values obtained for successively larger groups approach it progressively. we can now calculate analytically the critical number of reference group levels at which the b density is within a very small distance of one. this determines the level of confidence of the prediction of getting the whole market. one way to evaluate it is to expand the in-fashion function around the unstable fixed point. rewriting the resulting linearized equation and repeating the process level after level, we obtain the critical number of levels at which the market is fully invaded. taking the logarithm on both sides of eq. (8) gives this critical number, governed by the slope of the in-fashion function at the unstable fixed point. as we are dealing with integers, we have to take the integer part of this expression. this approximation yields pretty accurate results (see the short numerical check below). as it may turn out to be rather costly and complicated to modify both the size of the reference groups and the associated number of levels, we now evaluate the following strategic issue. given n reference group levels of group size r, what is the initial overall proportion of b wearing required to get everybody dressed in b? it is worth keeping in mind that the dynamics for a and b are not symmetric. here, we focus on the in-fashion dynamics from the b point of view. to proceed, we invert eq. (9). this yields two new operative critical thresholds. the first gives the value of initial market penetration below which b is sure to disappear from the market; it is obtained from eq. (10) by inserting the corresponding limiting value. in parallel, the opposite limiting value gives the second threshold, above which the b brand invades the whole market. combining eqs. (11) and (12) shows the appearance of a strategic domain for the initial proportion.
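before turning to the interpretation of this strategic domain, here is a quick numerical check of the level-counting argument, using the same r = 4 map as in the previous sketch. both the brute-force count and the logarithm-based estimate are sketches; the accuracy parameter and the initial proportions are illustrative values, not the ones used in the text.

```python
import math

# Same r = 4 in-fashion map as in the previous sketch, in expanded form.
P = lambda p: 3 * p**4 - 8 * p**3 + 6 * p**2
p_c = (5 - math.sqrt(13)) / 6                  # unstable fixed point ~0.2324
lam = 12 * p_c**3 - 24 * p_c**2 + 12 * p_c     # slope P'(p_c), larger than 1

def n_levels_exact(p0, eps=1e-2):
    """Count levels until the B density is within eps of 0 or 1 (brute force)."""
    p, n = p0, 0
    while eps < p < 1 - eps:
        p, n = P(p), n + 1
    return n

def n_levels_linearized(p0, eps=1e-2):
    """Estimate from the linearization p_n - p_c ~ lam**n * (p0 - p_c)."""
    return math.ceil(math.log(abs(1 - eps - p_c) / abs(p0 - p_c)) / math.log(lam))

for p0 in (0.25, 0.30, 0.40):
    print(p0, n_levels_exact(p0), n_levels_linearized(p0))
```

the linearized estimate slightly underestimates the exact count because corrections near the stable fixed points are neglected, which is exactly the caveat stated for the analytical formulas.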
in this region b neither disappears totally nor takes over the whole market. there is a coexistence, with the equilibrium b density being neither 0 nor 1. this is therefore a coexistence region in which both brands are present. no one is sure of winning. however, as seen from eq. (13), this coexistence region shrinks as a power law of the number n of hierarchical reference group levels. having a small number of these levels raises the threshold for total invasion but simultaneously lowers the threshold for disappearance. the above formulas are approximate, since we have neglected corrections in the vicinity of the stable fixed points. however, they give the right quantitative behavior. to get a practical feeling of what eqs. (12) and (13) mean in terms of numbers, let us illustrate them for a representative choice of group size and accuracy. considering 3, 4, 5, 6 and 7 level reference systems, the invasion threshold and the disappearance threshold take series of values that converge toward each other as the number of levels grows. these series dramatically emphasize the massive effect of the in-fashion process.

in this paper, we have discussed the different investment tools a creator can use to invade a market, in other words to become the fashion. we use the concepts and techniques of the real space renormalisation group to study the fashion diffusion process within hierarchical structures. we show that when potential consumers are organised into large, anonymous groups, producers cannot recognize them. in this case, the level of investment making it possible to invade the market is certainly high, and it may be optimal to allow counterfeits. but allowing counterfeits can also be a dangerous tool. we then show that if creators could organize people into distinctive reference levels, this would diminish the threshold and make it possible to avoid counterfeits. simmel noted that the fashion market actually seems to be the joint result of an imitation process and a search for differentiation. we show that in creating different social leaders, producers enable consumers to differentiate themselves from each other by imitating their closest social leaders. to simplify our task, we have focused on the clothes fashion market. with slight modifications, our results could be extended to other kinds of fashion, such as the fashion in research ideas, for example.
|
we apply a physical based model to describe the clothes fashion market . every time a new outlet appears on the market , it can invade the market under certain specific conditions . hence , the `` old '' outlet can be completely dominated and disappears . each creator competes for a finite population of agents . fashion phenomena are shown to result from a collective phenomenon produced by local individual imitation effects . we assume that , in each step of the imitation process , agents only interact with a subset rather than with the whole set of agents . people are actually more likely to influence ( and be influenced by ) their close `` neighbours '' . accordingly we discuss which strategy is best fitted for new producers when people are either simply organised into anonymous reference groups or when they are organised in social groups hierarchically ordered . while counterfeits are shown to reinforce the first strategy , creating social leaders can permit to avoid them . pacs numbers : 02.50.ey , 05.40.-a , 89.65.-s , 89.75.-k keywords : renormalisation group , fashion dynamic , investment threshold
|
in several physical problems , such as estimation of the mass inertia matrix in the design of controllers for solid structures and robots , an overdetermined linear system of equations with multiple right hand side vectors arises with the constraint that the unknown matrix be symmetric and positive definite ; see , e.g. , .a method for solving such a problem has been proposed in .there are also physical contexts , such as modeling a deformable structure and computing the correlation matrix in finance or insurance / reinsurance industries , where a symmetric positive semi - definite solution of an over determined linear system of equations needs to be computed or equivalently the problem needs to be solved , where , with , are given and , a symmetric positive semi - definite matrix , is to be computed as a solution .in some special applications , the data matrix has a simple structure , which may be taken into consideration for efficiently organized computations .computing the correlation matrix in finance is such an example where the data matrix is the identity matrix ; see , e.g. , .+ unlike the positive definite total least squares problem , here the unknown matrix is singular and thus our previously defined error formulation in is no more applicable .we need to formulate the error in the measured data and target matrices as a function of the unknown matrix but not its inverse . +a number of least squares formulations have been proposed for the physical problems , which may be classified as ordinary and total least squares problems . unlike the ordinary formulation , in a total least squares formulationboth data and target matrices are assumed to contain error .also , single or multiple right hand sides may arise . in ,ordinary and total least squares formulations with single or multiple right hand sides have been considered . for detailed analysis of total least squares ,see .+ here , we consider an specific case of the total least squares problem with multiple right hand side vectors .our goal is to compute a symmetric positive semi - definite solution of the overdetermined system of equations , where both matrices and may contain error .several approaches have been proposed for this problem , commonly considering the ordinary least squares formulation and minimizing the error over all symmetric positive semi - definite matrices , where is the frobenious norm .larson discussed a method for computing a symmetric solution to an overdetermined linear system of equations based on solving the corresponding normal system of equations .krislock proposed an interior point method for solving a variety of least squares problems with positive definiteness constraint .woodgate described a new algorithm for solving a similar problem in which a symmetric positive semi - definite matrix is computed to minimize , with known and . in , toh introduced a path following algorithm for solving a positive semi - definite quadratic optimization problem . later in 2009, he posted a matlab package for solving such a problem ; see .hu gave a quadratic programming approach to solve a least squares problem with a symmetric positive definite unknown matrix . 
in his method , the upper and lower bounds for the entries of the target matrix can be given as extra constraints .in real measurements , however , both the data and target matrices may contain error .thus , to be practical , a total least squares formulation seems to be appropriate .here , we define a new error function to consider error in both data and target matrices and propose an iterative algorithm to minimize the defined error .+ if the goal is to compute the correlation matrix , the mathematical problem is a little different .computing the correlation matrix is very important in financial modeling .it is applicable for example in obtaining a quadratic model for an economical system and even in reverse engineering for extreme scenario stress testing . in this case, the data matrix is the identity and a large number of linear constraints are to be satisfied .sun presented an algorithm for computing the correlation matrix .rebonato and werner also discussed solving the same problem .we will see later that the minimum rank problem can also be solved by applying our proposed algorithm .this problem appears in the literature in diverse areas including system identification and control , euclidean embedding , and collaborative filtering ; see . in a minimum rank problem ,the goal is to find a positive semi - definite solution with the minimum possible rank to an overdetermined linear system of equations .+ the remainder of our work is organized as follows . in section 2, we define a new error function for solving a positive semi - definite total least squares problem with a fixed rank . a method for solving the resulting optimization problemis presented in section 3 .also , a discussion on solving the positive semi - definite total least squares problem ( with arbitrary rank ) is given in section 3 . in section 4 ,we introduce two slightly different problems and discuss how to solve them based on the proposed method in section 3 .these two problems are : the minimum rank problem and computing the correlation matrix .comparative computational results are given in section 5 .section 6 gives our conclusion .available methods for solving a positive semi - definite least squares problem consider an ordinary least squares formulation ; see , e.g. , .a practically useful total error formulation was introduced in for a positive definite total least squares problem . based on this formulation , the solution of the optimization problem is a solution of a corresponding positive definite total least squares problem , where is symmetric and by , we mean is positive definite .the error formulation in not being suitable here , we first motivate and present a new error formulation for the positive semi - definite total least squares case .+ in ( [ 2 ] ) , the entries of and represent the errors in and , respectively . here , we need to represent the error in independent of . before discussing how to solve the positive semi - definite total least squares problem , we consider the newly noted problem , positive semi - definite total least squares problem with a given rank , , of the unknown matrix ( r-psdtls ) . in section 3 , we outline an algorithm for solving r-psdtls and discuss how to solve the positive semi - definite total least squares problem applying the proposed algorithm .+ the error in is supposed to be the difference between the real value of and the predicted value for obtained by . to compute the predicted value for , we use the general least squares solution of the system . 
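as a reminder of the linear-algebra fact invoked here, the sketch below shows the standard form of the general least-squares solution: the minimum-norm part given by the pseudo-inverse plus an arbitrary component in the null space of the coefficient matrix. the matrix sizes and the random construction are illustrative choices only; the block-form derivation that follows uses exactly this decomposition of the solution set.

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-deficient coefficient matrix, so that the null space is nontrivial.
X = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 6))   # 6x6, rank <= 4
b = rng.standard_normal(6)

X_pinv = np.linalg.pinv(X)
x_min_norm = X_pinv @ b                      # minimum-norm least-squares solution
N = np.eye(X.shape[1]) - X_pinv @ X          # projector onto the null space of X
w = rng.standard_normal(X.shape[1])
x_general = x_min_norm + N @ w               # another least-squares solution

# Both solutions give the same residual norm; they differ by a null-space vector.
r1 = np.linalg.norm(X @ x_min_norm - b)
r2 = np.linalg.norm(X @ x_general - b)
print(np.isclose(r1, r2), np.linalg.norm(X @ (N @ w)))   # True, ~0
```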
considering the block form and , where , for , we have , for . the general solution to such a linear system has the form where is the pseudo - inverse of and is an arbitrary vector in the null space of .a straight choice for is which results in and .we later consider the suitable choices for which minimizes .to compute from ( [ 3 ] ) , the spectral decomposition can be applied . + + * result . * now , making use of the spectral decomposition of in ( [ 3 ] ) , we have where , for , are arbitrary vectors , and thus , the predicted value for is with , arbitrary , and the error in is equal to .a reasonable choice for in this formulation would be the one minimizing the norm of , which is the solution of the optimization problem where and is the frobenius norm . solving ( [ 6 ] ) results in substituting ( [ 7 ] ) in ( [ 5 ] ) , we get using along with ( [ 8 ] ) , we get based on the above discussion , and represent the error in and respectively . thus , to solve a rank positive semi - definite total least squares problem, it is appropriate to minimize the error with standing for trace of a matrix .consequently , the optimization problem needs to be solved , where is symmetric and by , we mean is positive semi - definite .we can rewrite the optimization problem using the spectral decomposition of and substituting and by and respectively . considering well - known properties of the trace operator and the above formulation for and , we get letting , and , problem ( [ 10 ] ) is then equivalent to where satisfies and is a nonsingular diagonal matrix . +the lagrangian function corresponding to the constrained optimization problem is , where the lagrangian coefficient vector corresponds to the constraint vector .necessary conditions for a solution , known as karush - kahn - ticker conditions , is as well as , which gives ; for kkt conditions , see .thus , is the lagrangian function for the optimization problem ( [ 12 ] ) .+ an appropriate characteristic of the error formulation proposed by is that its value is nonnegative and it is equal to zero if and only if .it is clear that if , then . assuming , from ( [ 66 ] ) we have which holds if and only if multiplying both sides from right by , we get or equivalently if ( [ 67 ] ) is satisfied , then we have ; hence , .here , we discuss how to solve ( [ 12 ] ) .we also study the computational complexity and the convergence properties of our proposed algorithm .we are to propose an algorithm for solving ( [ 12 ] ) .more precisely , a nonsingular diagonal matrix and a matrix need to be computed to minimize where is the stiefel manifold : in the following lemma , we show that the optimization problem ( [ 12 ] ) is strictly convex under a weak assumption on the data and target matrices .we also make use of the well - known properties of convexity to propose our algorithm .+ + [ 55 ] the function is always convex and it is strictly convex on the set , where . the key point of the proof is to reformulate as thus , is always convex and it is strictly convex if and only if + , for , is positive definite which holds if and only if , for , has full column rank . 
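the optimization variable in (12) can also be visualized with a small numerical sketch: a point on the Stiefel manifold combined with a positive diagonal factor always yields a symmetric rank-r positive semi-definite matrix, which is exactly the feasible set being searched. the dimensions and random construction below are illustrative only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 6, 3                                   # illustrative sizes, r < n

# A point on the Stiefel manifold: V has orthonormal columns (V^T V = I_r).
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
d = rng.uniform(0.5, 2.0, size=r)             # positive diagonal entries
X = V @ np.diag(d) @ V.T                      # candidate rank-r PSD matrix

print(np.allclose(V.T @ V, np.eye(r)))        # True: Stiefel constraint holds
print(np.allclose(X, X.T))                    # True: symmetric
eigvals = np.linalg.eigvalsh(X)
print(int(np.sum(eigvals > 1e-10)))           # r strictly positive eigenvalues
print(bool(np.all(eigvals > -1e-10)))         # True: positive semi-definite
```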
* note .* since the function is strictly convex and the set is convex , a point satisfying the karush tucker ( kkt ) optimality conditions is the unique solution of problem ( [ 12 ] ) ; for kkt conditions , see .+ + next , in the following theorem we derive the kkt optimality conditions for problem ( [ 12 ] ) .+ [ 14 ] if is the solution of problem ( [ 12 ] ) , then it satisfies where is the column of .+ the kkt necessary conditions for ( [ 12 ] ) are obtained by setting .thus , if forms a solution for ( [ 12 ] ) with , then we must have , for . to simplify the computation of , we can reformulate using the definition of the trace operator .let and .we have and so , is equal to now , from ( [ 15 ] ) we have and can be computed by . considering the above discussion ,in each iteration of our proposed algorithm we need to + ( 1 ) compute one term of the sequence converging to the minimizer of the + , and + ( 2 ) compute the diagonal elements of from ( [ 27 ] ) .+ edelman introduced two methods for solving optimization problems on stiefel manifolds : the newton method and conjugate gradient method on the stiefel manifold . here , we adaptively use the newton approach to develop an algorithm for solving ( [ 12 ] ) . in each iteration of our proposed algorithm ,a newton step is computed from the newton method on the stiefel manifold and then the diagonal elements of , , are updated by ( [ 27 ] ) .we show in section 3.2 that our proposed algorithm converges to the unique solution of ( [ 12 ] ) at least quadratically .also , we discuss its computational complexity in section 3.2 . + + we are now ready to outline the steps of our proposed algorithm . + * algorithm 1 . * solving rank positive semi - definite total least squares problem using the newton method on stiefel manifold ( r-psdtls ) .+ - and are upper bounds for relative and absolute error , respectively taken to be close to the machine ( or user s ) unit roundoff error and machine ( or user s ) zero .+ ( 1 ) let , and .+ ( 2 ) choose such that .+ repeat + ( 3.1 ) let .+ ( 3.2 ) compute the matrix such that and let ( 3.3 ) to compute , solve the linear system of equations .+ ( 3.4 ) move from in direction to using , where the compact factorization of , and and are : .+ ( 3.5 ) compute .let . +until .+ ( 4 ) let and . + + * note . * the linear equation may be solved by various methods including conjugate gradient and gmres .another possible method is to convert the linear operator appearing on the left side of ( [ 17 ] ) to an linear system of equations . in section 5, we present the numerical results obtained by using these three methods and compare the respective obtained accuracies and computing times .+ in section 3.1 , we outlined algorithm 1 to solve the rank positive semi - definite total least squares problem .here , we discuss how to solve the general positive semi - definite total least squares problem . +a positive semi - definite solution for the overdetermined linear system of equations , whose rank is not known , needs to be computed .this problem arises for example in estimation of compliance matrix of a deformable structure . to solve this problem, we can apply psdtls for possible values of , compute the corresponding solutions , and identify the one minimizing .we will refer to this approach as psdtls . 
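the search over ranks just described can be organized as a simple outer loop, as in the sketch below. `solve_rank_r_psdtls` and `total_error` are hypothetical placeholders standing in for the paper's r-PSDTLS solver and for the error functional of eq. (10); only the structure of the loop is meant here.

```python
import numpy as np

def solve_general_psdtls(A, B, solve_rank_r_psdtls, total_error):
    """Sketch of the PSDTLS outer loop: try every admissible rank r and keep
    the solution with the smallest total error.  The two callables are
    user-supplied placeholders (r-PSDTLS algorithm and error functional)."""
    n = A.shape[1]
    best_X, best_err, best_r = None, np.inf, None
    for r in range(1, n + 1):
        X_r = solve_rank_r_psdtls(A, B, r)   # rank-r PSD total least squares solution
        err = total_error(A, B, X_r)
        if err < best_err:
            best_X, best_err, best_r = X_r, err, r
    return best_X, best_r, best_err
```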
in section 5.1 ,we report some numerical results to compare psdtls by two existing methods .although our proposed method ( psdtls ) computes the minimizer of for each value of and then finds the optimal solution among , for , it takes less time to solve the problem than two other proposed methods in the literature . here , we discuss convergence properties of r-psdtls .we cite a theorem to be used to establish the local quadratic convergence of r-psdtls to the unique solution of ( [ 12 ] ) .we also show that the computational complexity of every iteration of our proposed algorithm is moreover , we provide an upper bound for the computational complexity of our proposed approach for solving the positive semi - definite total least squares problem , psdtls . [ 28 ] ( ) newton s method applied to the function on the stiefel manifold locally converges to the unique solution of at least quadratically .+ see .+ algorithm 1 converges locally to the unique solution of problem ( [ 12 ] ) at least quadratically .in algorithm 1 , we have two main computations : applying newton s approach on stiefel manifold to update and updating the scalars using ( [ 27 ] ) . the rate of convergence is not affected by ( [ 27 ] ) and it is governed by newton s approach .thus , considering theorem [ 28 ] , r-psdtls converges at least quadratically to the unique solution of ( [ 12 ] ) . **computing cost : rank positive semi - definite total least squares problem . * * the computational complexity of one iteration of r-psdtls is given in table [ t1 ] .the first , second and third columns respectively give the computational complexities of solving the linear problem ( [ 17 ] ) using conjugate gradient method in the operator form ( cg - o ) , gmres in the operator form ( gmres - o ) and conjugate gradient method after converting ( [ 17 ] ) into a linear system of equations ( cg - l ) ; for details , see ..computational complexities for one iteration using different approaches .[ cols="^,^,^,^ " , ] considering the numerical results reported in this section , we summarize our observations : + + ( 1 ) a newly defined problem , r-psdls , was considered and an efficient algorithm + proposed for its solution . 
+ ( 2 ) although our proposed algorithm for solving the psdls problem , psdtls , applies r-psdtls for every candidate rank in search of the solution , it appears to be more efficient than psdls - intp and psdls - pftoh .+ ( 3 ) in contrast with other available methods , our use of a total formulation for solving the psdls problem , which considers error in both data and target matrices , turns out to be effective in producing more meaningful results .+ ( 4 ) the proposed method for solving the psdls problem , psdtls , is more efficient than the other methods .+ ( 5 ) the proposed method for solving the minimum rank problem , mr - psdtls , is more efficient than mr - toh and mr - recht .+ ( 6 ) the proposed method for computing the correlation matrix , cm - psdtls , shows to be more efficient and robust in computing a correlation matrix with a lower standard deviation of error as compared to cm - intp and cm - sun .we proposed a new approach to solve positive semi - definite total least squares ( psdls ) problems .consideration of our proposed error estimate for both data and target matrices admitted a more realistic problem formulation .we first considered a newly defined given rank positive semi - definite total least squares ( r-psdtls ) problem and presented an at least quadratically convergent algorithm for its solution .numerical results confirmed the effectiveness of our approach to compute solutions of r-psdls problems in less computing time than the interior point method and the path following algorithm .we then showed how to apply r-psdtls to solve the general psdls problem .based on the reported numerical results , our method for solving the psdls problem also proved to be more efficient than the interior point method and the path following algorithm . a specifically effective approach was also described to solve the rank positive semi - definite total least squares problem , r-psdtls .in addition , we noted that r-psdtls can be applied to other problems arising in control and financial modeling : the minimum rank ( mr ) problem and correlation matrix computation . using the dolan - moré performance profiles, we showed our proposed method for solving the mr problem to be more efficient than a path following algorithm and a semi - definite programming approach .furthermore , in computing the correlation matrix , numerical results showed lower standard deviation of error as compared to the interior point method and semi - definite programming approach . + + * acknowledgement . *the authors thank the research council of sharif university of technology for supporting this work .+ 10 edelman a. , arias t. a. , smith s. t. : the geometry of algorithms with orthogonality constraints , siam j. matrix anal ., 20(2 ) , 303 - 353 ( 1998 ) golub g. h. , van loan c. f. : an analysis of the total least squares problem , siam j. numer ., 17 , 883 - 893 ( 1980 ) hu h. : positive definite constrained least - squares estimation of matrices , linear algebra appl , 229 , 167 - 174 ( 1995 ) hu h. , olkin i. : a numerical procedure for finding the positive definite matrix closest to a patterned matrix , statistical and probability letters , 12 , 511 - 515 ( 1991 ) krislock n. g. , lang j. , varah j. , pai d. k. , seidel h. : local compliance estimation via positive semi - definite constrained least squares , ieee trans .robotics and automation 20(6 ) , 1007 - 1011 ( 2004 ) larson h. j. : least squares estimation of the components of a symmetric matrix , technometrics , 8(2 ) , 360 - 362 ( 1966 ) mcinroy j. , hamann j. c.
: design and control of flexure jointed hexapods , ieee trans .robotics and automation , 16(4 ) , 372 - 381 ( 2000 ) paige c. c. , strako z. : scaled total least squares fundamentals , numer . math ., 91 , 117 - 146 ( 2000 ) rebonato r. , jckel p. : the most general methodology to create a valid correlation matrix for risk management and option pricing purposes , j. risk , 2 , 1727 ( 1999 ) recht b. , fazel m. , parrilo p.a. : guaranteed minimum - rank solutions of linear matrix equations via nuclear norm minimization , siam review , 52(3 ) , 471 - 501 ( 2010 ) smith s. t. : optimization techniques on riemannian manifolds , fields ins. com . , 3 , 113 - 146 ( 1994 ) sun d. , gao y. : calibrating least squares covariance matrix problems with equality and inequality constraints , siam .j. matrix anal . and appl ., 31 , 1432 - 1457 ( 2009 ) tisseur f. : the quadratic eigenvalue problem , siam review , 43(2 ) , 235 - 286 ( 2001 ) toh k. c. : an inexact primal - dual path - following algorithm for convex quadratic sdp , mathematical programming , 112 , 221 - 254 ( 2007 ) woodgate k. g. : least - squares solution of over positive semidefinite symmetric , linear algebra appl . , 245 , 171 - 190 ( 1996 ) fazel m. : rank minimization with applications , phd thesis , stanford university ( 2002 ) huffel s. v. , vandewalle j. : the total least squares problem : computational aspects and analysis , siam ( 1991 ) krislock n. g. : numerical solution of semidefinite constrained least squares problems , m. sc .thesis , university of british colombia ( 2003 ) nocedal j. , wright s. j. : numerical optimization , springer , new york ( 1999 ) saad y. : iterative methods for sparse linear systems , siam , philadelphia , second edition ( 2003 ) van loan c. f. , golub g. : matrix computation , 4th edition , jhu press ( 2012 ) werner r. , schttle k. : calibration of correlation matrices sdp or not sdp , technical report , munich university of technology , munich ( 2007 ) poignet p. , gautier m. : comparison of weighted least squares and extended kalman filtering methods for dynamic identification of robots , proceedings of the ieee conference on robotics and automation , san francisco , ca , usa , 3622 - 3627 ( 2000 ) bagherpour n. , mahdavi - amiri n. : direct methods for solving positive definite total least squares problems using orthogonal matrix decompositions , http://arxiv.org/ , ( 2014 ) gauthier g. , goldberg l. , hayes m. , vannerem p. : reverse engineering for extreme scenario stress testing , msci research report , https://fp7.portals.mbs.ac.uk/portals/59/docs/ ( 2010 ) hand p. : conditions for existence of dual certificates in rank - one semidefinite problems , http://arxiv.org/ ( 2014 ) higham n. j. : computing the nearest correlation matrix ( a problem from finance ) , mims eprint : 2006.70 , http://eprints.ma.man.ac.uk/ ( 2006 ) liang x. , li r. c. : the hyperbolic quadratic eigenvalue problem , http://www.uta.edu/math/preprint/rep201401.pdf ( 2014 ) petersen k. b. , pedersen m. s. : the matrix cookbook , http://orion.uwaterloo.ca/ hwolkowi/ ( 2008 ) toh k. c. : qsdp version 0 , beta a matlab software for convex quadratic semidefinite programming , http://www.math.nus.edu.sg/ mattohkc / qsdp.html ( 2009 ) toh k. c. , yun s. : an accelerated proximal gradient algorithm for nuclear norm regularized least squares problems , http://www.optimization-online.org/db-file/2009/03/2268.pdf ( 2009 )
|
we have recently presented a method to solve an overdetermined linear system of equations with multiple right hand side vectors , where the unknown matrix is to be symmetric and positive definite . the coefficient and the right hand side matrices are respectively named data and target matrices . a more complicated problem is encountered when the unknown matrix is to be positive semi - definite . the problem arises in estimating the compliance matrix to model deformable structures and approximating correlation and covariance matrices in financial modeling . several methods have been proposed for solving such problems assuming that the data matrix is unrealistically error free . here , considering error in measured data and target matrices , we propose a new approach to solve a positive semi - definite constrained total least squares problem . we first consider solving the problem when the rank of the unknown matrix is known , by defining a new error formulation for the positive semi - definite total least squares problem . minimization of our newly defined error consists of an optimization problem on the stiefel manifold . to solve the optimization problem , in each iteration a linear operator subproblem arises for which we propose three different iterative methods . we prove quadratic convergence of our proposed approach . we then describe how to generalize our proposed method to solve the general positive semi - definite total least squares problem . we further apply the proposed approach to solve the minimum rank problem and the problem of computing correlation matrix . comparative numerical results show the efficiency of our proposed algorithms . in solving positive semi - definite total least squares problems , we find that in most cases the linear operator equation is solved faster and turns to be more accurate using the gmres method . also , comparison of the results obtained by our algorithm with the ones due to two other methods , the interior point method and a matlab routine for solving a quadratic programming problem with semi - definite constraint based on a path following algorithm , confirms the efficiency of our approach . numerical test results also show that our approach for computing a correlation matrix leads to smaller standard deviations of error in the target matrix . finally , the dolan - mor performance profiles are shown to summarize our comparative study . + total least squares , positive semi - definite constraints , deformable structures , correlation matrix 65f05 , 65f20 , 49m05
|
developing efficient methods to prevent epidemic outbreaks or accelerate information dissemination is the ultimate goal of research on spreading dynamics across various domains . for diseases and information that spread through social networks, the structure of underlying contact network can greatly affect the spreading processes .previous studies have inspected the role of topology in epidemic threshold and critical behavior .for example , in uncorrelated networks , the epidemic threshold is , where and are the first and second moments of the degree distribution .later on , in view of the wide occurrence of intrinsic heterogeneous complex networks in real world , researchers started to investigate the effect of microstructure on spreading dynamics in more detailed perspectives : the community structure appearing in social networks , -shell decomposition of networks for identification of influential spreaders , link salience skeleton correlated with the frequency of a link s appearance in infection hierarchies , as well as weak ties which is significant for information dissemination . due to the existence of complex microstructure , individuals with different topological features should play distinct roles during the spreading processes .previous studies mainly focus on the spreading ability of individuals .many topological measures are employed to identify influential spreaders in networks , such as degree , betweenness , -shell index , pagerank , etc .in fact , individuals situated in different positions of the contact network will also vary in susceptibility to epidemics . identifying the highly susceptible individuals in contact networksis of great significance in controlling epidemic outbreaks .therefore , it is indispensable to explore the factors that affect people s susceptibility . in this paper, we study the susceptibility of individuals in sir and rumor spreading dynamics through extensive simulations in real - world social networks .first , we explore the impact of community structure on people s susceptibility .even though the community structure can diminish the infected population and slow down the spreading processes , it is not the pivotal factor affecting nodes susceptibility .although the location of epidemic source has an impact on the precise infected probability of each single person , there exist a group of nodes that can always get infected with relatively high probabilities no matter where the spreading originates .we are particularly interested in these populations and consider them as highly susceptible individuals . to quantitatively depict the property of these people , we define the susceptibility of each individual as the average probability to be infected by a randomly chosen spreading source. then we further explore the topological properties of the highly susceptibility individuals . by examining their topological traits including degree , -shell , betweenness centrality and pagerank, we find the susceptibility of individuals is sensitive to the choice of spreading dynamics . for sir spreading, the susceptibility is highly correlated to nodes influence .therefore , nodes located in the core region of networks are more likely to be infected . 
however , in rumor spreading model , degree can better identify highly susceptible individuals .our results indicate that it is necessary to acquire the information of spreading mechanism to better locate highly susceptible population in practice .most of social networks in reality have notable community structure .we first explore the impact of community structure on individuals probabilities to be infected in spreading processes .we examine the epidemic flows in two social networks with community structure : ( 1 ) the friendship network between users of the social network facebook ( facebook ) , ( 2 ) the enron email communication network ( enron ) , in which nodes are email addresses and edges represent email communications .these social networks have been used in previous studies .topological statistics of networks are shown in table [ tb ] . in our following study , we treat all these networks as undirected ..statistics of social networks .we display the node number , link number , and average degree for facebook and enron social networks .the critical infection rate of sir model is calculated by . and are adopted infection rates in sir and rumor spreading simulations respectively . [ cols="^,^,^,^,^,^,^ " , ] . ( a ) , the layout of community structure for facebook social network .eight communities are distinguished by different colors .( b , c , d ) , distributions of infected probability for spreading sources in different communities .the solid black dots represent outbreak origins .the infected probability is obtained by averaging over sir realizations .the colors indicate the logarithmic values ( base 10 ) of infected probability for corresponding individuals . ]the community structure is extracted by a fast heuristic method based on modularity optimization .the layout of community structure for facebook social network is shown with different colors in fig . [fig1](a ) . to simulate spreading processes among populations , we first apply susceptible - infectious - recovered ( sir ) model on the above social networks . in sir model, the population can be classified into three possible states : susceptible ( s ) , infected ( i ) , or recovered ( r ) .starting from a single epidemic source node , at each time step , the infected individuals ( i ) would infect their susceptible neighbors ( s ) with probability and then become recovered ( r ) and gain permanent immunity with probability .such process continues until there are no infected individuals in the system . in our simulations , without loss of generality , we set .previous studies have adopted this model to describe diffusion of contagious diseases . for information dissemination ,recent works have shown that , although some information spreading processes are greatly affected by human - related factors , a few information memes showing viral behavior can also spread like infectious diseases .hence the sir model is applicable not only to epidemics but also to those information memes which are not seriously affected by human - related factors . in simulations ,the selection of infection rate could significantly affect the spreading outcomes .below the critical value , the proportion of infected individuals will vanish in the limit of a very large population . for larger than , the epidemic will infect almost all the people in the network . 
in our study , we use relatively small values of to guarantee medium scale infection coverage .otherwise most of the nodes will be infected with high probabilities , making their susceptibility indistinguishable . in table [ tb ] , we report the critical infection rate and adopted value in our simulations .the infection rate is slightly above its corresponding critical value to generate moderate coverage . for facebook network ,we select three nodes in different communities as spreading sources and apply realizations of sir model .the infected probability of each individual is approximated by the frequency of getting infected in these realizations . in fig.[fig1](b - d ) , we display the distribution of infected probability for facebook social network .the black dots represent outbreak origins and the infected probability is indicated by colors .clearly , the epidemic permeates to several communities .moreover , during all the epidemic spreading , there exist a group of individuals retaining relatively high infected probabilities , regardless of the positions of spreading sources . .the evolution of infected proportion is displayed in ( a ) . in ( b ) , we present the infected proportion versus time for rumor spreading model with . ] in order to check the effect of community structure on spreading processes , we compare the evolution of infected proportion in networks with and without community structure . for facebook network ,we rewire the links to destroy the community structure . specifically , we randomly select two links and , and then swap the end nodes to and .this procedure breaks down the communities but preserves the degree distribution . in our simulations, we perform rewire operations .the result of sir model is shown in fig.[fig2](a ) . under same conditions ,the rewired facebook network has a much higher infected proportion , which implies the community structure restricts the spreading processes to some extent .meanwhile , it will take a longer time for facebook network with community structure to achieve a given infected proportion .therefore , the community structure will slow down the speed of sir spreading . to examine whether our finding is applicable to other spreading dynamics , we perform the same analysis on rumor spreading model . in rumor spreading model , each individual can be in three possible states : the spreader ( s ) , ignorant ( i ) , and stifler ( r ) .spreaders represent nodes that are aware of the rumor and are willing to transmit it .ignorant people are individuals unaware of the rumor .stiflers stand for those that already know the rumor but are not willing to spread it anymore . in each time step, the spreaders contact one of their neighbors randomly and turn the ignorant ones into spreaders with probability .if the spreaders encounter spreaders or stiflers , they will turn to stiflers with probability .we also set in our simulations .we obtain similar results in fig.[fig2](b ) .during spreading processes , the exact values of infected probability depends on the choice of spreading sources .but in real - world applications , usually we have no information about the locations of spreading origins , which makes it impractical and intractable to obtain individuals exact infected probability during an epidemic outbreak . in spite of that , there exist a group of nodes exhibiting relatively high infected probability regardless of the position of source ( see fig.[fig1](b - d ) ) . 
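a corresponding sketch for the rumor dynamics just described: each spreader contacts one randomly chosen neighbor per step; an ignorant contact becomes a spreader with the given probability, while contacting a spreader or stifler turns the caller into a stifler. the stifling probability and the network are illustrative assumptions here, since their exact values are not restated in this sketch.

```python
import random
import networkx as nx

def rumor_run(G, source, lam, alpha=1.0):
    """One rumor-spreading realization (ignorant / spreader / stifler).
    lam: probability that an ignorant contact becomes a spreader.
    alpha: probability that a spreader turns into a stifler after contacting
    a spreader or stifler (illustrative default)."""
    state = {v: "I" for v in G}        # I = ignorant, S = spreader, R = stifler
    state[source] = "S"
    spreaders = {source}
    while spreaders:
        for u in list(spreaders):
            neighbors = list(G.neighbors(u))
            if not neighbors:                       # isolated spreader stifles
                state[u] = "R"; spreaders.discard(u); continue
            v = random.choice(neighbors)
            if state[v] == "I":
                if random.random() < lam:
                    state[v] = "S"; spreaders.add(v)
            elif random.random() < alpha:
                state[u] = "R"; spreaders.discard(u)
    return sum(1 for v in G if state[v] != "I")     # ever-informed population

if __name__ == "__main__":
    G = nx.watts_strogatz_graph(1000, 6, 0.1, seed=3)   # illustrative network
    sizes = [rumor_run(G, random.randrange(1000), lam=0.5) for _ in range(100)]
    print(sum(sizes) / len(sizes) / 1000)
```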
in our work, we will focus on these persons who would be infected easily from a randomly chosen spreading origin. we define the source-independent susceptibility of a given individual as the average infected probability over epidemics starting from all the nodes in the network: the average runs over the number of nodes in the network and over the number of spreading realizations originating from each node, which we fix in our research. besides, an indicator variable denotes whether a node gets infected or not during a given realization originating from a given source, so that its average represents the infected probability of the node during spreading from that source. we should note that this definition of susceptibility is different from that in earlier work, where the susceptibility is defined as an intrinsic attribute of individuals, such as immune system, physical condition or learning ability, etc. according to our definition, on average, nodes with higher susceptibility are more likely to be infected during an epidemic wherever the contagion starts. for the sir spreading model, we can calculate the susceptibility with the standard set of equations of sir dynamics. for a network with n nodes, the topology is recorded by the adjacency matrix, whose element $a_{ij}$ equals 1 if nodes $i$ and $j$ are connected, and 0 otherwise. we denote the probability of node $i$ being in the susceptible, infected and recovered state at time $t$ as $p_i^s(t)$, $p_i^i(t)$ and $p_i^r(t)$ respectively. for networks without too many short loops, we assume the neighbors of one node get infected independently. then the evolution of the dynamics follows
\begin{aligned}
\frac{dp_{i}^s(t)}{dt}&=-\Big[1-\prod_{j=1}^n\big(1-\beta_s a_{ij}p_{j}^i(t)p_{i}^s(t)\big)\Big],\\
\frac{dp_{i}^i(t)}{dt}&=\Big[1-\prod_{j=1}^n\big(1-\beta_s a_{ij}p_{j}^i(t)p_{i}^s(t)\big)\Big]-p_{i}^i(t),\\
\frac{dp_{i}^r(t)}{dt}&=p_{i}^i(t).
\end{aligned}
here $a_{ij}$ is the element of the adjacency matrix. in case of a small infection rate, the term in square brackets can be linearized, which makes the equations easy to integrate and yields a theoretical prediction of each node's susceptibility. we compare this prediction with the simulated susceptibility and examine the correlation coefficient between the two in the inset of fig.[fig3](c). it can be seen that theoretical predictions are highly correlated with simulated susceptibility. however, the slope varies as the infection rate increases. well below the critical value, the slope remains close to one, which means the theoretical values are quite accurate. as the infection rate approaches the critical value, the slope decreases dramatically. this implies the two quantities no longer coincide, even though they still have a linear relationship. this phenomenon is also reflected in fig.[fig3](d), in which we display the residual error for different infection rates. as the infection rate grows towards the critical value, the residual error rises abruptly. therefore, our method predicts susceptibility accurately for infection rates smaller than the threshold value, and can still be used to rank susceptibility for rates near or larger than it. although we have calculated the susceptibility with the standard set of equations of spreading dynamics, it is still convenient to discern highly susceptible individuals with topological indicators in practice. here we explore some of the most important topological measures in network theory. the most straightforward and simple one is degree, i.e. the number of connections a node maintains. hubs (high degree nodes) are believed to play significant roles in many dynamical processes. more connections may lead to a higher chance to be infected in an epidemic spreading. however, degree only reflects a node's number of neighbors.
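returning to the susceptibility defined above, it can also be estimated directly by Monte Carlo, reusing the `sir_run` sketch from earlier: loop over all nodes as sources with several realizations per source and count how often each node ends up infected. the number of realizations, the infection rate and the network below are illustrative, much smaller than the numbers quoted in the text.

```python
import networkx as nx
# assumes sir_run(G, source, beta) from the earlier SIR sketch is available

def susceptibility(G, beta, M=20):
    """Estimate each node's susceptibility: the average probability of being
    infected in an epidemic started at a uniformly chosen source node,
    averaged over M realizations per source."""
    nodes = list(G)
    counts = {v: 0 for v in nodes}
    for source in nodes:
        for _ in range(M):
            for v in sir_run(G, source, beta):
                counts[v] += 1
    n_runs = len(nodes) * M
    return {v: counts[v] / n_runs for v in nodes}

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(300, 3, seed=11)        # small illustrative network
    S = susceptibility(G, beta=0.06, M=20)
    top = sorted(S, key=S.get, reverse=True)[:10]        # most susceptible nodes
    print([(v, round(S[v], 3)) for v in top])
```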
in order to check people s global location in networks, we also examine the -shell index by -shell decomposition analysis .the -shell decomposition can be described as follows : firstly we start by recursively removing nodes with degree until there are only nodes with degree left in the network .all the nodes removed in this procedure are assigned with . in a similar way, we iteratively find out higher -shells until all nodes are pruned . in this procedure , each node will be assigned a unique -shell index .nodes belonging to high ( low ) -shells are located in the core ( periphery ) area of the network .another topological measure , betweenness centrality , quantifies the number of shortest paths passing through each node . in social science ,the betweenness centrality of node , denoted by is defined as where is the number of shortest paths between nodes and , and is the number of shortest paths between and which pass through node .consequently , nodes with large betweenness centrality usually hold the vital positions in the shortest pathways between large numbers of nodes pairs .in addition , considering the importance of nodes neighbors , pagerank , which is originally introduced to rank web pages , is proposed .the pagerank of a node in a network can be calculated from where is the element of adjacency matrix and is the jumping probability . in our calculation , as in many previous studies , we set conventionally . to sum up ,we have chosen four typical topological measures proposed from different perspectives : degree , -shell , betweenness centrality and pagerank . and degree ( a , d , g ) , betweenness centrality ( b , e , h ) and pagerank ( c , f , i ) respectively .the infection rate is and for facebook and enron networks .values are indicated by colors , which are shown in color bars . ] first , we compare the performance of -shell and degree for sir model . for individuals with a given combination of -shell and degree , we check the average susceptibility : here is the collection of all nodes with -shell and degree , and is the number of nodes in the set . the susceptibility of node is calculated by averaging over sir realizations in each social network .we display the logarithmic value ( base 10 ) of in fig.[fig4](a ) , ( d ) , ( g ) and ( j ) for facebook , enron , rewired facebook and rewired enron social networks respectively .it is observed that most highly susceptible individuals occupy high -shells . for nodes with a fixed degree , the susceptibility can be either large or small , depending on their -shell values .meanwhile , the individuals located in the same shell have similar susceptibility .this implies , -shell index could better reflect nodes susceptibility than degree .we also perform same analyses for betweenness centrality ( see fig.[fig4](b),(e),(h),(k ) ) and pagerank ( see fig.[fig4](c),(f),(i),(l ) ) .consistently , -shell index outperforms betweenness centrality and pagerank in predicting the average susceptibility of individuals .we should also note that our results are not affected by the community structure .we compare our results in the original and rewired networks in fig.[fig4 ] .the conclusion is not sensitive to the existence of community structure .no matter where the node locates in the network ( between communities or within communities ) , it is susceptible to sir spreading processes as long as it has a high -shell index . for different fraction in sir model. 
for facebook ( a ) , enron ( b ) , rewired facebook ( c ) and rewired enron ( d ) social networks , we display the recognition rate curves for k - shell , degree , betweenness centrality and pagerank .the fraction ranges from 0.05 to 0.5 .the infection rate in sir simulations is and for facebook and enron networks . ]although -shell can better predict the average susceptibility of individuals , since the susceptibility for nodes with same topological measures has fluctuations , it is still desirable to explore their accuracy in locating highly susceptible persons . to this end , we use the recognition rate to quantify the efficiency in identifying individuals of high susceptibility .the recognition rate is defined as where and are the sets of nodes ranking in top percentage by susceptibility and topological measures , and is the number of elements in the set .in fact , recognition rate indicates the proportion of highly susceptible individuals which can be predicted by each measure .the higher is , the more accurate the predictor is . in fig.[fig5 ] , we display the recognition rate ( $ ] ) for facebook , enron , rewired facebook and rewired enron networks . in most cases , -shell index and degree outperform betweenness centrality and pagerank .and performs better than degree in general , except when the fraction is relatively small for facebook and rewired facebook networks .this can be explained by the fact that in these networks , nodes with extremely large degree tend to locate in the core region .therefore , if we pick high - degree nodes , they will also have high .however , the number of such hubs is not too large .as we increase the fraction , could still identify more highly susceptible individuals .this phenomenon depends on the exact topological structure of networks .for instance , for tree - like networks , only contains a few values . under this condition ,degree can clearly better distinguish highly susceptible individuals .whereas , such simple topological structure usually does not appear in large scale real - world social networks . on the contrary , realistic social networks posses rather complex structure , in which hubs are not necessarily located in the dense core .so in real scenario , is still an effective predictor for highly susceptible individuals . and influence of sir modelis presented in ( a ) .the infection rate is set to be .we also display susceptibility versus influence of rumor spreading model in ( b ) . is set to be . ] as pointed out in the supporting information of ref. , for sir model , nodes influence and infected probability are highly correlated .we present the relationship between susceptibility and influence of sir model in fig.[fig6](a ) .influence is defined as the average size of the population infected in a spreading process originating at node .since -shell outperforms other topological measures in identifying influential spreaders for sir model , it should be also effective in locating highly susceptible individuals .this explains the efficacy of -shell for sir model .in fact , for any independent - interaction spreading models in which susceptibility and influence are closely correlated , such as sir and sis model , -shell should be a good topological indicator for highly susceptible people .however , for rumor spreading model , we find the susceptibility and influence have no clear correlation , as shown in fig.[fig6](b ) . 
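the topological indicators discussed above and the recognition rate can be computed with standard networkx routines. the sketch below ranks nodes by k-shell, degree, betweenness and pagerank and measures the overlap of each top-f set with the top-f most susceptible nodes; it reuses the susceptibility estimator sketched earlier. the fraction, network and pagerank damping are illustrative assumptions.

```python
import networkx as nx
# assumes susceptibility(G, beta, M) from the earlier sketch is available

def recognition_rate(scores, suscept, f=0.1):
    """Fraction of the top-f most susceptible nodes that also appear in the
    top-f set ranked by the given topological measure."""
    k = max(1, int(f * len(scores)))
    top_measure = set(sorted(scores, key=scores.get, reverse=True)[:k])
    top_suscept = set(sorted(suscept, key=suscept.get, reverse=True)[:k])
    return len(top_measure & top_suscept) / k

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(300, 3, seed=11)
    S = susceptibility(G, beta=0.06, M=20)
    measures = {
        "k-shell":     nx.core_number(G),            # k-shell index of each node
        "degree":      dict(G.degree()),
        "betweenness": nx.betweenness_centrality(G),
        "pagerank":    nx.pagerank(G, alpha=0.85),   # conventional damping factor
    }
    for name, scores in measures.items():
        print(name, round(recognition_rate(scores, S, f=0.1), 3))
```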
in rumor spreading model ,influential spreaders are not necessarily highly susceptible to spreading processes .it has been found that -shell can not represent nodes spreading capability in rumor spreading model . then it is desirable to find the topological indicator of susceptibility in rumor spreading model .since the figure like fig.[fig4 ] can not distinguish the performance of some measures clearly , we directly display the recognition rate for each topological indicator in fig.[fig7 ] .for all considered networks , degree is capable of recognizing more high - risk people , which differs from the better performance of -shell for sir model .also , the community structure will not affect the prominent performance of degree .whereas , after rewiring links , the recognition rates for all indicators are enhanced .this implies the existence of community structure undermines the effectiveness of topological indicators . for different fraction in rumor spreading model. for facebook ( a ) , enron ( b ) , rewired facebook ( c ) and rewired enron ( d ) social networks , we display the recognition rate curves for k - shell , degree , betweenness centrality and pagerank .the fraction ranges from 0.05 to 0.5 .the infection rate in rumor spreading simulations is for all networks . ] in fact , the different results obtained in sir model and rumor spreading model can be explained by the spreading mechanisms . in sir model ,the superior performance of -shell relies on the cascading behavior of spreading processes .starting from the source , the epidemic will diffuse from person to person contagiously and may eventually spread through the majority of population in a `` viral '' way .usually , a spreading process will last for many generations before it dies out . for nodes located in the core region , not only their neighbors are prone to be infected , but also their neighbors of neighbors have large chances to be infected , and so forth . consequently , -shell can better describe nodes susceptibility in sir model . on the contrary , in rumor spreading model , once a spreader encounters a spreader or stifler neighbor , it will stop spreading the information .this dramatically restrains the cascading behavior .it has been found that nodes with high -shell values serve as `` firewalls '' of rumor spreading , i.e. , once these nodes are infected , they will soon turn into stiflers . under this situation , high -shell nodes will have lower chances to be infected due to their `` firewall '' neighbors . since the cascading behavior is restricted , it is sufficient to consider only one - step neighbors. therefore , degree outperforms -shell in rumor spreading model .in this paper , we investigate individual s susceptibility in spreading processes .we first study the impact of community structure on people s susceptibility . we find that , although the community structure can reduce the infected population and slow down the spreading processes, it will not affect nodes susceptibility significantly .for sir model , we find the topological indicator -shell can better identify highly susceptible individuals , outperforming degree , betweenness centrality and pagerank .this can be explained by the high correlation between nodes susceptibility and influence . 
on the contrary , for the rumor spreading model , nodes' susceptibility and influence show no clear correlation , and degree performs best among the considered topological measures . we explain the different results of the sir and rumor spreading models in terms of their spreading mechanisms . in general , for spreading processes that can trigger large-scale cascading behavior , such as the sir model , nodes in the core region should be more susceptible , since they tend to have highly susceptible neighbors , neighbors of neighbors , and so forth . in contrast , in spreading models that rely on short-range diffusion , degree is a better indicator of susceptibility . our findings indicate that , when identifying highly susceptible individuals , it is necessary to take both topological features and spreading mechanisms into account .
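the short-range mechanism contrasted with sir above can be made concrete with a small simulation of the rumor rule used in this paper , in which a spreader that contacts a spreader or stifler stops spreading . the sketch below reuses the graph g and the imports from the earlier sketch ; the synchronous update , the one-contact-per-step rule and the acceptance probability lam are assumptions , since the exact simulation protocol is not spelled out in the text .

```python
# illustrative rumor dynamics (ignorant / spreader / stifler), not the authors' code.
def rumor_realization(g, lam):
    source = random.choice(list(g.nodes()))
    spreaders, stiflers, informed = {source}, set(), {source}
    while spreaders:
        next_spreaders = set(spreaders)
        for u in spreaders:
            v = random.choice(list(g[u]))          # contact one random neighbour per step
            if v in spreaders or v in stiflers:
                next_spreaders.discard(u)          # u turns into a stifler
                stiflers.add(u)
            elif v not in informed and random.random() < lam:
                next_spreaders.add(v)              # ignorant neighbour accepts the rumor
                informed.add(v)
        spreaders = next_spreaders
    return informed                                # nodes that were ever informed

rumor_chi = {v: 0 for v in g.nodes()}
for _ in range(500):
    for v in rumor_realization(g, lam=0.3):        # lam = 0.3 is an arbitrary test value
        rumor_chi[v] += 1
rumor_chi = {v: c / 500 for v, c in rumor_chi.items()}
```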
|
identifying highly susceptible individuals in spreading processes is of great significance for controlling outbreaks . in this paper , we explore the susceptibility of people in susceptible-infectious-recovered ( sir ) and rumor spreading dynamics . we first study the impact of community structure on people's susceptibility : although community structure can reduce the infected population at the same infection rate , it does not deterministically affect nodes' susceptibility . we find that the susceptibility of individuals is sensitive to the choice of spreading dynamics . for sir spreading , since susceptibility is highly correlated with nodes' influence , the topological indicator k-shell better identifies highly susceptible individuals , outperforming degree , betweenness centrality and pagerank . in contrast , in the rumor spreading model , where nodes' susceptibility and influence show no clear correlation , degree performs best among the considered topological measures . our findings highlight the significance of both topological features and spreading mechanisms in identifying the highly susceptible population . keywords : epidemic spreading , susceptibility , complex networks
|
quantum entanglement is the primary resource for both quantum computation and quantum communication .utilizing these resources allows one to perform information processing with unprecedented high efficiencies by exploiting the fundamental laws of quantum mechanics .specifically , quantum entanglement possesses a variety of intriguing applications within the realm of quantum information processing ; these applications include : quantum teleportation ( qt ) , remote state preparation ( rsp ) , quantum secret sharing , quantum cryptography , _ etc_. both qt and rsp are important methods in quantum communication . with the help of previously - shared entanglements and necessaryclassical communications , qt and rsp can be applied to achieve the transportation of the information encoded by qubits . yet, there exists several subtle differences between qt and rsp , including : classical resource consumptions and the trade - off between classical and quantum resources . typically in standard qt, the transmission of an unknown quantum state consumes 1 ebit and an additional 2 cbits .in contrast , if the state is known to the sender , the resources required for the same action can be reduced to 1 ebit and 1 cbit in rsp .this decrease in resource consumption generally comes at the expense of a lower success probability .furthermore , pati has argued that rsp is able to maintain its low resource consumption while meeting the success probability of qt for preparing special ensemble states ( e.g. , states existing on the equator and great polar circle of the bloch sphere ) .characterized by conservation of resources while maintaining high total success probability ( tsp ) , it is not surprising that rsp has recently received much attention within the literature . to date, many authors have proposed a number of promising methodologies for rsp ; a list of such methods should include : low - entanglement rsp , optimal rsp , oblivious rsp , rsp without oblivious conditions , generalized rsp , faithful rsp , joint rsp ( jrsp ) , multiparty - controlled jrsp , rsp for many - body states and continuous variable rsp in phase space .various rsp proposals utilizing different physical systems have been experimentally demonstrated , as well .for example , peng _ et al ._ investigated an rsp scheme employing nmr techniques , while others have explored the use of spontaneous parametric down - conversion within their rsp schemes ._ experimentally demonstrated a novel preparation method for an arbitrary , pure single - qutrit state via biphoton polarization ; furthermore , they claim that their method requires only two single - qubit projective measurements without any interferometric setup . reported the remote preparation of two - qubit hybrid entangled states , including a family of vector - polarization beams ; the single - photon states are encoded within the photon spin and orbital angular momentum , and the desired state is reconstructed by means of spin - orbit state tomography and transverse polarization tomography .very recently , rdmark _ et al . _ experimentally demonstrated multi - location remote state preparation via multiphoton interferometry .this method allows the sender to remotely prepare a large class of symmetric states ( including single - qubit states , two - qubit bell states and three - qubit , or states ) .there do exist a number of proposals dedicated to addressing the rsp of arbitrary two- and three - qubit entangled pure states .et al . 
_ employed two and three bell - type entanglements as quantum channels for conducting such preparations with total success probabilities ( tsp ) of and , respectively .both brown and states have also been employed for the creation of correlations among participants .resulting from these correlations , the maximal success probability for general two- and three - qubit states is for such strategies .recently , zhan presented two schemes for the remote preparation of two- and three - qubit entangled states with unity success probability via maximally entangled states , _i.e_. , greenberger horne zeilinger ( ghz ) states . in our present work ,the aim is to investigate generalized remote preparation for an arbitrary -qubit entangled state , while only utilizing general entanglement states ( _ i.e_. , non - maximally entangled states ) as quantum channels .we will show that the above scheme is capable of performing faithful rsp with a four - fold or eight - fold increase of the success probability over existing methods , for and , respectively .these enhancements are afforded by the construction of two novel -qubit collective unitary transformations , respective of the number of entangled qubits within the desired state .the organization of this paper is as follows : in the next section , we shall detail our procedure for the rsp of a general -qubit entangled state employing a series of ghz - type entanglements as quantum channels .our results show that the desired state can be faithfully restored at the receiver with a fixed , predictable success probability . in section [ section:3 ], we will illustrate our general procedure through its implementation for the rsp of a two - qubit entangled state .section [ section:4 ] will contain our discussion and comments on the procedure , as well as an evaluation of the classical information cost ( cic ) for the procedure and the total success probability ( tsp ) , which can be expected .we will close with section [ section:5 ] , containing a concise summary .we have also chosen to attach a second illustrative example in the appendix ; this example repeats the general procedure upon a three - qubit entangled state .the method presented in this paper is a general scheme for the remote preparation of an arbitrary state using a generic ( ) number of ghz - type entanglements , which will be used as quantum channels . within this procedure, we will firstly specify an -qubit state , which we desire to be transferred from a sender ( alice ) to a receiver ( bob ) . for simplicity ,we have introduced to note the number of vectors required to form a complete basis set for a set of qubits .furthermore , we introduce as the number of qubits required to form the ghz - entangled quantum channels .the desired state is given by : within the above , constraints are imposed on the coefficients and phase factors : and satisfies the normalized condition ; and ] .note that , alice has knowledge of the desired state , yet bob has no such knowledge .initially , a class of robust and genuine ghz - type entanglements must be constructed and shared between alice and bob .these ghz states for our example are given by : and : without loss of generality , the conditions and are maintained .initially , qubits 1 , 2 , 4 and 5 are held by alice , while qubits 3 and 6 are held by bob . 
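before walking through the five steps below , a small numpy sketch may help fix the objects just introduced : two non-maximally entangled ghz-type channels and the effect of a projective measurement by alice on the joint state . this is an illustration only — the basis vector used here is a placeholder , not the basis specified in the paper ( those expressions are not reproduced in the text above ) , and qubits are numbered from 0 , so alice holds ( 0 , 1 , 3 , 4 ) and bob holds ( 2 , 5 ) .

```python
# illustrative state-vector bookkeeping for the ghz-type channels; not the paper's construction.
import numpy as np

def ket(bits):
    """computational-basis state |b1 b2 ...> as a 1-d complex numpy array."""
    v = np.array([1.0 + 0j])
    for b in bits:
        e = np.zeros(2, dtype=complex)
        e[b] = 1.0
        v = np.kron(v, e)
    return v

def ghz_type(a0, a1, n=3):
    """non-maximally entangled ghz-type state a0|0...0> + a1|1...1>, with |a0|^2 + |a1|^2 = 1."""
    return a0 * ket([0] * n) + a1 * ket([1] * n)

def measure_subset(state, axes, n_qubits, basis_vec):
    """project the qubits in `axes` onto `basis_vec`; return the outcome probability and
    the normalised post-measurement state of the remaining qubits (or None)."""
    psi = state.reshape([2] * n_qubits)
    proj = np.conj(basis_vec).reshape([2] * len(axes))
    amp = np.tensordot(proj, psi, axes=(list(range(len(axes))), list(axes)))
    prob = float(np.sum(np.abs(amp) ** 2))
    if prob == 0.0:
        return 0.0, None
    return prob, (amp / np.sqrt(prob)).reshape(-1)

# two ghz-type channels; with 0-indexing alice holds qubits (0, 1, 3, 4), bob holds (2, 5)
chan = np.kron(ghz_type(0.8, 0.6), ghz_type(0.6, 0.8))
# a placeholder projector for alice's pair (0, 3): (|00> + |11>)/sqrt(2); the remaining
# qubits after the measurement are alice's (1, 4) and bob's (2, 5), used in later steps
bell_like = (ket([0, 0]) + ket([1, 1])) / np.sqrt(2)
p, remaining_state = measure_subset(chan, axes=[0, 3], n_qubits=6, basis_vec=bell_like)
print(p, remaining_state)
```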
in order to accomplish our rsp procedure, we shall implement the steps discussed within section [ section:2 ] : * step 1 : * alice executes one bipartite projective measurement , , on the qubit bipartite ( ) under a set of complete orthogonal basis vectors , where the indices take the place of within the general procedure .this basis , , is written in terms of the computational basis : .the projective measurement is formed in the method previously discussed and can be written as : the result of the projective transformation is : as a result , our quantum channels , constructed from the six - qubit systemic state , can be expressed as : where the non - normalized state can be probed with the probability .* step 2 : * following the measurement , alice executes a corresponding bipartite joint unitary operation , , on qubits 2 and 5 , under the ordering basis : . to be explicit , is taken as a matrix of the form : * step 3 : *alice now performs a measurement on qubits 2 and 5 under the complete set of orthogonal basis vectors : .she then publishes the measurement outcomes via classical channels where the authorized anticipators have already conspired concerning the interpretation of the classical bits .it should now be stated that all authorized anticipators have conspired in advance that cbits correspond to the outcome and cbits ( previously ) correspond to the measuring outcome of qubits 2 and 5 .* step 4 : * bob introduces one auxiliary qubit , , with an initial state of .he then performs a triplet collective unitary transformation , , on qubits 3 , 6 and under the set of ordering basis vectors : , .the transformation matrix is given by : where the operators and are matrices .explicitly , these operators are given by : and : subsequently , bob measures qubit under a set of measuring basis vectors , .if state is detected , his remaining qubits will collapse into the trivial state .if is obtained , the preparation procedure may continue on to the final step .* step 5 : * finally , bob executes an appropriate unitary transformation , ( see table 1 for more details ) , on his qubits 3 and 6 .the exact form of this operator varies with the observed values associated with the measurements denoted by cbits , , and .this operation allows bob to recover at his location .this overall procedure may be conveyed as a quantum circuit and is displayed within figure [ fig:1 ] .. denotes the series of cbits corresponding to measurement outcomes from the sender and denotes an unitary transformation that bob needs to perform on qubits 3 and 6 for the recovery .[ cols="<,<,<,<,<,<,<,<",options="header " , ] from table [ tab.2 ] , it can be directly noted that the tsp of our scheme is capable of both approaching and attaining a value of unity .the intrinsic efficiency achieves , which is much greater than that of previous schemes .due to characteristically high intrinsic efficiency and tsp , our scheme is highly efficient when compared to other existent schemes ; further , our scheme is capable of optimal performance in specific limiting cases . 
_ en passant _ , the intrinsic efficiency of a scheme is defined by and is given by the form : in the above , denotes the number of qubits in the desired states ; denotes the amount of quantum resources consumed in the process and denotes the amount of classical information resources consumed .( 3 ) generalizability .herein , we have designed a general scenario for rsp of arbitrary -qubit states via ghz - class entanglements .the generalization is embodied in several aspects , which we will now note .first , the states that we desire to remotely prepare are arbitrary -qubit ( ) entangled states .second , the quantum channels employed are ghz - class entanglements , which are non - maximally - entangled states .it has been previously shown that non - maximally - entangled states are general cases and are more achievable in real - world laboratory conditions .in contrast , maximally - entangled states are a special case of general entangled states when the state coefficients are restricted to special values . therefore , our scheme is a readily general procedure .additionally , investigated deterministic rsp for both the and the cases ; however , there are some differences between these schemes and the analogous cases within our works : first , concentrated only on the cases when maximally entangled states are taken as channels ; this limit is just a special case of our schemes where the channels are general , yet still allow for the maximally - entangled case .second , we employ a von neumann projective measurement in a set of vectors instead of measurement on the basis of and hadamard transformations . considering these differences , we argue that our scheme is more general than previous works , and we reduce both the number and complexity of operations in the overall procedure .in summary , we have derived a novel strategy for the implementation of rsp of a general -qubit entangled state .this was done by taking advantage of robust ghz - type states acting as quantum channels . 
with the assistance of appropriate local operations and classical communication ,the schemes can be realized with high success probabilities , increased four - fold and eight - fold when compared to previous schemes with and , respectively .remarkably , our schemes feature several nontrivial properties , including a high success probability , reducibility and generalizability .moreover , the tsp of rsp can reach unity when the quantum channels are reduced to maximally - entangled states ; that is , our schemes become deterministic at such a limit .further , we argue that our current rsp proposal might be important for applications in long - distance quantum communication using prospective node - node quantum networks .* appendix * + within this appendix , we shall provide a second illustration of the procedure featured in section [ section:2 ] of the main text .we have provided this example to assist in comparisons between two values of for the rsp of an arbitrary -qubit entangled state .appendix [ general-1 ] will cover the general rsp procedure for a three - qubit entangled state .appendix [ general-2 ] will declare specific states for the measurements and explicitly perform the operations of the general procedure ; .let us attempt the rsp for an arbitrary three - qubit entangled state described by : this state is to be remotely prepared at bob s location , transmitted from alice ; in the above , the coefficients must satisfy the following conditions : , $ ] and .it merits stressing that a nontrivial precondition in standard rsp must be met : the sender has the knowledge of the desired state , yet the receiver does not possess this knowledge .originally , alice and bob are robustly linked by genuine entanglements ( ghz - type entanglements ) described by : and : we assume that the conditions and are satisfied . additionally , it should be noted that qubits 1 , 2 , 4 , 5 , 7 and 8 are held by alice , while qubits 3 , 6 and 9 are held by bob . for the sake of a successful rsp, the procedure can be implemented in a manner consistent with the five - step procedure in the main text : * step 1 : * alice performs a three - qubit projective measurement on the qubit triplet ( ) under a set of complete orthogonal basis vectors : ; where is comprised of this computational basis : , , , , , , , .this measurement takes the form : where the projection operator , , is of the form : thus , the total systemic state , encompassing the quantum channels , reads as : within the above , the states are non - normalized ; are the normalized coefficients associated with the states and the non - normalized state can be obtained with a probability of .* step 2 : * in accordance with the measurement outcome , alice makes an appropriate triplet joint unitary operation , , on her remaining three qubits : 2 , 5 and 8 .this operation is performed under the ordering basis : , , , , , , , . to be explicit , is an matrix and takes one of the following forms : * step 3 : * next , alice performs a measurement on her qubits 2 , 5 and 8 under the set of complete orthogonal basis vector and broadcasts the measurement outcome via a classical channel ( _ i.e_. 
, sending some cbits ) .again , all of the authorized anticipators make an agreement in advance that cbits correspond to the outcome ( within section 2 of main text ) and cbits to the measuring outcome of qubits 2 , 5 and 8 ( within section 2 of main text ) , respectively .* step 4 : * after receiving alice s messages , bob introduces one auxiliary qubit , , with an initial state of .bob then makes quadruplet collective unitary transformation , , on qubits 3 , 6 , 9 and under a set of ordering basis vectors : .the form of this transformation operator is : where and are both matrices .explicitly , these matrices are given by : and : next , bob measures his auxiliary qubit , , under the set of measuring basis vectors : .if state is measured , his remaining qubits will collapse into the trivial state , leading to the failure of the rsp .otherwise , is obtained , and the procedure shall continue forward to the final step . *step 5 : * finally , bob operates with an appropriate unitary transformation , ( see table [ apptab ] for details ) , on qubits 3 , 6 and 9 . for clarity, the quantum circuit for this rsp scheme is provided as figure [ fig:2 ] .[ apptab ] denotes a three - qubit projective measurement on qubits 1 , 4 and 7 under a set of complete orthogonal basis vectors ; denotes alice s appropriate triplet collective unitary transformation on triplet ( 2,5,8 ) ; denotes bob s collective four - qubit unitary transformation on his qubits 3 , 6 , 9 and and denotes bob s appropriate single - qubit unitary transformations on his qubits 3 , 6 and 9 . ]above , we have shown that rsp for an arbitrary three - qubit entangled state can be faithfully performed with a certain success probability . for clarity , here we will take the case of as an example .that is , the state is detected by alice at the beginning .thus , the remaining qubits will be converted into : later , alice makes the operation on her remaining qubits 2 , 5 and 8 . as a consequence, the above state will evolve into : within the above , the normalization parameter is : .incidentally , the state given in equation ( [ equation 1 ] ) can be rewritten as : .\end{aligned}\ ] ] accordingly , alice measures qubits 2 , 5 and 8 under the basis vectors . letting the outcome be , alice broadcasts this outcome to bob via the classical message 001. the subsystem state will then be : bob then introduces the auxiliary qubit , , with an initial state of .he may now implement a local quadruplet collective unitary transformation , , on qubits 3 , 6 , 9 and .thus , bob s system will become : subsequently , he makes a single - qubit projective measurement on qubit under basis vectors .if is measured , his remaining qubits will collapse into the trivial state , and the rsp fails . if is measured , the remaining qubits will transform into the state : .this may readily allow bob to redeem the desired state after the operation : . of course, alice s outcome may be one of the remaining seven states : , , , , , and .therefore , the desired state can be faithfully recovered at bob s location with certainty by similar analysis methods as those above . +* acknowledgements * this work was supported by the program for the national natural science foundation of china ( grant nos .11247256 , 11074002 and 61275119 ) , the fund of anhui provincial natural science foundation ( grant no .1508085qf139 ) , the fund of china scholarship council , and the fund from national laboratory for infrared physics ( grant no . m201307 ) .bennett , c.h . ; brassard , g. ; cr , c. 
; jozsa , r. ; peres , a. ; wootters , w.k . teleporting an unknown quantum state via dual classical and einstein - podolsky - rosen channels . _ phys . rev . lett . _ * 1993 * , _ 70 _ , 1895 - 1899 . zhan , y.b . deterministic remote preparation of arbitrary two- and three - qubit states . _ epl _ * 2012 * , _ 98 _ , 40005 . cabello , a. quantum key distribution in the holevo limit . _ phys . rev . lett . _ * 2000 * , _ 85 _ , 5635 .
|
herein , we present a feasible , general protocol for quantum communication within a network via generalized remote preparation of an arbitrary -qubit entangled state , designed with genuine tripartite greenberger - horne - zeilinger - type entangled resources . during the implementation , we construct novel collective unitary operations that perform the necessary phase transfers during remote state preparation . we distill our implementation into a five-step procedure , which can be used to faithfully recover the desired state during transfer . compared to previously existing schemes , our methodology features a greatly increased success probability : after the consumption of auxiliary qubits and the performance of collective unitary operations , the probability of successful state transfer is increased four-fold and eight-fold for arbitrary two- and three-qubit entanglements , respectively , when compared to other methods in the literature . we conclude the paper with a discussion of the presented scheme , including its success probabilities , reducibility and generalizability . keywords : quantum communication ; remote state preparation ; entangled state ; collective unitary operation ; success probability
|
the ramanujan master theorem ( rmt ) states that if the function is defined through the series expansion with then the following identity holds a proof , albeit not rigorous , of the theorem can be achieved by the use of the umbral methods , namely by setting with the operator defined by : in this way the function can be formally written as an pseudo - exponential function , , and thus the integral in eq .( [ eq2 ] ) can be given in the form and using eq .( [ eq4 ] ) we end up with the result quoted in eq .( [ eq2 ] ) . for further commentssee the concluding section and for a rigorous proof refs . .an immediate consequence of the theorem is the evaluation of the following integral where is the tricomi - bessel function of order which satisfies the conditions of the rmt , with . according to eq .( [ eq2 ] ) , we get the procedure we have just quoted , which traces back to crofton and to other operationalists , is by no means a proof of the theorem but just a guiding tool , which will be proved to be very useful for the forthcoming speculations .as the function is linked to the ordinary bessel functions by the identity the use of the rmt may be of noticeable interest for the evaluation of various integrals appearing in applications . as noted in ref . the umbral formalism may be useful to make some progress towards an extension of the rmt theorem .we observe indeed that if by applying the same procedure as in eqs .( [ eq5 ] ) , we have just as an application of this result , we note that integrals of the type can be written as _pseudo_-gaussian integral of the form \nonumber & = & \sqrt{\frac{\pi}{\hat{c}}}\ , e^{\,b^2/(4\ , \hat{c})}\ , \varphi(0 ) \,=\ , \sqrt{\pi}\ , \sum_{r = 0}^{\infty } \frac{b^{2\ , r}}{4^r\ , r ! } \ , \hat{c}^{-r - 1/2}\ , \varphi(0 ) \\ [ 0.7\baselineskip]\label{eq14 } & = & \sqrt{\pi}\ , \sum_{r = 0}^{\infty } \frac{b^{2\ , r}}{4^r\ , r!}\ , \varphi(-r -1/2).\end{aligned}\ ] ] in the case of , eq . ( [ eq14 ] ) becomes \label{eq15 } & = & \frac{2}{\sqrt{4 + b^2}}.\end{aligned}\ ] ] further examples will be discussed later in this paper , which is devoted to understand a variety of consequence emerging from rmt and from its umbral revisitation . in section 2 we will explore its relevance in the field of special functions , while in section 3 we will discuss some concluding remarks .let us consider the integral ( [ eq2 ] ) in which the function is given by this function satisfies the rmt conditions , since it can be expanded as follows where are two - variable hermite polynomials defined by }\ , \frac{x^{n - 2r}\ , y^{r}}{(n - 2r ) ! \ , r!}.\ ] ] in this case , so that where are no more polynomials but functions , converging for .we have denoted the integral in eq .( [ eq19 ] ) by to stress that it can be considered as a generalization of the euler s gamma function .we get indeed the following properties should be also noted which , once collected together , yield from eqs .( [ eq21])-([eq23 ] ) one obtains \label{eq50 } & = & \frac{1}{\nu - 1 } ( a - 2\ , b\ , \partial_{a})\ , \gamma(\nu ; a , b)\end{aligned}\ ] ] i. e. 
which reduces to for and .let us note that for the following generalization of integral ( [ eq19 ] ) with }\ , \frac{x^{n - m r}\ , y^{r}}{(n - mr)!\ , r!}\ ] ] higher order hermite polynomials , the use of rmt is not helpful .we can not indeed apply the fact that ( note that the is in fact ) , because the analogue of the series ( [ eq20 ] ) does not converge for .the integral in eq .( [ eq24 ] ) is however well defined and it can be used as an independent definition of either the higher order gamma function or of higher hermite functions . we have noted that the straightforward use of the rmt in the case of eq .( [ eq24 ] ) leads to a diverging series . as an alternative, we consider the case where , according to eq .( [ eq20 ] ) , in the case the series is converging .a comment is also in order on the nature of the hermite - like polynomials specified by the generating function in the case of , , they reduce to } \frac{a^{n - 2r}\ , b^{r}}{(n - 2r)!\ , ( r!)^2}.\ ] ] they are the so - called hybrid polynomials , since they share the properties of the laguerre and hermite polynomials .the operational identity can also be used to define them .analogous results , concerning the laguerre polynomials , can be now obtained by considering the integral since the generating function of the two - variable laguerre polynomials writes a straightforward application of the rmt yields the function too can be viewed as a generalization of the euler s gamma function and satisfies the following recurrences thus getting operators of the type appearing in eq .( [ eq30 ] ) are known in the literature as the laguerre derivative . before closing this sectionwe will discuss a further way of employing the rmt .we consider the integral according to the discussion in the introductory section ( see eq .( [ eq14 ] ) ) we can write and , taking into account eq .( [ eq19 ] ) we obtain the closed - form : this is a further proof of the flexibility of the methods associated with the umbral techniques proposed in the paper .all the previous discussion is based on the tacitly assumed conjecture that the following identity holds this is the critical point and the rigorous proof of the ramanajuan theorem is based on the proof of such a lemma , which will be conjectured to be true .we have checked , _ a posteriori _ , the validity of our results by benchmarking them with numerical tests .we can therefore state that , in the case of functions for which we have , the conjecture holds true . in the section 2we have seen how the use of the pseudo - exponential function can be particularly useful to evaluate integrals involving special functions .we will now use a modified form of the rmt to evaluate integrals of the type with \label{eq36 } g(x ) & = & \sum_{s = 0}^{\infty } \frac{(-x^2)^s}{s!}\ , \sigma(s ) , \qquad \sigma(0 ) \,\neq\ , 0 .\end{aligned}\ ] ] an extension of the rmt can be stated as it follows where and . by working out the gaussian integral in eq .( [ eq37 ] ) , we obtain which holds only if the series converges .an example is provided by \nonumber & = & \frac{\sqrt{\pi}}{b } \sum_{r = 0}^{\infty } \frac{a^{2\ , r}\ , b^{-2\ , r}}{4^r\ , r!}\ , \frac{1}{\gamma(2 r + 1)\ , \gamma(-r + 1/2 ) } \\[0.7\baselineskip ] \label{eq38 } & = & \frac{1}{b}\ , j_{0}\left(\frac{a}{2\ , b}\right),\end{aligned}\ ] ] where .we have also used the pseudo - gaussian function as a tool to derive integrals involving the bessel functions . 
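all of the manipulations above , including the generalized gamma functions , rest on the basic master-theorem identity stated at the beginning of the paper . a quick numerical check of that identity is sketched below as an illustration ; mpmath and the test value nu = 0.7 are assumptions of the sketch , not choices made in the paper .

```python
# numerical sanity check of the ramanujan master theorem:
#   if f(x) = sum_k phi(k) (-x)^k / k!, then int_0^inf x^(nu-1) f(x) dx = gamma(nu) phi(-nu).
import mpmath as mp

mp.mp.dps = 30

def rmt_sides(f, phi, nu):
    lhs = mp.quad(lambda x: x**(nu - 1) * f(x), [0, 1, mp.inf])
    rhs = mp.gamma(nu) * phi(-nu)
    return lhs, rhs

nu = mp.mpf("0.7")

# phi(k) = 1  ->  f(x) = exp(-x); the integral is just gamma(nu)
print(rmt_sides(lambda x: mp.exp(-x), lambda s: mp.mpf(1), nu))

# phi(k) = gamma(k + 1)  ->  f(x) = 1/(1 + x); the defining series only converges for
# |x| < 1, yet the theorem reproduces the analytic continuation gamma(nu) gamma(1 - nu)
print(rmt_sides(lambda x: 1 / (1 + x), lambda s: mp.gamma(s + 1), nu))
```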
in the followingwe will see how the fairly wild use of the operator may provide quite a unique tool , to study further properties of these functions .for example , we can establish formulae for the repeated derivatives of the bessel functions by using the same methodology adopted in the case of the ordinary gaussian function .in the case of successive derivatives of the -th order bessel function , we set and , recalling that in the ordinary case we have we get where . by using the explicit form of the two - variable hermite polynomials and by expanding the pseudo - exponential, we find : }\ , \frac{(-1)^r\,(2 x)^{n - 2r}}{(n-2 r)!\ , r!}\ , \sum_{p=0}^{\infty}\ , \frac{(-1)^p\ , x^{2 p}}{p!}\ , \hat{c}^{\,p + n - r}\ , \varphi(0 ) \\ [ 0.7\baselineskip ] \label{eq43 } & = & n ! \ , \sum_{r = 0}^{[n/2]}\ , \frac{(-1)^r\ , 2^{n - 2 r}}{x^r\ , ( n - 2 r)!\ , r ! } \ , j_{n - r}(2\ , x)\end{aligned}\ ] ] and new formulae , like & = & e^{a\ , x } \sum_{r = 0}^{[n/2 ] } \frac{(-1)^{n - r}\ , n!}{r!}\ , \sum_{s = 0}^{n - 2 r } \frac{(-a)^s\ , b^{n - r - s}\ , ( 2\,x)^{n - 2 r - s}}{s!\ , ( n - 2 r - s ) ! } \ , \hat{c}^{n - r - s}\ , e^{-b\ , \hat{c}\ , x^{2}}\ , \varphi(0 ) , \\ [ 0.7\baselineskip ] \label{eq34a } & = & e^{a\ , x } \sum_{r = 0}^{[n/2 ] } \frac{(-1)^{n - r}\ , n!}{r!\ , x^{r } } \sum_{s = 0}^{n - 2 r } \frac{(-a)^s\ , b^{(n - r - s)/2}\ , 2^{n - 2 r - s}}{s!\ , ( n - 2 r - s)!}\ , j_{n - r - s}\left(2\ , \sqrt{b}\ , x\right)\end{aligned}\ ] ] and & = & \sum_{r = 0}^{[n/2 ] } \frac{(-1)^{n - r}\ , 2^{n - 2 r}\ , n!}{(n - 2 r)!\ , r!}\ , x^{-r}\ , \sum_{s = 0}^{n - r } \binom{n - r}{s}\ , \frac{b^{s/2}}{a^{(r+s - n)/2 } } \\[0.7\baselineskip ] \label{eq34b } & & \qquad\qquad\qquad\qquad \times j_{n - r- s}(2\ , \sqrt{a}\ , x)\ , j_{s}(2\ , \sqrt{b}\ , x)\end{aligned}\ ] ] which are quite interesting results , difficult to get by conventional means and absent in available collections .this is just an example and other will be discussed in a forthcoming investigation .most of the considerations developed in this paper have been based on the properties of the pseudo - exponential function which , as for the ordinary case , possesses the semi - group property only if and are commuting quantities . if this is not true and if , for example , and are operators such that \,=\ , \hat{\mathbb{i}}\ , k,\ ] ] where is a -number and the unit operator , we can obtain a convenient disentanglement by using the weyl - identity , namely if , we obtain where is the bessel - wright function .the above remark is a further example of how rich may be the consequences offered by the method we are proposing . in a forthcoming paperwe will see their importance in the study of partial differential equations of evolution type , involving non standard derivative operators .finally we want to mention the relevance of the presented approach for the theory of hankel s integral transform , which can be interpreted as i. e. , according to the point of view developed in this paper , as pseudo - gaussian integrals .further comments on this last topic will be presented elsewhere . + the authors acknowledge support from agence nationale de la recherche ( paris , france ) under program physcomb no .anr-08-blan-0243 - 2 .g. dattoli thanks the university paris xiii for financial support and kind hospitality .g. dattoli , p. l. ottaviani , a. torre , and l. 
vzquez , _ evolution operator equations : integration with algebraic and finite - difference methods .applications to physical problems in classical and quantum mechanics and quantum field theory _ ,nuovo cimento soc .4 ) * 20 * , 1 - 113 ( 1997 ) .
|
we study a number of possible extensions of the ramanujan master theorem , which is formulated here using methods of an umbral nature . we discuss the implications of the procedure for the theory of special functions , such as the derivation of formulae for integrals of products of families of bessel functions and for the successive derivatives of bessel-type functions . we also stress that the proposed procedure allows a unified treatment of many problems appearing in applications , which can formally be reduced to the evaluation of exponential- or gaussian-like integrals .
|
recently , graph has been extensively used to abstract complex systems and interactions in emerging `` big data '' applications , such as social network analysis , world wide web , biological systems and data mining . with the increasing growth in these areas ,petabyte - sized graph datasets are produced for knowledge discovery , which could only be solved by distributed machines ; benchmarks , algorithms and runtime systems for distributed graph have gained much popularity in both academia and industry .one of the most widely used graph - searching algorithms is breadth - first search ( bfs ) , which serves as a building block for a great many graph algorithms such as minimum spanning tree , betweenness centrality , and shortest paths .implementing a distributed bfs with high performance , however , is a challenging task because of its expensive communication cost .generally , algorithms have two kinds of costs : arithmetic and communication . for distributed algorithms , communication often costs significantly more than arithmetic .for example , on a 512-node cluster , the baseline bfs algorithm in graph 500 spends about 70% time on communication during its traversal on a scale - free graph with 8 billion vertices ( figure [ fig : intro - percent ] ) .therefore the most critical task in a distributed bfs algorithm is to minimize its communication .0.45ll approach & category + two - dimensional partitioning & algorithm + bitmap & sparse vector & data structure + pgas with communication coalescing & runtime + _ this work _ : compression & sieve & data structure + several different approaches are proposed to optimize communication in distributed bfs ( table [ table : comparison ] ) : using two - dimensional partitioning of the graph to reduce communication overhead , using bitmap or sparse vector to reduce the size of messages , or applying communication coalescing in pgas implementation to minimize message overhead .these approaches attack the problem from different angles : algorithm , data structure and runtime . in this paper, we will focus on reducing the size of communication messages ( the optimization of data structures ) .the main techniques we use are _ compression _ and _ sieve_. overall , we make the following contributions : * by compressing the messages , we reduce the communication time by and improved its overall performance by compared to the baseline bfs algorithm . * by sieving the messages with a novel distributed directory before compression .we further reduce the communication by and improved the performance by another , achieving a total reduction in communication and performance improvement over the baseline implementation . *we implement and analyse several compression methods for bitmap compression .our experiment shows the space - time tradeoff of different compression methods . in the next sectionwe will introduce the problem with an example .section [ sec : baseline - bfs ] will describe the baseline bfs algorithm .section [ sec : spmv - bfs ] and section [ sec : dir - bfs ] will describe our bfs algorithms with compression and sieve .the analysis and experiment results are presented in section [ sec : ana ] and section [ sec : exp ] , followed by related works and concluding remarks in section [ sec : related ] and section [ sec : cls ] .is represented as a vector.,scaledwidth=48.0% ] we start with an example illustrating the breadth - first search ( bfs ) algorithm . 
given a graph and a distinguished source vertex , breadth - first search systematically explores the edges of to `` discover '' every vertex that is reachable from . in figure[ fig : bfs - example - all ] , the source vertex is painted black when the algorithm begins .then it explores its adjacent vertices : , and , and paints them black .the exploration goes on until all vertices are visited .vertices discovered the first time is painted black ; discovered vertices are painted solid grey ; vertices to be discovered are painted grey with black edge .the frontier of the graph is the set of the vertices which are discovered the first time . for distributed bfs ,the vertices as well as the frontier are divided among processors : , , , . andthe global information of the frontier can only be retrieved through communication .for in this example , it only `` owns '' the information of whether vertex and are visited .if it want to identify whether vertex is visited , it needs to ask this information from .the common way to update the global is to use mpi collective communication like allgather at the end of each level .the most critical task for distributed bfs is to reduce the size of the frontier , which directly influence the size of the messages communicated . to reduce it , bitmap or sparse vector is commonly used to represent the frontier .bitmap use a vector of size to represent the frontier , each bit of the vector representing a vertex : means it is included in the frontier , means it is not .sparse vector includes the frontier vertices only , each is represented using 64 bits . for graphs of diameter ,bitmap is generally better when .table [ table : sparsity ] provides an example of the size of the frontier represented as bitmap or sparse vector , for a scale - free graph of 1.6 billion vertices . in this case , for , the total size of messages using bitmap is 1.4 gb , much less than the sparse vector s 12.4 gb .0.45lzzz level & # vertices & bitmap & sparse vector + 1 & 2 & 196.9 mb & 16b + 2 & 20842 & 196.9 mb & 162.8 kb + 3 & 235274348 & 196.9 mb & 2.0 gb + 4 & 1377666413 & 196.9 mb & 10.2 gb + 5 & 38582585 & 196.9 mb & 294.4 mb + 6 & 88639 & 196.9 mb & 692.4 kb + 7 & 211 & 196.9 mb & 1.69 kb + total & 1651633040 & 1.4 gb & 12.4 gb + despite the huge space saved by bitmap , there remains two problems : * the problem of bitmap is that it need to contain _ all _ the vertices to keep the position information of each vertex . for the above example , to represent 2 vertices at level 1 ,the size of the bitmap frontier is still 196.9 mb , where most of the elements are zero .fortunately , these zeros can be condensed ._ we leverage lossless compression to reduce the size of the bitmap_. * the other problem is the expensive broadcast cost of the allgather collective communication , which broadcasts _ all _ vertices to _ all _ processors . 
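the spmv formulation just described can be prototyped in a few lines with scipy sparse matrices . the sketch below is an illustration only : it is shared-memory with no mpi , it computes levels rather than parents ( the ( select , max)-semiring bookkeeping is omitted ) , and the toy graph is a stand-in for the example in the figure , whose vertex labels are not reproduced here .

```python
# one bfs level as a sparse matrix-vector product: next = (a @ frontier) & ~visited,
# with the frontier and the visited set kept as boolean (bitmap-like) vectors.
import numpy as np
import scipy.sparse as sp

def bfs_levels(a, source):
    """a: n-by-n adjacency matrix in csr form; returns the bfs level of every vertex (-1 if unreached)."""
    n = a.shape[0]
    level = np.full(n, -1)
    frontier = np.zeros(n, dtype=bool)
    visited = np.zeros(n, dtype=bool)
    frontier[source] = visited[source] = True
    depth = 0
    while frontier.any():
        level[frontier] = depth
        reached = (a @ frontier) > 0          # spmv: vertices adjacent to the frontier
        frontier = reached & ~visited         # keep only newly discovered vertices
        visited |= frontier
        depth += 1
    return level

# small undirected toy graph (vertex labels are illustrative)
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 5), (4, 6), (5, 6)]
rows, cols = zip(*(edges + [(j, i) for i, j in edges]))
a = sp.csr_matrix((np.ones(len(rows), dtype=np.int8), (rows, cols)), shape=(7, 7))
print(bfs_levels(a, source=0))                # -> [0 1 1 1 2 2 3]
```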
in fact , each processor needs only a small fractions of the frontier .for example , in figure [ fig : bfs - example - all ] ( b ) , does not need to send the information of vertex to , because vertex does not has a direct edge connecting to the vertices of ._ we propose a distributed directory to sieve the bitmap vectors before compression , further reducing its message size_.let denote the adjacency matrix of the graph , denote the frontier at level , and denote the visited information of previous frontiers .the exploration of level in bfs is algebraically equivalent to a sparse matrix vector multiplication ( spmv ) : ( we will omit the transpose and assume that the input is pre - transposed for the rest of this section ) .for example , traversing from level one ( figure [ fig : bfs - example - all ] ( a ) ) to level two ( figure [ fig : bfs - example - all ] ( b ) ) is equivalent to the linear algebra below . syntax denotes the matrix - vector multiplication operation , denotes element - wise multiplication , , and overline represents the complement operation . in other words , for and for . in figure [fig : bfs - example - all ] , bfs starts from vertex , thus .if we use a vector of size to represent the corresponding frontier , for example , . this algorithm becomes deterministic with the use of ( select , max)-semiring , because the parent is always chose to be the vertex with the highest label .[ t ] [ alg : spmv - bfs ] algorithm [ alg : spmv - bfs ] describes the baseline bfs .each loop block ( starting in line 3 ) performs a single level traversal . represents the current frontier , which is initialized as an empty bitmap ; is an bitmap that holds the temporary parent information for that iteration only ; is the visited information of previous frontiers .the computational step ( line 4,5,6 ) can be efficiently parallelized with multithreading . for spmv operation in line 4, the matrix data is naturally splitted into pieces for multithreading . at the end of each loop , allgather updates with mpi collective communication .for large graphs , the communication time of distributed bfs algorithms can take as much as seventy percent of the total execution time . to reduce it, we need to reduce the size of the messages .one simple way is to use lossless compression , trading computation for bandwidth .[ t ] [ alg : bfs - compress ] algorithm [ alg : bfs - compress ] describe the distributed bfs with compression .the difference between algorithm [ alg : bfs - compress ] and algorithm [ alg : spmv - bfs ] are line 7 and 9 . at line 7 the frontier vector is first compressed into before communication . at line9 is uncompressed back to after communication. .a wah compressed bitmap .[ cols= " < , < " , ] figure [ fig : time - profiling ] is the time breakdown of the algorithms in figure [ fig : weak - scaling ] : `` traversing '' time is the time spent on local computing ; `` reducing '' time is the time spent on a mpi reduction operation to get the total vertex count of the frontier ; `` communicatoin '' time is the time spent on communication ; `` compression & sieve '' time is the time spent on compression and sieve .for all three algorithms , as the number of nodes increases , `` communication '' times increase exponentially . 
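returning for a moment to the compression step of algorithm [ alg : bfs - compress ] before the timing discussion continues : the word-aligned hybrid ( wah ) idea can be illustrated with a toy codec in which 31-bit groups of the frontier bitmap are stored either as literal words or as run-length fill words . this is a simplified sketch , not the exact wah format or the implementation used in the experiments .

```python
# toy wah-style bitmap codec: msb = 0 marks a literal 31-bit group; msb = 1 marks a fill
# word whose next bit is the fill value and whose low 30 bits are the run length in groups.
def wah_encode(bits):
    """bits: sequence of 0/1; padded with zeros to a multiple of 31 before encoding."""
    bits = list(bits) + [0] * (-len(bits) % 31)
    groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                 # run of identical groups -> fill word
            fill, run = g[0], 1
            while (i + run < len(groups) and run < (1 << 30) - 1
                   and all(b == fill for b in groups[i + run])):
                run += 1
            words.append((1 << 31) | (fill << 30) | run)
            i += run
        else:                                          # mixed group -> literal word
            val = 0
            for b in g:
                val = (val << 1) | b
            words.append(val)
            i += 1
    return words

def wah_decode(words):
    bits = []
    for w in words:
        if w >> 31:                                    # fill word
            fill, run = (w >> 30) & 1, w & ((1 << 30) - 1)
            bits.extend([fill] * (31 * run))
        else:                                          # literal word, bit 30 down to bit 0
            bits.extend([(w >> k) & 1 for k in range(30, -1, -1)])
    return bits

frontier = [0] * 10_000
frontier[12] = frontier[4096] = 1                      # a very sparse frontier bitmap
enc = wah_encode(frontier)
assert wah_decode(enc)[:len(frontier)] == frontier
print(len(enc), "32-bit words instead of", (len(frontier) + 30) // 31)
```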
for _bit _ , it accounts for as much as of the total time for 512 node .the `` reducing ''times also increases because the imbalance of a graph become more severe as the graph becomes larger ; the local `` traversing '' times remain more or less the same because the problem size per node is fixed . at 512 node , _ wah _ reduces the `` communication '' time by compared to _ bit _ ; _ dir - wah _ reduces the `` communication '' time by another compared to _ wah _ , achieving a total reduction compared to _ bit _ , from 18.6 seconds to 3.9 seconds .on one hand , the `` compression & sieve '' time of _ wah _ ( only compression time is counted for _ wah _ ) at 512 nodes is less than of the total run time and not shown in the figure .this means the benefit of compression is at very little cost . on the other hand , the time of `` compression & sieve '' in _ dir - wah _ , the computing time traded for bandwidth accounts for of the total .this is because algorithm [ alg : bfs - dir - compress ] ( line 9 ) needs to copy the frontier for each process before sieve .this copying time is expensive because it is in direct proportion to the number of processes .overall , comparing _ dir - wah _ to _ wah _ ( 512 nodes ) , sieve costs about 1.3 seconds but saves 5.0 seconds in communication the saving is worth the cost .figure [ fig : diff - scales ] plots the performance of different bfs algorithms at different scales .the experiment runs on 512 nodes .we can learn from this plot that the compression and sieve method favours larger messages .the size of messages will affect the results : at scale 26 , _ dir - wah _ , _ wah _ and _ bit _ need to exchange 8 mb bitmap globally using mpi collective communications ; at scale 33 , 1 gb . _ dir - wah _ is the slowest when the scale is small , but it gradually catches up and surpasses all other algorithms when scale gets bigger . as mentioned in section [ sec : spmv - bfs ] , different methods could be used for compression .we did not implement all of them but choose two , zlib library and wah , based on following reasons : zlib library is famous for good compression on a wide variety of data and provides different compression levels ; wah is dedicated to bitmap compression , simpler than plwah and faster than bbc .we use zlib 1.2.6 , and three different compression levels : best compression ( _ zlb - bc _ ) , best speed ( _ zlb - bs _ ) and default ( _ zlb - df _ ) .the results are plotted in figure [ fig : zlib - weak - scaling ] and figure [ fig : zlib - time - profiling ] .figure [ fig : zlib - weak - scaling ] shows the weak scaling performance of bfs algorithms with different compression and sieve methods .bfs with zlib best compression _ zlb - bc _ is the slowest . with 512 nodes ,_ dir - wah _ provides the best performance , followed by _ zlb - bs _ ( 69.9% of _ dir - wah _ ) , _ dir - zlb - bs _ ( 66.7% ) , _ zlb - bc _( 53.5% ) , and _ dir - zlb - df _ ( 39.7% ) respectivelly .figure [ fig : zlib - time - profiling ] shows the time breakdown of these algorithms . at scale 33 with 512 nodes , _ dir - zlb - df _`` communication '' time is the smallest , of _ dir - wah _ , followed by _ dir - zlb - bs _ ( ) , _ dir - zlb - bc _ ( ) , _ zlb - bs _ ( ) and _ zlb - bc _ ( ) . although _ dir - zlb - df _ and _ dir - zlb - bs_ s communication times are less than _ dir - wah _ , their `` compression and sieve '' times are and of _ dir - wah_. so the overall performance of _ dir - zlb - df _ and _ dir - zlb - bs _ are worse than _ dir - wah_. 
for all three compression levels in zlib we tested , default method , not the best compression method , provides the best compression ratio .in fact , the zlib best compression method is not suited for bitmap compression : it is not only the slowest , but also provides the worst compression ratio .several different approaches are proposed to reduce the communication in distributed bfs .yoo et al . run distributed bfs on ibm bluegene / l with 32,768 nodes .its high scalability is achieved through a set of memory and communication optimizations , including a two - dimensional partitioning of the graph to reduce communication overhead .bulu and madduri improved yoo et al.s work by adding hybrid mpi / openmp programming to optimize computation on state - of - the - art multicore processors , and managed to run distributed bfs on a 40,000-core machine .the method of two - dimensional partitioning reduces the number of processes involved in collective communications .our algorithm reduces the communication overhead in a different way : minimizing the size of messages with compression and sieve .moreover , these two optimizations could be combined together to further reduce the communication cost in distributed bfs .a preliminary result is presented in section [ sec : cls ] to demonstrate its potential .beamer et al . use a hybrid top - down and bottom - up approach that dramatically reduces the number of edges examined .the sample code in graph 500 use bitmap ( bitset array ) in communication , reducing its message size .cong et al . applying communication coalescing in pgas implementation to minimize message overhead .benchmarks , algorithms and runtime systems for graph algorithms have gained much popularity in both academia and industry .earlier works on cray xmt / mta and ibm cyclops-64 prove that both massive threads and fine - grained data synchronization improve bfs performance .bader and madduri designed a fine - grained parallel bfs which utilizes the support for hardware threading and synchronization provided by mta-2 , and ensures that the graph traversal is load - balanced to run on thousands of hardware threads .mizell and maschhoff discussed an improvement on cray xmt . using massive number of threads to hide latency has long be employed in these specialized multi - threaded machines . with the recent progress of multi - core and smt, this technique can be popularized to more commodity users .both core - level parallelism and memory - level parallelism are exploited by agarwal et al . for optimized parallel bfs on intel nehalem ep and ex processors .they achieved performances comparable to special purpose hardwares like cray xmt and cray mta-2 and first identified the capability of commodity multi - core systems for parallel bfs algorithms .scarpazza et al . use an asynchronous algorithm to optimize communication between spe and spu for running bfs on sti cell processors .leiserson and schardl use cilk++ runtime model to implement parallel bfs .cong et al . 
present a fast pgas implementation of distributed graph algorithms .another trend is to use gpu for parallel bfs , for they provide massively parallel hardware threads , and are more cost - effective than the specialized hardwares .generally , gpus are good at regular problems with contiguous memory accesses .the challenge of designing an effective bfs algorithm on gpu is to solve the imbalance between threads and to hide the cost of data transfer between cpu and gpu .there are several works working on this direction .the main purpose of this paper is to reduce the communication cost in distributed breadth - first search ( bfs ) , which is the bottleneck of the algorithm .we found two problems in previous distributed bfs algorithms : first , their message formats are not condensed enough ; second , broadcasting messages causes waste .we propose to reduce the message size by compressing and sieving . by compressing the messages, we reduce the communication time by . by sieving the messages with a distributed directory before compression , we reduce the communication time by another , achieving a total reduction in communication time and performance improvement over the baseline implementation . for future works, we would like to combine our optimization of message size with other methods such as two - dimensional partitioning and hybrid top - down and bottom - up algorithm .the potential is clear .a preliminary optimization of the distributed bfs algorithm in combinational blas library , compressing the sparse vector using zlib library , reduces the communication time by and increases overall performance by . by using compressed bitmap and adding sieve, we expect to further improve its performance .d. a. bader , `` petascale computing for large - scale graph problems , '' in _ proceedings of the 7th international conference on parallel processing and applied mathematics _ppam07.1em plus 0.5em minus 0.4emberlin , heidelberg : springer - verlag , 2008 , pp .166169 .a. yoo , e. chow , k. henderson , w. mclendon , b. hendrickson , and u. catalyurek , `` a scalable distributed parallel breadth - first search algorithm on bluegene / l , '' in _ proceedings of the 2005 acm / ieee conference on supercomputing _ , ser . sc 05.1em plus 0.5em minus 0.4em washington , dc , usa : ieee computer society , 2005 , pp .25. a. bulu and k. madduri , `` parallel breadth - first search on distributed memory systems , '' in _ proceedings of 2011 international conference for high performance computing , networking , storage and analysis _ ,sc 11.1em plus 0.5em minus 0.4emnew york , ny , usa : acm , 2011 , pp . 65:165:12 .g. malewicz , m. h. austern , a. j. bik , j. c. dehnert , i. horn , n. leiser , and g. czajkowski , `` pregel : a system for large - scale graph processing - `` abstract '' , '' in _ proceedings of the 28th acm symposium on principles of distributed computing _ , ser .podc 09.1em plus 0.5em minus 0.4em new york , ny , usa : acm , 2009 , pp .66 .a. chan , f. dehne , and r. taylor , `` cgmgraph / cgmlib : implementing and testing cgm graph algorithms on pc clusters and shared memory machines , '' in _ international journal of high performance computing applications_.1em plus 0.5em minus 0.4emspringer , 2005 .g. cong , g. almasi , and v. 
saraswat , `` fast pgas implementation of distributed graph algorithms , '' in _ proceedings of the 2010 acm / ieee international conference for high performance computing , networking , storage and analysis _sc 10.1em plus 0.5em minus 0.4emwashington , dc , usa : ieee computer society , 2010 , pp .111 .g. antoshenkov , `` byte - aligned bitmap compression , '' in _ proceedings of the conference on data compression _ , ser .dcc 95.1em plus 0.5em minus 0.4emwashington , dc , usa : ieee computer society , 1995 , pp .476. f. delige and t. b. pedersen , `` position list word aligned hybrid : optimizing space and performance for compressed bitmaps , '' in _ proceedings of the 13th international conference on extending database technology _ , ser .edbt 10.1em plus 0.5em minus 0.4emnew york , ny , usa : acm , 2010 , pp .228239 .r. thakur , `` improving the performance of collective operations in mpich , '' in _ recent advances in parallel virtual machine and message passing interface .number 2840 in lncs , springer verlag ( 2003 ) 10th european pvm / mpi user s group meeting_.1em plus 0.5em minus 0.4emspringer verlag , 2003 , pp .257267 .j. leskovec , d. chakrabarti , j. kleinberg , and c. faloutsos , `` realistic , mathematically tractable graph generation and evolution , using kronecker multiplication , '' in _ knowledge discovery in databases : pkdd 2005 _ , ser .lecture notes in computer science , a. jorge , l. torgo , p. brazdil , r. camacho , and j. gama , eds.1em plus 0.5em minus 0.4emspringer berlin / heidelberg , 2005 , vol . 3721 , pp .133145 .s. beamer , k. asanovi , and d. a. patterson , `` searching for a parent instead of fighting over children : a fast breadth - first search implementation for graph500 , '' eecs department , university of california , berkeley , tech .ucb / eecs-2011 - 117 , nov 2011 .[ online ] .available : http://www.eecs.berkeley.edu/pubs/techrpts/2011/eecs-2011-117.html d. a. bader and k. madduri , `` designing multithreaded algorithms for breadth - first search and st - connectivity on the cray mta-2 , '' in _ proceedings of the 2006 international conference on parallel processing _ , ser .icpp 06.1em plus 0.5em minus 0.4em washington , dc , usa : ieee computer society , 2006 , pp. 523530 .d. mizell and k. maschhoff , `` early experiences with large - scale cray xmt systems , '' in _ proceedings of the 2009 ieee international symposium on parallel&distributed processing_.1em plus 0.5em minus 0.4em washington , dc , usa : ieee computer society , 2009 , ppv. agarwal , f. petrini , d. pasetto , and d. a. bader , `` scalable graph exploration on multicore processors , '' in _ proceedings of the 2010 acm / ieee international conference for high performance computing , networking , storage and analysis _sc 10.1em plus 0.5em minus 0.4em washington , dc , usa : ieee computer society , 2010 , pp .111 .c. e. leiserson and t. b. schardl , `` a work - efficient parallel breadth - first search algorithm ( or how to cope with the nondeterminism of reducers ) , '' in _ proceedings of the 22nd acm symposium on parallelism in algorithms and architectures _spaa 10.1em plus 0.5em minus 0.4emnew york , ny , usa : acm , 2010 , pp . 303314 .s. hong , s. k. kim , t. oguntebi , and k. olukotun , `` accelerating cuda graph algorithms at maximum warp , '' in _ proceedings of the 16th acm symposium on principles and practice of parallel programming _ , ser .ppopp 11 .1em plus 0.5em minus 0.4emnew york , ny , usa : acm , 2011 , pp . 267276 .p. harish and p. j. 
narayanan , `` accelerating large graph algorithms on the gpu using cuda , '' in _ proceedings of the 14th international conference on high performance computing _ . berlin , heidelberg : springer - verlag , 2007 , pp . 197 - 208 .
l. luo , m. wong , and w .- m . hwu , `` an effective gpu implementation of breadth - first search , '' in _ proceedings of the 47th design automation conference _ , ser . dac 10 . new york , ny , usa : acm , 2010 , pp .
|
for parallel breadth first search ( bfs ) algorithm on large - scale distributed memory systems , communication often costs significantly more than arithmetic and limits the scalability of the algorithm . in this paper we sufficiently reduce the communication cost in distributed bfs by compressing and sieving the messages . first , we leverage a bitmap compression algorithm to reduce the size of messages before communication . second , we propose a novel distributed directory algorithm , cross directory , to sieve the redundant data in messages . experiments on a 6,144-core smp cluster show our algorithm outperforms the baseline implementation in graph500 by 2.2 times , reduces its communication time by 79.0% , and achieves a performance rate of 12.1 gteps ( billion edge visits per second ) .
|
interferometric observations have gained much popularity in the study of the cosmic microwave background ( cmb ) anisotropy ( white et al .1999 ; carlstrom et al . 2002 ; padin et al .2002 ; leitch et al .2002 ; scaife et al .2008 ) , due to their advantage in stability and phase preserving characteristic via the heterodyne scheme for correlation and polarization observations .the cross correlations used in an interferometric array can effectively suppress many systematics . to achieve high brightness sensitivity , broadband low noise receivers and matching correlators are the two most important components for a continuum interferometer .the advance in millimeter and microwave detector technologies in recent years has produced very broadband components .low - noise amplifiers ( lnas ) with bandwidths of 10 ghz or more are easily accessible with noise performance comparable to bolometric direct detectors , e.g. acbar ( runyan et al . 2003 ) with bandwidths of 30 - 40 ghz and an equivalent noise temperature or rj temperature loading between 40 k and 70 k. the cmb interferometers are therefore very competitive at millimeter wavelengths as compared to bolometers which are usually preferred at shorter wavelengths . the yuan - tseh lee array for microwave background anisotropy ( amiba ) ( lo et al .2001 ; ho et al .2009 ) is a radio interferometric array for the study of the cmb at 3 mm wavelength .amiba detects the minute deviations of the nearly constant cmb temperature over the sky , and can study the spatial variation of this temperature fluctuation. in particular , amiba is imaging galaxy clusters via the sunyaev - zeldovich effect ( sze ) ( sunyaev and zeldovich 1970 , 1972 ; birkinshaw 1999 ) for the first time at 3 mm wavelength .the array presently consists of 13 elements of 1.2 m reflectors distributed on a 6-m diameter platform .the receiver attached to each reflector is dual - polarization , and equipped with cryogenically cooled lnas operating in the 84 to 104 ghz frequency range .the intermediate frequency ( if ) is from 2 to 18 ghz , and is matched with a wide - band analog correlator .the strong interest in cmb observations has motivated the development of very broadband correlators with a limited spectral resolution . utilizing a filter bank and complex correlators ,padin et al ( 2002 ) built an analog correlator with 10 ghz bandwidth .harris and zmuidzinas ( 2001 ) worked on a different approach toward broadband operations by adopting an analog lag correlation scheme to build an auto correlator with a 4 ghz bandwidth .the amiba correlator is also based on the concept of the lag correlator , and is designed to correlate the entire 16 ghz bandwidth .recently holler et al .( 2007 ) also reported a lag correlator for ami with a 6 ghz bandwidth . in principle, analog correlators can achieve better sensitivity over their digital counterparts due to the absence of the digitization process .analog multipliers can easily achieve high sensitivity over multi - octave frequency ranges . 
in comparisonthe use of analog - to - digital converters ( adcs ) is expensive and has limited bandwidth .thus the analog approach is preferred for observations which require high sensitivity but modest spectral resolution .however , the major difficulty in making a broadband system lies in the distribution and processing of the multi - octave signals coming from the antennas .broadband impedance matching between components presents a major technical challenge for integrating a large scale microwave system . due to the non - linear responses of an analog system , applying appropriate drive power levels and modulation / demodulation techniques to minimize the effects of the spurious terms are also very important .the amiba correlator is our attempt to address these technical issues .the 4-lag analog correlator has a nominal 16 ghz bandwidth .this is currently the only correlator in operation with an effective correlation bandwidth of around 10 ghz .the scientific goals and design philosophy of amiba are presented in ho et al .a broader description of the amiba detection system is given in chen et al .( 2009 ) while a detailed description of the amiba hexapod mount can be found in koch et al .( 2009 ) . from 2007 to 2008, observations were carried out with the 7-element array equipped with 60 cm dishes ( koch et al .details on the observations and analysis of six massive galaxy clusters are presented in wu et al .subaru weak lensing data of 4 galaxy clusters were analyzed with the sze data to derive the baryon fraction ( umetsu et al .this paper describes in detail the instrumental design and testing of the amiba correlator .we provide a system overview in section 2 and major components of the system are described in sections 3 to 7 . in particular , we discuss various aspects of the correlator module in section 4 .in section 8 , we outline the testing and data processing .finally , a conclusion on the system is given in section 9 .as shown in fig .1 , the amiba correlator consists of 5 parts , namely the if distribution , correlation , readout , data acquisition , and control . for the 13-element array , there would be a total of correlations between the 2 polarizations of each receiver in order to obtain the 4 stokes parameters for polarization measurements . in amiba correlator, the number of correlator modules required is reduced by a factor of two by introducing 2-way switches which choose between the two parallel - hand products , xx and yy , or the cross polarization products , xy and yx .for example , observations of the cmb spatial intensity variations require measurement of the xx and yy products only .the network to distribute the 2 to 18 ghz if from the receivers to the correlators is implemented by cascading 3 sections of 4-way power dividers .the broadband analog lag correlator modules were designed using double - balanced diode mixers as the multipliers .the subsequent amplifier provides low - pass filtering with a 3 db cut - off frequency at around 10 khz . in the readout electronics , we use voltage - to - frequency converters ( vfcs ) andcounters to provide integrating analogue to digital conversion of the multiplier outputs .this style of adc has a slow response but a high dynamic range , making them well suited to this application .the data acquisition electronics serve as a memory buffer between the readout electronics and the correlator control computer ( ccc ) . 
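as a rough check of the scale involved , the correlation counting implied above can be reproduced directly ; the totals below are computed from the stated array size and switch scheme rather than quoted from the text .

    # correlation counting for a 13-element, dual-polarization array.
    # assumptions: every antenna pair is correlated, 4 polarization products
    # (xx, yy, xy, yx) per pair, and each 2-way switch lets one module
    # time-share two of those products.
    n_ant = 13
    n_pairs = n_ant * (n_ant - 1) // 2      # 78 baselines
    n_products = 4 * n_pairs                # 312 polarization products
    n_modules = n_products // 2             # 156 correlator modules with switches
    print(n_pairs, n_products, n_modules)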
in response to timed event signals from the ccc, it also generates control signals for phase switching , demodulation , and the readout process .the ccc coordinates all the activities in the correlator , as well as archiving the data .equipped with 3 special purpose cards from the australia telescope national facility ( atnf ) , the ccc provides timing , data interface , and event signals for correlator operations .further processing of the archived data is performed offline .this includes the processing required to transform the four lag domain measurements from each lag correlator into two complex channels in the frequency domain . in a digital lag correlator ,the conversion from the lag domain to the frequency domain is a simple discrete fourier transform . in the analogue lag correlator , variations in gain and bandpass of the individual multipliers complicate the transform process considerably .calibration with strong point sources such as planets becomes necessary . by observing a strong point source ,the expected signal from each frequency bin of each baseline can be calculated .correction to the transformation can then be extracted and applied to the following observations .a brief description of the relevant data processing and demonstration are presented in section 8 .an if from 1 ghz to 21 ghz was proposed in the beginning of the project ( lo et al .2001 ) , but was subsequently changed to 2 ghz to 18 ghz after taking into consideration the available commercial microwave components , the complexity of the multi - octave circuit design , and the physical sizes of components .the if signal distribution uses a mixture of off - the - shelf and specially designed microwave components . after passing through the rf lnas and sub - harmonically pumped mixers ( shms ) , which have a conversion loss of around 12 db , the faint signals from the receiver inputs reach a level of -45 to -40 dbm . at this point , the noise figure of the subsequent components has a negligible contribution to the overall noise temperature , as long as the if power level can be kept well above the thermal noise of about - 71 dbm .multiple broadband if amplifiers are used along the if chain to compensate for the divider loss .attenuators are inserted along the if paths to adjust the if power level in order not to saturate the amplifiers , and also to improve the matching between components .the input power level to the correlator modules is chosen to optimize the output signal - to - noise ( s / n ) ratio .the diagram of if power level settings from receiver to correlator is shown in fig .2 . another issue is to keep the if power stable over a period greater than the integration time of each data point .it was found that most of the gain variations can be attributed to ambient temperature changes ( nishioka et al .a ventilation system using fans and a feedback proportional - integral - derivative ( pid ) control was installed to minimize the temperature variations within the electronic cabinets ( jiang et al .2009 ) .the following sections provide a description of each if section .schematics of the amiba receiver and correlator if sections are shown in figs . 3 and 4 respectively for reference . 
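the -71 dbm thermal noise figure quoted above is essentially the k t b floor over the full if band ; a quick sketch ( the 290 k reference temperature and the 16 ghz bandwidth used below are assumptions for the estimate , not values taken from this section ) :

    import math

    k_b = 1.380649e-23       # boltzmann constant [j / k]
    t_ref = 290.0            # assumed reference temperature [k]
    bandwidth = 16e9         # nominal if bandwidth [hz]

    p_watt = k_b * t_ref * bandwidth
    p_dbm = 10.0 * math.log10(p_watt / 1e-3)
    print("thermal noise floor ~ %.1f dbm" % p_dbm)   # about -72 dbm, consistent with the -71 dbm above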
in the section , an if amplifier with a gain of 34 db and output 1db compression point ( p1db ) of 20 dbm is used in conjunction with a 4-way power divider .a directional coupler sends part of the if signal into a total power detector to monitor the if power after the receiver .the reading can be used to adjust the variable gain amplifier ( vga ) in the receiver if to maintain the power level from the receiver .in addition to monitoring the receiver gain variations , the total power reading can also be used for sky dip or hot / cold load experiments to determine the sky and receiver noise temperature .the input of the total power readout electronics is switched between the signal and ground for alternating data .common mode noise such as ground noise section is either terminated with a 50 ohm resistor or connected to a stable if signal . ]are reduced after subtracting the ground reference from the signal .an 18 ghz lowpass filter determines the overall system bandwidth and also filters out the 21 ghz lo leakage from the mixer .a 13-db negative slope equalizer compensates for the gain slope arising from the following components .there are two versions of the section - one with a 2-way switch , a delay trimmer , an if amplifier , and a 4-way power divider ; the other with an if amplifier , a power divider , and a delay compensation cable . during polarization observations ,the 2-way switches can be utilized to obtain 4 cross correlations .the delay trimmer has a range of + /-90 pico - seconds for fine delay tuning .the coarse delay adjustment is done by installing delay compensation cables of designated lengths .more details about delay trimming can be found in lin et al .( 2009 ) .the section consists of a 4-way power divider with built - in amplifiers in front .their dimensions are custom designed in order to feed the wideband if signals into an array of correlator modules in a compact way as shown in fig .the two stages of the mmic if amplifiers with a cascaded gain of 14 db are placed in front of the power divider to compensate for the loss in the power divider .a millimeter - wave dc - blocking capacitor required for bias is placed in front of the mmic and rejects signals below 0.5 ghz .the power divider utilizes a cu - clad substrate with an embedded resistive layer which allows thin - film resistors to be etched into the circuit as part of the fabrication process .however , the same resistive layer causes significant loss and gain slope along a long transmission line . for cavity resonance suppression ,metallic irises were extended down from the cover to confine the circuits into small channels .a picture of the custom power divider is shown in fig .the component layout of the amplification stage and layouts of the two types of power dividers are also shown in fig . 
7 and 8 , respectively , for illustration .details on the circuit design and test results can be found in li et al .in a lag correlator system as for amiba , a number of multipliers are used to measure the correlation as a function of the time offset or `` lag '' between two signals , namely where is the lag between the two signals .the cross correlation is represented by the pentagram symbol ( ) , and denotes the complex conjugate of .the cross power spectrum of and can be derived via fourier transforming the cross correlation in a lag correlator of the type described here , the correlation function is measured at discrete values of lags .the bandwidth of the correlator , , is determined by the delay increment between measurements according to the nyquist sampling theorem ( harris et al .the frequency resolution , , is determined by the number of lag measurements , , such that . to limit the gain loss from bandwidth smearing to an acceptable level , for an efficiency of at the primary beam half power points, the fractional bandwidth must be at most where is the baseline length and is the dish diameter . given the final configuration of the array ( 1.2 m dishes with longest baselines of about 6 m ) , a frequency resolution of 8 ghz is chosen . with a bandwidth of 16 ghz , the number of lags in the lag correlator is then set to 4 to provide 2 frequency channels .correlator modules with more ( e.g. 8) lags would not furnish much more information but would minimize the bandwidth smearing for the case of smaller diameter dishes . to examine the bandwidth smearing from another persepctive ,9 shows the calculated bandwidth patterns of the 4 lag outputs with an effective bandwidth of 16 ghz .the lag span is between + /-60 ps , enough to cover the delay for 1.2 m dishes with 6 m baselines . for the amiba correlator, it is important to have a very wide bandwidth for cmb observations . to obtain the wide bandwidth , analog multipliers in the form of balanced mixerswere employed .the use of passive multipliers circumvents the problem of 1/f noise and other noise from the bias circuitry usually associated with active multipliers .a flat amplitude response and a linear phase response ( non - dispersive ) are essential to achieve a large effective bandwidth . from tests with a translating noise source, the entire signal path with the 4-lag correlator module has an effective bandwidth of around 10 ghz ( lin et al . 2009 ) .a brief list of the amiba correlator specifications is given in table 1 .within each correlator module , two stages of 2-way power dividers are cascaded to split the if signals to feed four multipliers .the lags for each multiplier are specified to be 37.5 , 12.5 , -12.5 , and -37.5 pico - seconds , respectively .the 25 pico - second lag spacing is designed to provide nyquist sampling of the if up to 20 ghz .the output voltages from the 4 lags are transformed into 2 complex data points in the cross power spectrum .the double - balanced mixers comprise of low - barrier silicon schottky diode ring quads mounted between two wideband baluns . to avoid reflections due to discontinuities such as wire bonding between circuits , the entire circuit , including power dividers and baluns ,is manufactured on a single microwave substrate . a picture of the 4-lag correlator module manufactured by marki microwave inc .is shown in fig . 10 . 
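a minimal numpy sketch of the lag geometry just described : four lags at +/- 12.5 and +/- 37.5 ps ( 25 ps spacing , nyquist sampling up to 20 ghz ) yielding two complex channels . the plain exponential sum below is only the idealized lag - to - frequency transform ; the real pipeline uses the calibrated transform matrix discussed later .

    import numpy as np

    lags = np.array([-37.5e-12, -12.5e-12, 12.5e-12, 37.5e-12])   # lag positions [s]
    spacing = 25e-12
    nyquist_band = 1.0 / (2.0 * spacing)       # 20 ghz design bandwidth
    n_channels = len(lags) // 2                # 4 lags -> 2 complex frequency channels
    print(nyquist_band / 1e9, "ghz ;", n_channels, "channels")

    def ideal_spectrum(r, freqs):
        # idealized transform: s(nu) = sum_k r_k * exp(2j * pi * nu * tau_k)
        return np.array([np.sum(r * np.exp(2j * np.pi * nu * lags)) for nu in freqs])

    # a cw tone at 6 ghz produces lag-domain fringes r_k = cos(2 pi nu0 tau_k);
    # evaluating the transform at 6 and 14 ghz illustrates the channel leakage
    nu0 = 6e9
    r = np.cos(2.0 * np.pi * nu0 * lags)
    print(np.abs(ideal_spectrum(r, [6e9, 14e9])))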
regarding the channel isolation , for an ideal 4-lag correlator, the recovered spectrum is where is the correlation measurement at each lag , and is the nominal lag . if the input signals are continuous - wave ( cw ) signals at the center of each channel , namely 6 ghz and 14 ghz in our case , the recovered spectrum is shown in fig . 9 with a fwhm of about 12 ghz for each cw signal .therefore , the leakage between channels is quite severe for the 4-lag correlators . for comparison ,the recovered spectrum of an ideal 8-lag correlator with input cw signals at the center of its 4 channels is also shown .although the leakage between adjacent channels is still severe for an 8-lag module , the isolation between non - adjacent channels is acceptable . to improve on the channel isolation ,one possibility is to apply a window function during the lag - to - visibility transformation .however , constrained by the small number of lags that we have , another possibility might be to apply bandpass filtering or a filter bank before correlation .the filter bank might also reduce the bandpass variation within each channel and maximize the effective bandwidth . during observations ,the correlator input signals consist of large un - correlated noise signals from the receivers and tiny correlated signals from the sky .these large un - correlated noise signals tend to pump the correlator diodes in much the same way as the local oscillator signal in a mixer application .this can lead to non - linearity of the correlator multipliers and an excess of noise .we adjust the correlator input power level to reduce this degradation . in a simulation of a double - balanced mixer with four tones ( two with large power and two with 28 db smaller power ) as shown in fig .11 , when the large signals have a power below a certain level , the product of the two small signals drops dramatically .as the input power of the large signals increases above the threshold , the products of both small signals and large signals increase linearly , i.e. proportional to .as the input power keeps increasing , eventually both small - signal and large - signal products become compressed .we can refer the large signals in the simulation to the un - correlated noise from the receivers , and their product as the output fluctuations of the correlator , as the output fluctuation of the correlator is indeed due to the beating or mixing of un - correlated signals ( kraus , j. d. 1986 ) .the small signals in the simulation are regarded as the correlated signals in the system .the simulation defines three regimes of operation , namely , under - pumped , linear and compressed . from experiments ,our correlator modules have a linear range for input power from -20 dbm to -12 dbm , corresponding to output fluctuations with a root - mean - square ( rms ) from around 100 to 700 counts , including the backend noise , as shown in fig .as the correlators are operated in the linear regime , it is seen that the correlator output fluctuations are proportional to , where are the input power of each baseline ( wrobel et al .since there is no total power detector at the correlator module inputs , the input power can not be measured directly , but can be inferred from the measured rms of each baseline . 
in the compressed regime ,the output rms grows monotonically , although not linearly , as the input power increases .the rms serves as an indicator of which regime we are operating in .the gain can then be adjusted to optimize the s / n .an ideal double - balanced mixer used as a multiplier can suppress even harmonics of both input signals and their products , i.e. , , and , where either n or m is even and is the input voltage ( maas 1993 ) . from simulations , it has been found that with a double - balanced mixer , the spurious terms such as , , are suppressed by over 40 db below the desired product .experiments show that residuals of these spurious terms can still be seen .they can be further suppressed by the use of a phase switching / demodulation process which allows the introduction of a dc blocking capacitor at the input of the low frequency amplifier following the correlator module .phase switching is achieved by switching a suitable delay into the lo path . however , as the lo signal is switched through different paths , a small amplitude modulation of the if power level is introduced .this modulation survives the demodulation process and appears as an offset in the term .to minimize offset , variable attenuators are used to adjust the 42 ghz lo power to the shm to minimize the if power modulation .s / n is optimized by operating the correlator in a slightly compressed regime , thereby ensuring that front - end noise dominates backend noise from the dc amplifier and readout electronics .12 shows the output s / n estimated from the jupiter fringes as the input power were varied .most of lag outputs shows an increase in s /n as the input power ( infered from the output rms ) increases , even when some of them were driven into the compressed regime .a correlation interferometer where a multiplier is employed , the minimum detectable temperature or sensitivity is where is the system noise temperature , is the bandwidth , and is the integration time , assuming 100 efficiency .( kraus , j. d. 1986 ) . from the correlatoroutput fluctuations , we can estimate the noise contributions from each part of the system , namely the front - end ( , , and terms , where and are integers , and represent the large un - correlated noises . ) and backend ( fig .11 ) , assuming all noises are un - correlated , i.e. their variances are additive .typical correlator output fluctuations measured under different conditions are listed in table 2 to illustrate how the output noise might increase due to the spurious terms . as a result ,noise contributions from the backend , as well as the spurious , terms can be estimated .currently these spurious terms and the backend noise reduce the s / n by 20 . an analysis of the system efficiency based on the observations with the 7-element array can be found in lin et al .( 2009 ) . for the low - frequency low - noise amplifier following the lag correlator module ,because of the high output impedance of the mixers , low noise current amplifiers were chosen to minimize the backend noise .the bandwidth of the amplifier is limited in the feedback loop in order to reduce the output noise . due to the phase switching , the correlated signals are square waves , at the phase switching frequency . a dc blocking capacitor is used in front of the amplifier to remove any dc term from the mixer output . as a result , the `` dc '' amplifier has a 3 db passband from 0.1 to 9 khz .the schematic of the dc amplifier is shown in fig . 
13 .the correlator readout circuit uses a vfc plus a 24-bit counter as the adc .the vfc generates pulse sequences at a frequency linearly proportional to the analog signal from the dc amplifier output .the up / down counter accumulates the pulses , acting as a long - term integrator as well as a phase - switching demodulator .the up / down function is controlled by the demodulation signal .in amiba correlator there are four layers of processes used to remove systematics .the correlation process suppresses the uncorrelated receiver noise and gain fluctuations .the phase switching and demodulation process reduces the offsets or gain drifts that do not have the characteristics of the demodulation signals .one additional phase switching layer is implemented by inverting the sign of the demodulation signals for every other integration .every two data points are then subtracted with respect to each other offline in the ccc . from analysis ,the correlator output spectra show a white noise signature between and 1 hz , with an increase in power at lower frequency due to slow gain drifting ( nishioka et al .for the remaining offsets or false signals due to ground pick up , a two - patch scheme ( padin et al .2002 ) is adopted .observations are taken with , for our case , 3-minute tracking on the main field , and another 3-minute tracking on a trailing or leading field at the same declination , separated by few minutes in right ascension .the main and the trailing / leading fields then share the same azimuth - elevation track with identical ground contamination of the data . at the end of an integration interval , the contents of the counters are dumped into shift registers and serially scanned out by the data acquisition electronics . for the 13-element array , instead of the custom readout icsused previously in the 7-element array ( li et al .2004 ) , discrete vfc components are used . the digital section ( counters / shift registers ) is implemented in a field programmable gate array ( fpga ) .the new vfc is synchronous , and has better linearity than the previous readout ics .a schematic of the readout electronics with the timing diagram of the control signals is shown in fig .the data acquisition circuit is used to store data from the readout circuits before they are transferred to the ccc via a direct memory access ( dma ) process .control signals , such as readout control signals and phase switching / demodulation signals are also generated here .ram blocks configured as bit rams within the fpgas are allocated to store the correlation and total power data .there are two steps involved in transferring the data from the readout circuits to the ccc , scan and dma .each step is triggered by events originating from the event generator ( eg ) in the ccc .the dma process is ideal for transferring large volumes of data but is more efficient if the data are stored in consecutive addresses . for the 7-element array ,walsh functions of 64 intervals per cycle are used as the phase switching signals .currently there are 5 cycles in each integration of 0.226 second .this corresponds to a fastest switching frequency of around 700 hz . 
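before moving on to the 13-element configuration , a purely illustrative python sketch of how walsh - function phase switching and up / down demodulation work together ; in the real system this happens in the lo hardware and the counter logic , not in software .

    import numpy as np
    from scipy.linalg import hadamard

    n_intervals = 64                       # walsh intervals per cycle (7-element array)
    walsh = hadamard(n_intervals)          # rows are +/-1 walsh / hadamard codes

    # give two antennas distinct codes; the demodulation code of their baseline
    # is the product of the two antenna codes
    code_a, code_b = walsh[3], walsh[5]
    demod = code_a * code_b

    rng = np.random.default_rng(0)
    signal = 0.1                           # correlated amplitude (arbitrary units)
    offset = 5.0                           # constant instrumental offset
    samples = signal * demod + offset + 0.05 * rng.standard_normal(n_intervals)

    # up / down accumulation: multiply by the demodulation code and average;
    # the offset cancels because every non-trivial walsh code sums to zero
    print(np.mean(samples * demod))        # recovers ~0.1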
for the 13-element array ,walsh functions of 128 intervals per cycle are used .since the corner frequency of the 1/f noise from the passive diode multiplier we use is low , phase switching at rather low frequencies is feasible .the operation of the amiba correlator is controlled by the ccc - an industrial grade pc running linux .the ccc is equipped with 3 special cards - event generator ( eg ) , australia telescope distributed clock ( atdc ) , and pci correlator data bus interface ( pciif ) from atnf . the function of the eg is to generate events with precise timing .the pciif acts as an interface between the data acquisition circuit and the ccc to receive the dma data .the interface appears as a 256k byte block of memory on the pci bus while the pciif also assigns memory addresses and several control signals for the dma process .interrupts from the eg are relayed to the pciif for dma timing .the atdc provides precise timing for all the correlator operations and also generates an 8 mhz reference clock for digital processing in the data acquisition and readout circuits .the atdc can phase - lock to both a 5(or 10 ) mhz sine wave and a 1 pulse - per - second ( pps ) signal from a gps receiver and is scheduled to synchronize the system clock of the ccc periodically .an alternative is to synchronize the system clock of the ccc to the gps receiver via the network time protocol ( ntp ) . a block diagram of correlator control components and signals is shown in fig .to test the response of each baseline , a w - band noise source is set up to translate between the 2 receiver inputs .the data of lag sequence can be fourier transformed to obtain the bandpass response .the bandpass shows a two - hump gain response and some scatter from a linear phase response . from the bandpass response, we can estimate the effective bandwidth as where is the complex bandpass response .the baselines of the 7-element array were estimated to have effective bandwidths ranging from 9 to 13 ghz ( lin et al .2009 ) . with the derived bandpass responses ,a transform matrix can be created so that [ s ] = [ r],\ ] ] where is the complex column vector representing the input spectrum , and is the real column vector for the correlator output . and include both positive frequency terms and their complex conjugate at negative frequencies to yield real correlation products ( li et al .2004 ) . with the correlation outputs from each baseline, we can invert and then derive the spectra of the input signals or visibilities via = [ k]^{-1 } [ r].\ ] ] however , since the 4 lag outputs only give us 2 independent complex data points in the frequency spectrum , is rather singular due to degeneracy .the singular value decomposition ( svd ) method can be used to invert .once is obtained , the spectra of channels can be obtained .eventually we have to consolidate those channels into 2 bands .another approach would be to integrate the complex response matrix in frequency into two bands first .the integrated matrix can be easily inverted , and the 2-band visibility output can be written as =[\bar{k}]^{-1}[r]$ ] . from simulations ,lin et al . ( 2009 ) showed that given an accurate and high spectral resolution measurement of , the visibility can be recovered regardless of the source offset from the phase center . 
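the two numerical steps just outlined can be sketched compactly with numpy : an effective - bandwidth estimate from a measured complex bandpass , and an svd pseudo - inverse of the nearly singular lag - to - visibility response matrix . the particular estimator and the matrix shapes below are illustrative assumptions , not the exact expressions of the amiba pipeline , and the sensitivity of this inversion to errors in the response is exactly the calibration issue discussed next .

    import numpy as np

    def effective_bandwidth(bandpass, dnu):
        # common radiometric estimator: ( integral |b| dnu )^2 / integral |b|^2 dnu
        b = np.abs(bandpass)
        return (np.sum(b) * dnu) ** 2 / (np.sum(b ** 2) * dnu)

    def lags_to_spectrum(k_matrix, r, rcond=1e-3):
        # recover the input spectrum s from the lag outputs r, assuming [k][s] = [r].
        # k_matrix (n_lags x n_freq) is built from the measured bandpass response;
        # it is nearly singular, so small singular values are truncated before
        # inverting (svd pseudo-inverse).
        return np.linalg.pinv(k_matrix, rcond=rcond) @ r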
on the other hand , inaccurate estimate of results in errors in the recovered visibility .the error varies with the source offset and can be calibrated at any given offset by observing a point source at the same offset from the phase center .the calibration is strictly valid only in a small region ( a small fraction of the synthesized beam ) around the location at which the calibration source is observed . at other locations within the field of view , there are gain and phase errors set by uncertainties in the lag - to - visibility transformation that affect the quality of the image reconstruction , although further calibration should reduce these errors .however , further simulations show that applying the calibration to the entire field of view only contributes about in rms to the recovered point source flux after combining data from all baselines ( lin et al .2008 ) .although using a translating noise source at the receiver inputs to create artificial fringes is a useful method to measure the transfer function of the correlator , it is hard to run the test inline with observations .the spectral resolution obtained with our current setup was not sufficient .this approach becomes more tedious as the number of baselines increases and we would still need to use the planet data to calibrate the gain and phase of each baseline .an alternative would be to derive the response matrix in two bands , assuming nominal responses of an ideal lag correlator .data from tracking of strong point sources such as jupiter , saturn , or mars , performed regularly at an interval of approximately 3 hours during the observations , can be used for calibration . all data analysis andcalibrations were restricted to observations made at the phase center at the moment .16 shows the images of jupiter from visibilities with and without calibration .the calibrator was another set of data of jupiter taken several minutes apart .the uncalibrated visibilities suffer from errors in the transformation as mentioned above .forming an image directly results in very strong cancellation .after calibration , the visibilities add up coherently and form a strong point source at the phase center .the images have been deconvolved using the clean algorithm ( hogbom 1974 ) and plotted with the same dynamical range .more details about data processing can be found in wu et al .( 2009 ) and lin et al .from observations , the amiba correlator has proven to have the sensitivity required for cmb detection .the analog multipliers used in the lag correlators provide the wide bandwidth required for high sensitivity .the inherent noise rejection of the interferometer is also very beneficial .compared with the filterbank scheme used by cbi ( padin et al .2002 ) , the lag correlator design is simpler and more compact .large bandwidth with a small number of lags does not present a significant challenge for the lag - to - visibility transformation after proper calibrations . at the moment , the effective bandwidth we can achieveis limited by bandpass variations due to wideband impedance matching .however , a significant portion ( ) of the nominal bandwidth has been achieved . to improve on the bandpass variations ,it is possible to operate the correlator at a higher if , since the response of the analog multipliers is not limited to frequencies of few ghz .thus , with a similar fractional bandwidth , a larger bandwidth could be achieved .wideband complex correlators in conjunction with bandpass filters can also be considered . 
for amiba , complex correlators with a bandwidth of 8 ghz are suitable . by interleaving a number of them in frequency , a large bandwidth is feasible .the filters would also improve the isolation between channels .similar analog interferometric systems can be constructed for high sensitivity , high angular resolution , and moderate frequency resolution observations .
[ figure caption ( four - tone mixer simulation ) : _ top _ : with and as the large un - correlated noises , the output from a multiplier can be represented as the summation of various possible terms , , and , where and are integers .each output term can contribute to the correlator output fluctuations . on the other hand , and represent the small correlated signals , and is the expected product .a current noise source is used to represent the backend noise from the dc amplifier and the vfc adc at the input of the dc amplifier . _ bottom _ : in the simulation of a double - balanced mixer with 4 tones , when the large signals have power below a certain level ( -2 dbm ) , the product of the 2 small signals ( red curve ) drops dramatically . as the input power of the large signals increases above the threshold , the products of both the small signals and the large signals ( black curve ) increase linearly . as the input power keeps increasing ( above 8 dbm ) , eventually both small - signal and large - signal products become compressed . since a standard diode model is used in the simulations , the input power levels of the large signals at which the diodes are sufficiently pumped or compressed differ from our measurements . ]
* acknowledgments . * we thank the administrative staff for their support over the years .we thank a. harris for useful discussions and notes .we thank the ministry of education , the national science council , and the academia sinica for their support of this project .we thank the noaa for accommodating the amiba project on their site on mauna loa .we thank the hawaiian people for allowing astronomers to work on their mountains in order to study the universe .
birkinshaw , m. 1999 , phys ., 310 , 97 carlstrom , j. e. et al .2002 , ara&a , 40 , 643 chen , m .-2009 , , 694 , 1664 harris , a. i. & zmuidzinas , j. 2001 , rev .instruments , 77 , 1531 ho , p. t. p.2009 , , 694 , 1610 hogbom , j. a. 1974 , a&as , 15 , 147 holler , c. m. et al .2007 , a&a , 464 , 795 jiang , h. et al .2009 , i2mtc 09 .ieee koch , p.
et al .2006 , proc .eucap , ed .h. lacoste l. ouwehand ( esa sp-626 ) , 668.1 koch , p. et al .2009 , , 694 , 1670 kraus , j. d. 1986 , radio astronomy ( 2nd ed .; cygnus - quasar ) leitch , e. m. et al .2002 , , 568 , l28 li , c. -t .2004 , spie , 5498 , 455 lin , k .- y .2008 , proc .spie , 7012 , 701207 lin , k. -y .2009 , , 694 , 1629 lo , k. y. et al .2001 , aip conference proceedings , 586 , 172 maas , s. a. 1993 , microwave mixers ( 2nd ed . ; artech house ) nishioka , h. et al .2009 , , 694 , 1637 padin , s. et al .2002 , asp , 114 , 83 runyun , m. c. et al .2003 , , 149 , 265 scaife , a. m. m. et al .2008 , , 385 , 809 sunyaev , r. a. & zeldovich , y. b. 1970 , comments astrophys .space phys . , 2 , 66 sunyaev , r. a. & zeldovich , y. b. 1972 , comments astrophys .space phys ., 4 , 173 wrobel , j. m. and walker , r. c. , 1999 , in synthesis imaging in radio astronomy ii , asp conference sereis vol .g. b. taylor , c. l. carilli , and r. a. perley , 171 umetsu , k. et al .2009 , , 694 , 1643 white , m. et al .1999 , , 514 , 12 wu , j. -h .2009 , , 694 , 1619
|
a wideband analog correlator has been constructed for the yuan - tseh lee array for microwave background anisotropy . lag correlators using analog multipliers provide large bandwidth and moderate frequency resolution . broadband if distribution , backend signal processing and control are described . operating conditions for optimum sensitivity and linearity are discussed . from observations , a large effective bandwidth of around 10 ghz has been shown to provide sufficient sensitivity for detecting cosmic microwave background variations .
|
the earliest attempts at simulating relativistic flows in the presence of strong gravitational fields are attributed to may and white ( 1966 , 1967 ) who investigated gravitational collapse in a one dimensional lagrangian code using artificial viscosity ( av ) methods to capture shock waves .wilson ( 1972 , 1979 ) subsequently introduced an alternative eulerian coordinate approach in multi - dimensional calculations , using traditional finite difference upwind methods and artificial viscosity for shock capturing . since these earliest studies ,av methods have continued to be developed in their popularity and applied to a variety of problems due largely to their general robustness .these methods are also computationally cheap , easy to implement , and easily adaptable to multi - physics applications .however , it has been demonstrated that problems involving high lorentz factors ( greater than a few ) are particularly sensitive to different implementations of the viscosity terms , and can result in large numerical errors if solved using time explicit methods .significant progress has been made in recent years to take advantage of the conservational form of the hydrodynamics system of equations to apply godunov - type methods and approximate riemann solvers to simulate ultra - relativistic flows .although godunov - based schemes are accepted as more accurate alternatives to av methods , especially in the limit of high lorentz factors , they are not infallible and should generally be used with caution .they may produce unexpected results in certain cases that can be overcome only with specialized fixes or by adding additional dissipation .a few known examples include the admittance of expansion shocks , negative internal energies in kinematically dominated flows , ` carbuncle ' effect in high mach number bow shocks , kinked mach stems , and odd / even decoupling in mesh - aligned shocks .godunov methods , whether they solve the riemann problem exactly or approximately , are also computationally much more expensive than their simpler av counterparts , and more difficult to extend the system of equations to include additional physics .hence we have undertaken this current study to explore an alternative approach of using high resolution , non - oscillatory , central difference ( nocd ) methods to solve the relativistic hydrodynamics equations .these new schemes combine the speed , efficiency , and flexibility of av methods with the advantages of the potentially more accurate conservative formulation approach of godunov methods , but without the cost and complication of riemann solvers and flux splitting .the nocd methods are implemented as part of a new code we developed called cosmos , and designed for fully general relativistic problems .cosmos is a collection of massively parallel , multi - dimensional , multi - physics solvers applicable to both newtonian and general relativistic systems , and currently includes five different computational fluid dynamics ( cfd ) methods , equilibrium and non - equilibrium primordial chemistry , photoionization , radiative cooling , radiation flux - limited diffusion , radiation pressure , scalar fields , newtonian external and self gravity , arbitrary spacetime geometries , and viscous stress .the five hydrodynamics methods include a godunov ( tvd ) solver for newtonian flows , two artificial viscosity codes for general relativistic systems ( differentiated by mesh or variable centering type : staggered versus zone - centered ) , and two relativistic 
methods based on non - oscillatory central difference schemes ( differentiated also by the mesh type : staggered versus centered in time and space ) .the emphasis in the following sections is to review our particular implementations of the av and nocd methods and compare results of various shock wave and accretion test calculations with other published results .we also explore the accuracy of both av and nocd methods in simulating ultra - relativistic shocks over a wide range of lorentz factors .both of the artificial viscosity methods in cosmos are based on an internal energy formulation of the perfect fluid conservation equations .the equations are derived from the 4-velocity normalization , the conservation of baryon number for the fluid rest mass density , the parallel component of the stress energy conservation equation for internal energy , the transverse component of the stress energy conservation equation for momentum , and an adiabatic equation of state ( eos ) for the fluid pressure , where is the adiabatic index and is the fluid internal energy density . for a perfect fluid , the stress - energy tensor is where is the relativistic enthalpy , is the specific internal energy , is the contravariant 4-velocity , and is the 4-metric .the resulting equations can be written in flux conservative form as where is the determinant of the 4-metric , is the relativistic boost factor , is the generalized fluid density , is the transport velocity , is the covariant momentum density , and is the generalized internal energy density .we use the standard convention in which greek ( latin ) indices refer to 4(3)-space components .the system of equations ( [ eqn : av_de ] ) - ( [ eqn : av_mom ] ) is complemented by two additional expressions for and that are convenient for numerical computation . introducing a general tensor form for artificial viscosity ( see section [ sec : artificial ] ) , and defining ( [ eqn : m2 ] ) , the momentum can be expressed as , and is computed from the normalization of the four velocity .the coordinate velocity then becomes with .also , the time component of the four velocity can be calculated from the normalization , and used to derive the following expressions for .the former expression ( ) is used in the staggered mesh av schemes as it results in more accurate density and velocity jump conditions across shock fronts .the latter is more convenient for the zone centered nocd methods .the second class of numerical methods presented in this paper ( the nocd schemes ) are based on a simpler conservative hyperbolic formulation of the hydrodynamics equations . in this case , the equations are derived directly from the conservation of stress - energy . expanding ( [ eqn : tmnu2 ] ) into time and space explicit parts yields the flux conservative equations for general stress - energy tensors with curvature source terms . substituting the perfect fluid stress tensor ( [ eqn : tmn ] ) into ( [ eqn : tmnu3 ] ) , and including baryon conservation , results in the following set of equations \ [ \begin{aligned } \frac{\partial { \cal d}}{\partial t } + \frac{\partial ( { \cal d } v^i)}{\partial x^i } & = & 0 , \label{eqn : hr_de } \\ \frac{\partial { \cal e}}{\partial t } + \frac{\partial ( { \cal e } v^i)}{\partial x^i } + \frac{\partial [ \sqrt{-g}~(g^{0i } - g^{00 } v^i)~p]}{\partial x^i } & = & \sigma^0 , \label{eqn : hr_en } \\ \frac{\partial { \cal s}^j}{\partial t } + \frac{\partial ( { \cal s}^j v^i)}{\partial x^i } + \frac{\partial [ \sqrt{-g}~(g^{ij } - g^{0j } v^i)~p]}{\partial x^i } & = & \sigma^j , \label{eqn : hr_mom}\end{aligned}\ ] ] where the variables , , and are the same as those defined in the internal energy formulation .
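for concreteness , if one adopts the usual wilson - type identifications for the shared variables ( an assumption here , since the explicit definitions are not reproduced above ) , namely a generalized density $ w\rho $ and a generalized internal energy $ w e $ , then the algebra of the perfect fluid stress tensor gives the new conserved energy and momentum densities as
\[
\sqrt{-g}~t^{00 } = \left ( w\rho + \gamma~w e \right ) u^0 + \sqrt{-g}~g^{00}~p , \qquad
\sqrt{-g}~t^{0j } = \left ( w\rho + \gamma~w e \right ) u^j + \sqrt{-g}~g^{0j}~p ,
\]
with the boost factor $ w = \sqrt{-g}~u^0 $ and $ p = ( \gamma - 1 ) e $ , which is how the conservative variables can be initialized from the internal energy formulation .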
however , now are the new expressions for energy and momenta .it is convenient to express and in terms of the internal energy formulation variables , especially for initializing data and reconstructing the equation of state where we have explicitly assumed an adiabatic gamma - law fluid .cosmos is a multi - dimensional ( 1 , 2 or 3d ) code that uses regularly spaced cartesian meshes for spatial finite differencing or finite volume discretization methods .evolved variables are defined at the zone centers in the nocd , tvd , and non - staggered av methods . in the staggered mesh av method ,variables are centered either at zone faces ( the velocity and momentum vectors ) or zone centers ( all other scalar and tensor variables ) .periodic , reflection , constant in time , user - specified , and flat ( vanishing first derivative ) boundary conditions are supported for all variables in the evolutions .the hydrodynamic equations in both of the formalisms presented in [ sec : equations ] are solved with time - explicit , operator split methods with second order spatial finite differencing .single - step time integration and dimensional splitting is used for both av methods .the nocd schemes use a second order predictor - corrector time integration with dimensional splitting , and the tvd approach utilizes a third order runge - kutta time integration with finite volume representations for source updates . since the main emphasis here is on relativistic hydrodynamics , the following discussion is limited to presenting details relevant for the av and nocd schemes : the tvd method is currently only newtonian .the order and frequency in which various source terms and state variables are updated in the av methods can affect the numerical accuracy , especially in high boost flows .the following order composing a complete single cycle or time - step solution has been determined to produce a reasonable compromise between cost and accuracy : * compute time step from ( [ eqn : timestep ] ) * store current value of boost factor * curvature * * compute pressure and sound speed from the ideal fluid equation of state : + , * * evaluate scalar or tensor artificial viscosity * * normalize velocity and update boost factor : + , using )^2 ] , and * compression * * compute pressure * * normalize velocity , update * * compute pressure again * * update energy , accounting for compressional heating : + * pressure gradient * * compute pressure * * update momentum , accounting for pressure gradients : + * transport * * compute pressure * * normalize velocity , update * * update transport terms in all variables : + , , and * boost factor * * compute pressure and sound speed * * normalize velocity , final update of * * update energy , accounting for the variation of in time : + \dot w ] * * update energy and momentum , accounting for curvature : + and , using second order finite differencing for metric derivatives * flux operator * * compute pressure from eos * * normalize velocity and update boost factor : + , using ^ 2 ] * * if the mesh is nonstaggered in time : + perform interpolations to recenter variables on the original nonstaggered mesh + * if the mesh is staggered : * * repeat curvature and flux steps to evolve solution from to * * shift array indices to realign final coordinates at with + initial coordinates at by * update spacetime metric components and if time dependent two essential assumptions built into this method are that the cell - averaged solutions can be reconstructed as muscl - type piece - wise linear 
interpolants , and that the flux integrals are defined and evaluated naturally on staggered meshes . since we adopt directional splitting for multi - dimensional problems , the basic discretization scheme used to solve equations ( [ eqn : hr_de ] ) - ( [ eqn : hr_mom ] ) can be derived from a simple one - dimensional , first order model equation of the form where represents any of the density , energy or momentum variables , and is the associated flux . a formal solution to ( [ eqn : omega ] )can be written over a single time cycle on a staggered mesh as .\label{eqn : formal}\ ] ] introducing the notation , the average of the piece - wise linearly reconstructed solutions at the staggered positions in ( [ eqn : formal ] ) is given by where refer to the piecewise linearly interpolated solutions from the upwind and downwind cell centers considering that the time averaged integrals in ( [ eqn : formal ] ) can be approximated using midpoint values immediately suggests a two step predictor - corrector procedure to solve ( [ eqn : omega ] ) : the state variables are predicted at by then corrected on the staggered mesh by , \label{eqn : corr}\ ] ] where we have also substituted ( [ eqn : stagger ] ) for in ( [ eqn : formal ] ) .equations ( [ eqn : pred ] ) and ( [ eqn : corr ] ) represent the complete single cycle solution averaged on a staggered mesh .the mesh indices can be brought back into alignment by setting , performing two time cycle updates ( computing then ) to time , and re - center the solution on the original zone positions by shifting the array indices as . as an alternative to mesh staggering, the solution after applying the corrector step can be reconstructed directly back to the nonstaggered cell centers by a second order piece - wise linear extrapolation to yield for the final single time - step solution - \frac{1}{8}\left ( \omega_{j+1/2}^{n+1 ' } - \omega_{j-1/2}^{n+1'}\right ) .\label{eqn : nonstag}\end{aligned}\ ] ] we have found no substantial differences between the staggered and nonstaggered approaches in all the test calculations we have performed . hence all subsequent results presented in this paper from this class of algorithms are derived with the nonstaggered mesh method using ( [ eqn : pred ] ) and ( [ eqn : nonstag ] ) .one final important element of this method is that all gradients ( of either the state variables or fluxes ) must be processed and limited for monotonicity in order to guarantee non - oscillatory behavior .this is accomplished with either the minmod limiter or the van leer limiter \left(\omega_{j+1 } - \omega_j\right ) , \ ] ] which satisfy the tvd constraints with appropriate courant restrictions , although we note that steeper limiters can yield undesirable results especially in under - resolved high boost shock tube calculations .we begin testing the staggered av and nonstaggered nocd methods with one of the standard problems in fluid dynamics , the shock tube .this test is characterized initially by two different fluid states separated by a membrane . 
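before turning to the shock tube results , the predictor - corrector cycle described above can be made concrete with a schematic , flat - spacetime python sketch for the scalar model equation , using a minmod - limited piecewise linear reconstruction ; it illustrates only the update pattern , not the full relativistic solver .

    import numpy as np

    def minmod(a, b):
        # minmod slope limiter, applied elementwise
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def nocd_step(omega, flux, dx, dt):
        # one staggered nocd (nessyahu-tadmor type) cycle for d(omega)/dt + d f(omega)/dx = 0.
        # the returned array lives on the half-shifted (staggered) mesh; periodic
        # boundaries are assumed purely for brevity.
        f = flux(omega)

        # limited (undivided) slopes of the state and of the flux
        d_omega = minmod(omega - np.roll(omega, 1), np.roll(omega, -1) - omega)
        d_f = minmod(f - np.roll(f, 1), np.roll(f, -1) - f)

        # predictor: advance cell centers by half a time step
        f_half = flux(omega - 0.5 * (dt / dx) * d_f)

        # average of the piecewise-linear reconstruction on the staggered mesh
        omega_stag = 0.5 * (omega + np.roll(omega, -1)) \
                     + 0.125 * (d_omega - np.roll(d_omega, -1))

        # corrector: conservative update with midpoint fluxes
        return omega_stag - (dt / dx) * (np.roll(f_half, -1) - f_half)

    # example: linear advection of a square pulse on a periodic unit grid
    nx = 400
    dx = 1.0 / nx
    x = (np.arange(nx) + 0.5) * dx
    omega = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.1)
    dt = 0.4 * dx
    for _ in range(2 * nx):
        omega = nocd_step(omega, lambda w: w, dx, dt)   # f(omega) = omega, unit speed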
at the membraneis removed and the fluid evolves in such a way that five distinct regions appear in the flow : an undisturbed region at each end , separated by a rarefaction wave , a contact discontinuity , and a shock wave .this problem only checks the hydrodynamical elements of the code , as it assumes a flat background metric .however , it provides a good test of the shock - capturing properties of the different methods since it has an exact solution against which the numerical results can be compared .two cases of the shock tube problem are considered first : moderate boost ( ) and high boost ( ) shock waves . in the moderate boost case ,the initial state is specified by , , and to the left of the membrane and , , and to the right . in the high boost case , , , , and , , . in both cases ,the fluid is assumed to be an ideal gas with , and the integration domain extends over a unit grid from to , with the membrane located at .the av shock tube results presented here were run using the scalar artificial viscosity with a quadratic viscosity coefficient , linear viscosity coefficient , and courant factor ( 0.3 for the highest boost cases ) .for the nocd method we use and the minmod limiter which gives smoother and more robust results than the steeper limiters in simulations of under - resolved highly relativistic shocks .we have carried out these tests in one , two and three dimensions , lining up the interface membrane along the main diagonals in multi - dimensional runs .we found it necessary , in order to maintain stability in the highest boost cases , to impose constraints on the pressure and energy density at each cycle to ensure they maintained positive values .although this was nt necessary at velocities smaller than about 0.95 , we nevertheless enforced these conditions in all the calculations .figures [ fig : fig1 ] & [ fig : fig2 ] show spatial profiles of the moderate boost results at time on a grid of 400 uniformly spaced zones using the av and nocd methods respectively .figures [ fig : fig3 ] & [ fig : fig4 ] show the corresponding solutions of both av and nocd methods for the high boost test using a higher resolution grid with 800 zones at time .the density under - shoot ( about 30% ) in figures [ fig : fig3 ] and [ fig : fig4 ] is due to a lack of sufficient spatial resolution , and improves significantly by increasing the number of zones .tables [ tab : errors1 ] & [ tab : errors2 ] summarize the errors in the primitive variables , , and for different grid resolutions and cfd methods using the - 1 norm ( i.e. 
, , where and are the numerical and exact solutions , respectively , and for 1d problems the orthogonal grid spacings are set to unity ) .included in table [ tab : errors1 ] for comparison are the errors reported by using marquina s approximate riemann solver .they also tested the roe and flux - split approximate solvers and achieved similar results as with marquina s method , so we do not include those numbers here .we find the errors in table [ tab : errors1 ] are quite comparable between all three methods with convergence rates just under first order as expected for shock capturing methods .for the high boost case in table [ tab : errors2 ] , our errors are comparable to those reported by for the same shock tube simulation using an extended high order piecewise parabolic method ( ppm) with an exact riemann solver .however , we note that their published errors are for the conserved quantities ( generalized fluid density , generalized energy density , and covariant momentum density ) rather than the primitive variables we report .their results are included in table [ tab : errors2 ] .we also note that the slightly larger errors in the 3d av results of table [ tab : errors1 ] are due primarily to boundary effects ( particularly at the grid corners ) and not to shock capturing differences .in fact , errors computed only along the main diagonal are about the same for the nocd and av methods .table [ tab : errors3 ] shows the mean - relative errors ( defined as , where again and are the numerical and exact solutions , respectively ) in the primitive variables over a range of boost factors using 800 zones to cover the same unit domain .the different boost factors are established by systematically increasing the original value of over the moderate boost case .these errors are also displayed graphically in figure [ fig : fig5 ] , comparing the av and nocd methods up to the maximum boost ( corresponding to a velocity of ) allowed at this grid resolution , which we define as four cells to cover the leading post - shock density plateau using the analytic solution as a guide .the increasing trend ( with boost ) in error reflects the stronger nonlinear coupling through the fluid velocity and the narrower and steeper leading shock plateau found in the density plots of figures [ fig : fig3 ] and [ fig : fig4 ] . over the range of shock velocities we have simulated , the errors are comparable between the av , nocd , and godunov methods .a second test presented here is the wall shock problem involving the shock heating of cold fluid hitting a wall at the left boundary ( ) of a unit grid domain .the initial data are set up to be uniform across the grid with adiabatic index , pre - shocked density , pre - shocked pressure , and velocity , where is the infall velocity parameter .when the fluid hits the wall a shock forms and travels to the right , separating the pre - shocked state composed of the initial data and the post - shocked state with solution in the wall frame , \end{aligned}\ ] ] where is the velocity of the shock front , and the pre - shocked energy and post - shocked velocity were both assumed negligible ( ) . 
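since the explicit jump conditions are not reproduced above , the sketch below evaluates the analytic post - shock state usually quoted for this test ( cold pre - shock gas reflecting off a wall ) ; these are the standard textbook relations and should be read as an assumption standing in for the paper s own expressions .

    import math

    def wall_shock_state(v_infall, gamma_ad, rho0):
        # standard relativistic reflection-shock relations for cold infalling gas:
        #   w        lorentz factor of the infalling gas
        #   eps      post-shock specific internal energy (= w - 1)
        #   rho      post-shock rest-mass density
        #   v_shock  coordinate speed of the shock front in the wall frame
        w = 1.0 / math.sqrt(1.0 - v_infall ** 2)
        eps = w - 1.0
        rho = rho0 * ((gamma_ad + 1.0) / (gamma_ad - 1.0)
                      + gamma_ad * eps / (gamma_ad - 1.0))
        v_shock = (gamma_ad - 1.0) * w * abs(v_infall) / (w + 1.0)
        return w, eps, rho, v_shock

    print(wall_shock_state(v_infall=0.9, gamma_ad=4.0 / 3.0, rho0=1.0))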
to facilitate a direct comparison between our results and the genesis code of ( which again uses marquina s approximate riemann solver ) all of the results shown in the figures and tables , unless noted otherwise , were performed on a 200 zone uniformly spaced mesh and ran to a final time of .also , for the nocd methods , the courant factor is set to , and we use the van leer limiter for gradient calculations , which generally gives smaller errors when compared to the more diffusive minmod limiter ( about a 30% reduction for the lower boost cases we have tried ) . for the av methods , we use the scalar viscosity with , , and for all the runs .figures [ fig : fig6 ] & [ fig : fig7 ] show spatial profiles for the case with initial velocity and 200 zones for the av and nocd methods , respectively .table [ tab : errors4 ] summarizes the - 1 norm errors in both methods as a function of grid resolution .the values given in parentheses are the contributions to the total error in the first twenty zones from the reflection wall at .these numbers clearly indicate a disproportionate error distribution from wall heating , an effect that is especially evident in the av results , and particularly in the density curve where the first two data points in figure [ fig : fig6 ] differ significantly from the true post - shock state . excluding this contribution may give a more accurate assessment of each method s ability to resolve the actual shock profile .figure [ fig : fig8 ] plots the mean - relative errors ( using 200 zones ) in density , which are generally greater than errors in either the pressure or velocity , as a function of boost factor up to about the maximum boost that the av methods can be run accurately . although we are not able to extend the av method reliably ( which we define by a 10% mean error threshold , and increased sensitivity to viscosity parameters ) beyond , the nocd methods , on the other hand , are substantially more robust .in fact , as shown in table [ tab : errors5 ] and figure [ fig : fig9 ] , the nocd schemes can be run up to arbitrarily high boost factors with stable mean relative errors , typically less than two percent with no significant increasing trend .these errors are generally smaller than those quoted by .however , we note that the errors for the av method presented in figure [ fig : fig8 ] and table [ tab : errors5 ] can be improved significantly by either lowering the courant factor or increasing the viscosity coefficients . for example, decreasing from 0.6 to 0.3 , or increasing from 2 to 3 for the case reduces the - 1 norm in density from 0.116 to 0.048 and 0.033 , respectively .we have also been able to run accurate wall shock tests with the av method at higher boosts than shown in table [ tab : errors5 ] by choosing different parameter combinations ( e.g. , , and 400 zones can evolve flows with fairly well ) .however , rather than adjusting parameters to achieve the best possible result for each specific problem , we have opted to keep numerical parameters constant between code tests , boost factors , and numerical methods . as a test of hydrodynamic flows in spacetimes with nontrivial curvature ,we consider radial accretion of an ideal fluid onto a compact , strongly gravitating object , in this case a schwarzschild black hole .the fluid will accrete onto the compact object along geodesics , thus allowing the general relativistic components of our codes to be tested against a well - known analytic stationary solution . 
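the two gradient limiters compared above ( minmod and van leer ) have simple closed forms . the python sketch below gives the textbook versions for reference ; the argument conventions inside the actual nocd implementation may differ , so this is an illustration rather than the code used for the quoted runs .

import numpy as np

def minmod(a, b):
    # most diffusive tvd limiter: zero at extrema, otherwise the smaller slope
    return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

def van_leer(a, b):
    # harmonic-mean limiter: steeper (less diffusive) than minmod
    denom = np.where(a + b == 0.0, 1.0, a + b)
    return np.where(a * b <= 0.0, 0.0, 2.0 * a * b / denom)

# a and b are the one-sided differences of a cell-centred variable q,
# e.g. a = q[1:-1] - q[:-2] and b = q[2:] - q[1:-1]; the limited slope is used
# for second-order reconstruction at the cell interfaces.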
assuming a perfect fluid in isotropic schwarzschild coordinates where is the isotropic radius , the exact solution to this problem is dependent on a single parameter , the gravitational binding energy ( ) . in terms of this parameter ( which we set to in our tests ), the solution can be written where is the boost factor , is the radial infall velocity in isotropic radial coordinates , is the generalized density in isotropic cartesian coordinates , is the generalized internal energy in isotropic cartesian coordinates , is the adiabatic index , and and are constants of integration which we set to and in the simulations .the computational domain for this problem is constructed to be ( where is the black hole mass ) and centered along the -axis with . in the -directionthe inner boundary zone is defined to lie outside the event horizon at in isotropic coordinates to guarantee all boundary zones are outside the horizon , and extends to along the line .calculations are carried out on different resolution grids , ranging from to to check code convergence .all variables are initially set to negligible values throughout the interior domain ( , , , and ) , and the static analytic solutions are specified as outer boundary conditions at all times . along the inner boundary ,outflow conditions are maintained by simply setting the first derivatives of all variables to zero at the end of each time step .thus fluid flows onto the computational grid from all of the analytically - specified ( inflow ) boundaries , and exits from the lower -plane closest to the black hole .all results presented here were generated from simulations run until steady - state was achieved at , and numerical parameters are defined as in previous tests , namely , , in the av runs , and in the nocd results .table [ tab : errors6 ] summarizes the global mean - relative errors in both methods as a function of grid resolution .figures [ fig : fig10 ] & [ fig : fig11 ] show spatial profiles of density and velocity along the -axis for and zones for the av and nocd methods , respectively .although the numerical results in table [ tab : errors6 ] converge to the analytic solution with grid resolution , they converge at a rate between first and second order due in part to the treatment of boundary conditions and time discretization errors . in particular , comparing the analytic and numerical solutions , we find that maximum relative errors occur near the event horizon along the inner -boundary .for the av method , the maximum relative errors for density and velocity with zones are 9.16% and 2.49% , respectively , compared to global mean - relative errors of 1.36% and 0.63% . for the nocd method , the maximum relative errors are 24.4% and 7.42% , compared to global mean - relative errors of 2.11% and 0.14% .the global errors in both methods , in spite of being computed on a nonsymmetric cartesian mesh , are comparable to those reported by other authors . for instance, saw relative errors of 1 - 3% in density and velocity near the horizon using an artificial viscosity code on a cylindrical grid . 
saw mean relative errors of 2.67% and 0.99% using a godunov - type method on a spherical grid .also , decreasing the courant factor from to 0.2 reduces the errors in both av and nocd methods by about a factor of three , consistent with first order time discretization , and increases the rate of spatial convergence closer to second order .we have developed new artificial viscosity and non - oscillatory central difference numerical hydrodynamics schemes as integral components of the cosmos code framework for performing fully general relativistic calculations of strong field flows .these methods have been discussed at length here and also compared with published state - of - the - art godunov methods on their abilities to model shock tube , wall shock and black hole accretion problems .we find that for shock tube problems at moderate to high boost factors , with velocities up to and limited only by grid resolution , internal energy formulations using artificial viscosity methods compare quite favorably with total energy schemes such as the nocd methods , the godunov methods using the marquina , roe , or flux - split approximate riemann solvers , and the piecewise parabolic method with an exact riemann solver .however , av methods can be somewhat sensitive to parameters ( e.g. , viscosity coefficients , courant factor , etc . ) and are generally suspect at high boost factors ( ) in the wall shock problems we have considered here . on the other hand , nocd methods can easily be extended to ultra - relativistic velocities ( ) for the same wall shock tests , and are comparable in accuracy , over the entire range of velocities we have simulated , to the more standard but complicated riemann solver codes .nocd schemes thus provide a robust new alternative for simulating relativistic hydrodynamical flows since they offer the same advantages as godunov methods in capturing ultra - relativistic flows but without the cost and complication of riemann solvers or flux splitting .they also provide all the advantages of av methods in their speed , ease of implementation , and general applicability ( including straightforward extensions to more general equations of state ) without explicitly using artificial viscosity for shock capturing .
|
high resolution , non - oscillatory , central difference ( nocd ) numerical schemes are introduced as alternatives to more traditional artificial viscosity ( av ) and godunov methods for solving the fully general relativistic hydrodynamics equations . these new approaches provide the advantages of godunov methods in capturing ultra - relativistic flows without the cost and complication of riemann solvers , and the advantages of av methods in their speed , ease of implementation , and general applicability without explicitly using artificial viscosity for shock capturing . shock tube , wall shock , and dust accretion tests , all with adiabatic equations of state , are presented and compared against equivalent solutions from both av and godunov based codes . in the process we address the accuracy of time - explicit nocd and av methods over a wide range of lorentz factors .
|
the study of the probability distribution of the primordial density fluctuations in the universe is one of the most fundamental challenges in present - day cosmology .the determination of this probability distribution could constrain the ensemble of cosmological theories of the formation of primordial density fluctuations . in particular , standard inflationary theories establish that these density fluctuations are gaussianly distributed , so the detection of some non - gaussian features would question these kinds of theories .a powerful tool in the determination of cosmological parameters is the cosmic microwave background ( cmb ) .the primordial density fluctuations left their imprint in the cmb because of the thermal equilibrium existing before matter - radiation decoupling . in this manner , the initial density fluctuations led to anisotropies in the cmb , and these inherited the probability distribution of the former . in short , testing cmb gaussianity is equivalent to testing the gaussianity of the primordial fluctuations and hence the validity of standard inflation . for these reasons there is great interest in the implementation of statistical methods to analyse the gaussianity of current and future cmb data .a representative sample of these methods ( containing real - space statistics , wavelets and fourier - based analyses ) has recently been applied to probe the gaussianity of the data from the wmap mission . in this paper , we shall concentrate on the smooth tests of goodness of fit developed by .these methods have already been adapted to cmb data analysis , and applied to the study of the gaussianity of maxima data .we further adapt this method here to deal with data from interferometers . with their intrinsic stability ( only the correlated signal is detected ) and their ability to reject atmospheric signals ( e.g. ) , interferometers have proved their worth in ground - based observations of the cmb . over the last few years several interferometers have provided high - sensitivity measurements of the power spectrum at intermediate angular scales . in particular , these experiments include the very small array ( vsa , ) , dasi and cbi ; for a more detailed compilation of interferometric experiments measuring the cmb see , e.g. , .these instruments sample the fourier modes of the measured intensity , so they provide a direct measurement of the power spectrum on the sky and allow us to study the gaussianity directly in harmonic space .non - gaussianity analyses have already been performed on interferometer datasets . in the first case these analyses are mainly based on real - space statistics ( they are applied to the maximum - entropy reconstructed maps in real space ) , while in the second case the analysis is performed in fourier space by computing the bispectrum in a similar way to the computation of the power spectrum .we follow the second approach and present here the smooth tests of goodness - of - fit adapted to the study of the gaussianity of interferometric data directly in the visibility space , so we test directly the gaussian nature of the fourier modes on the sky . since an interferometer measures complex quantities ( visibilities ) , we present two different analyses in this paper , one on the real and imaginary parts , and another on the phases .the experimental data are a combination of the signal plus noise .
in principle, it would be desirable to analyse data in which signal dominates over the noise ; that is , data with a high signal - to - noise ratio .one suitable approach to select only data with high signal - to - noise ratio is the transformation of the data into signal - to - noise eigenmodes .this formalism allows us to eliminate the data dominated by noise .moreover , as we explain below , the signal - to - noise eigenmodes are not correlated , allowing the direct implementation of the method . as it is shown below, our method looks for deviations with respect to the null hypothesis ( gaussianity ) in statistics related to the measured moments ( mean , variance , skewness , kurtosis , ... ) of the visibility signal - to - noise eigenmodes .thus , it is clear that the test is specially tailored for detecting non - gaussian signals directly in the fourier domain .this gaussianity analysis of the fourier components completes the analysis of gaussianity in the real space , given that some non - gaussian detection in the fourier space could indicate some degree of non - gaussianity in the real space and vice versa ( a linear transformation of gaussian variables is also gaussian ) .in fact , for example , the analysis of gaussianity directly in the fourier space by means of the bispectrum has demonstrated to be a powerful tool in the detection of the parameter .then , the analysis in fourier space could detect non - gaussianity that , in principle , should be more evident in real space . as an illustration of this fact , we will show how our method detects the non - gaussianity of a cosmic strings map directly in fourier space , even though it seems more natural to search the non - gaussian features of these objects in real space .the organization of the paper is as follows .smooth tests of goodness - of - fit are introduced in section [ section_2 ] .we focus our interest in the statistics developed by and their application to the gaussian and uniform distributions that we are going to test . in section [ section_3 ]signal - to - noise eigenmodes are reviewed and applied to the case of interferometer observations .the simulations we are using are described in section [ sec : vsa ] , where the very small array ( vsa ) is taken as a reference experiment .the application to gaussian simulations is described in section [ sec : gauss ] .the power of the test in discriminating between gaussian and non - gaussian signals affected by gaussian noise is studied in section [ section_4 ] .we consider three kinds of non - gaussian simulations : those performed using the edgeworth expansion introduced by , a string simulation created by and simulations with a coupling parameter .finally , section [ section_5 ] is dedicated to discussion and conclusions .in this section we present the smooth tests of goodness - of - fit and the work of applied to a gaussian and a uniform variable .let us suppose independent realizations , , of a statistical variable ( .our aim is to test whether the probability density function of is equal to a prefixed function .the smooth tests of goodness - of - fit are constructed to discriminate between the predetermined function and another that deviates smoothly from the former and is called the _ alternative density function_. 
we consider an alternative density function given by , where is a parameter vector whose component is and there exists a value such that .the alternative function deviates smoothly from when is displaced from .the probability density function of the independent realizations of is given by .given these realizations , we calculate the estimated value of by means of the maximum likelihood method and denote this value by .the quantity is defined such that where is a measurement of the difference between and and is therefore a test of the hypothesis that .assuming to be close to and to be very large ( ) , the quantity is equal to the so - called _ score statistic _ , where is the vector whose components are , with the log - likelihood function defined as . is the transposed vector of and the components of the matrix are equal to . among all the possible choices of alternative density functions , we select that presented in the work of . in that work , the alternative density function is defined as where is a normalization constant and the functions are orthonormal on with ( note that ) .it can be demonstrated that the score statistic associated with the alternative is given by note that the previous quantity is not the component of the vector in expression ( [ eq:001 ] ) ( although they are closely related ) .we denote the two quantities with the same letter to follow the notation of . as equation ( [ eq:001 ] ) shows , the expression ( [ eq:004 ] ) gives the score statistic only when the distribution of the data is ( because the condition is imposed ) . in this case ( that is , when the distribution of the data is equal to the prefixed one ) , the statistic is distributed as when .this holds because is gaussianly distributed when ( the central limit theorem ) .moreover , it is easy to prove that the mean value of is equal to unity , independently of the value of .we see that we can work with the quantities or alternatively with ( if the distribution of the data is equal to , these are distributed following a function ) . in this paper we apply these statistics to two cases : on one hand , to the real and imaginary parts of the visibilities and , on the other hand , to the phases of the visibilities . in the first case , the function is a gaussian distribution , and in the second case the distribution is uniform . if is a gaussian distribution with zero mean and unit variance , the functions are the normalized hermite - chebyshev polynomials with , , , and , for , .the statistics are given by ^ 2 /24 , where is the estimated moment of order .bearing in mind the relation between the and the quantities , we see that the statistic is related to moments of order , so that this test is _ directional _ ; that is , it indicates how the actual distribution deviates from gaussianity . for example , if and are small and is large , the data have a large value because of the relation between and ; that is , the data have a large skewness .this holds for all distributions and not only for the gaussian case .these expressions ( or the equivalent ) are used in to test the gaussianity of the maxima data .the skewness and the kurtosis of these data are constrained in using the same method .the smooth test of goodness - of - fit applied to the uniform distribution was developed for the first time by , and his work is the starting point of the work of . if we want to test whether is a uniform distribution in the interval ] .
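the symbols in the expressions just quoted were lost in the text extraction . as a reference , the following python sketch gives the conventional form of the four score components for the gaussian case , built from the normalized hermite - chebyshev polynomials mentioned above ; it is a reconstruction under that standard convention , not a verbatim copy of the original equations , and the variable names are illustrative .

import numpy as np

def smooth_test_statistics_gaussian(x):
    # x : array of n independent data, n(0,1) distributed under the null hypothesis.
    # returns the four score components built from the estimated moments;
    # under the null each component is asymptotically chi-square with 1 dof.
    n = len(x)
    mu1, mu2 = np.mean(x), np.mean(x**2)
    mu3, mu4 = np.mean(x**3), np.mean(x**4)
    u1 = n * mu1**2                               # sensitive to the mean
    u2 = n * (mu2 - 1.0)**2 / 2.0                 # sensitive to the variance
    u3 = n * (mu3 - 3.0 * mu1)**2 / 6.0           # sensitive to the skewness
    u4 = n * (mu4 - 6.0 * mu2 + 3.0)**2 / 24.0    # sensitive to the kurtosis
    return u1, u2, u3, u4

the sum of the four components plays the role of the global statistic that is compared with a chi - square distribution with four degrees of freedom in the text .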
in this case we take the legendre polynomials : , and for .thus the normalized functions are with . in this case : ^ 2 /64 where . as in the gaussian variable case , we see that the test is directional : deviations in the estimated moments from the expected values for a uniform distribution give deviations in the statistics .if we assume a small field size ( where the flat - sky approximation of the observed region is valid ) , an interferometer measures complex visibilities at a frequency ( the van cittert - zernike theorem ) : where is the angular position of the observed point on the sky and is a baseline vector in units of the wavelength of the observed radiation ( monochromatic radiation is assumed , and hereinafter we omit the dependence on the frequency , ) . is the primary beam of the antennas ( normalized to ) , and the function is the brightness distribution on the sky , which can be easily translated into temperature contrast values for the case of the cmb .if we denote by the fourier transform of and by the cmb power spectrum ( in units of brightness , ) then the correlations between the visibilities observed at and are given by where denotes the complex conjugate of .these quantities can be computed semi - analytically as described in , who show how to perform a maximum - likelihood analysis to extract the cmb power spectrum from interferometer observations .note that for the common case in which the primary beam can be approximated by a gaussian function , it can be demonstrated that the real and the imaginary parts of the observed complex visibilities are uncorrelated and thus independent , if the gaussianity of the cmb holds .our aim is to test whether the visibilities are gaussianly distributed .this can be done by analysing the real and imaginary parts of the visibilities or by studying their phases .the following subsections explain how we work with the data .we work with signal - to - noise eigenmodes as they are defined in the work of .let us suppose an observed variable , where the subscript denotes a pixel or a position in the space where that variable is defined ( for example , the real or the imaginary parts of the visibility are measured at a position of the so - called space ) .this variable is the sum of a signal and a noise component ( ) whose respective correlation matrices have components given by and .the brackets indicate an average over many realizations .the signal - to - noise eigenmodes are defined by where is the so - called _ square root matrix _ of the noise correlation matrix ( i.e. ) and is the rotation matrix which diagonalizes the matrix .the eigenvalues of this diagonalization are denoted by . in the case we are studying , the noise has zero mean and is not correlated , i.e. , so the components of are .eq .( [ eq:003 ] ) gives us _ transformed _ signal and noise and such that : with and .so , the correlation matrix of is given by with .we now have a clear characterization of the signal - to - noise relation of our ( transformed ) data .the noise dominates over the signal if , so signal - to - noise eigenmodes with a very low value of the associated eigenvalue are very much dominated by the noise and we do not want to include them in our analysis .
this is the main interest of this approach , but there is another interesting point : the signal - to - noise eigenmodes are uncorrelated data .this latter fact will be very useful in the application of the tests of gaussianity described in this paper , as explained in the next subsection .we write the visibilities in terms of their real and imaginary parts : testing the gaussianity of the visibilities is equivalent to testing the joint gaussianity of their real and imaginary parts . as indicated above , when the primary beam is gaussian , the real and imaginary parts are independent if the gaussianity of the cmb holds .our data to be analysed are the set of real parts of the visibilities , which are correlated among themselves ( that is : ) , and the imaginary parts , which are also correlated ( ) .moreover , we only have one realization .the test presented here works with a large amount of independent data ( see the hypothesis in the deduction of the score statistic , section 2 ) . to convert our correlated ( and therefore dependent ) data to a sample of independent data we proceed in the following way .given the real part of the visibilities , we perform the transformation in signal - to - noise eigenmodes ( expression [ eq:003 ] ) and define the variable these new data are uncorrelated and normalized ( zero mean , unit dispersion ) .we operate in the same way with the imaginary parts and add the resulting quantities to those obtained with the real parts .finally , we have the data ( of visibilities ) which are uncorrelated and normalized . moreover , if the visibility distribution is multinormal then the data are independent and gaussian distributed with zero mean and unit dispersion . then , if gaussianity holds , the data fulfill the hypothesis of the smooth tests of goodness - of - fit and their statistics must have a distribution .the test is then applied to the quantities .the analysis can also be performed on the visibility phases .it is well known that if the visibility , , is gaussianly distributed , its phase has a uniform distribution .the phases of the visibilities are not independent because there are correlations of the real parts among themselves , and among the imaginary parts ( the real and imaginary parts are not correlated if the primary beam is a gaussian function ) .thus , for the case of a gaussian primary beam , given two points and in the plane , the joint probability of finding the phase value in and in is given by where : thus . as an example , we have calculated the form of the previous probability for the particular case and ; the plot is shown in figure [ fig1 ] .the statistics of subsection 2.3 cannot be applied directly to the phases because these are not independent .we therefore construct a related set of independent quantities in the following way .we decorrelate the different visibilities so that , under the assumption of gaussianity , they will be independent . to do this we proceed as in the previous section : we decorrelate the real parts of the visibilities .that transformation gives us the quantities .we proceed in the same way with the imaginary parts and obtain the associated quantities .
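the decorrelation just described reduces to a few linear - algebra operations . the following is a minimal python sketch of the construction , assuming uncorrelated noise with known per - visibility variances and a known signal covariance matrix ; variable names and the eigenvalue cut are illustrative and not taken from the original analysis .

import numpy as np

def signal_to_noise_eigenmodes(data, signal_cov, noise_var):
    # data       : observed real (or imaginary) parts of the visibilities
    # signal_cov : signal correlation matrix of those quantities
    # noise_var  : per-visibility noise variances (uncorrelated noise assumed)
    L = np.diag(1.0 / np.sqrt(noise_var))       # L = N^(-1/2), diagonal here
    eigval, R = np.linalg.eigh(L @ signal_cov @ L.T)
    xi = R.T @ (L @ data)                       # eigenmodes, variance = 1 + eigval
    return eigval, xi

def normalized_eigenmodes(eigval, xi, cut=0.0):
    # keep eigenmodes above the signal-to-noise cut and normalize to unit variance
    keep = eigval > cut
    return xi[keep] / np.sqrt(1.0 + eigval[keep])

# hypothetical use for one field:
# e_r, xi_r = signal_to_noise_eigenmodes(vis_real, cov_real, noise_var)
# e_i, xi_i = signal_to_noise_eigenmodes(vis_imag, cov_imag, noise_var)
# u = np.concatenate([normalized_eigenmodes(e_r, xi_r), normalized_eigenmodes(e_i, xi_i)])
# u is the uncorrelated, normalized sample to which the statistics are applied; for the
# phase analysis completed in the next paragraph, phases are built from the decorrelated
# real and imaginary parts, e.g. (np.arctan2(xi_i, xi_r) / (2.0 * np.pi)) % 1.0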
in this way , we have the complex number .these quantities are independent for different .we calculate the phase of each one and apply to them the statistics explained in subsection 2.3 for a uniform variable . to illustrate the method described above , in what follows we apply it to simulated observations with the very small array .the vsa is a 14 - element heterodyne interferometer array , operating at frequencies between 28 and 36 ghz with a 1.5 ghz bandwidth and a system temperature of approximately 30 k , sited at 2400 m altitude at the teide observatory in tenerife ( see for a detailed description ) .it can be operated in two complementary configurations , referred to as the compact array ( which probes the angular range 150 - 900 ; see for observations in this configuration ) and the extended array ( = 300 - 1500 ; see and for observations in this configuration ) . for the simulations in this paper , we will use as a template experiment the extended array configuration , which corresponds to the antenna arrangement shown in figure [ fig_ant ] ( the antennas are mounted on a tilting table hinged along its northern edge ) .the observing frequency is 33 ghz , and the primary beam at this frequency has a fwhm of 2.1 ( corresponding to a diameter of the aperture function of ) . for the vsa , the shape of the antenna can be approximated by a gaussian function , so we can calculate the correlation matrix using the expressions ( 11 ) and ( 12 ) presented in .we use a template observation corresponding to hr integration time , and we explore several values of .the corresponding noise level will be simulated by adding to each visibility a random gaussian number with an amplitude of jy per visibility ( one channel , one polarization ) in an integration time of 1 min , to reproduce the observed sensitivity of the vsa .thus , in a single day's 4 hr observation we get an rms of mjy / beam in the real - space map .a simulated 1 day vsa file typically contains 25000 64 - s visibilities . in order to perform the analyses here , we proceed to bin these data into cells of a certain size , as described in . for the analyses carried out in this paper , we have used a bin size of 9 , where is the wavelength of the measured signal , which is similar to that used for the power spectrum evaluation in .we have explored different values of the bin size , and the number has been chosen according to two criteria .if the bin size is too small , the signal - to - noise ratio per pixel is very low . on the other hand , for large bin sizes , the number of binned visibilities is too small to apply the test .the value of 9 provides both a reasonable number of visibilities and good power in the detection .nevertheless , we have checked that small variations in this number ( using values from 9 to 14 ) do not produce significant changes .our template observation of the vsa contains a total of 895 visibility points after binning in 9 cells .this template will be used throughout the paper . in this section we calibrate the method by using simulated gaussian observations .it can be shown that the correlation matrix of the real parts of the visibilities can be written as with , where are the signal - to - noise eigenvalues and is the rotation matrix defined in subsection [ sec:002 ] . to generate gaussian simulations with the desired correlation we start with a set ( of visibilities ) .the real parts of the visibilities are given by . in an analogous way we construct .
after that , a gaussian realization of the noise is added to each visibility .given the simulated visibilities we decorrelate their real and imaginary parts as explained in subsection [ sec:002 ] .we then calculate the distributions of the quantities ( subsection [ sec:001 ] ) . as we have gaussian simulations , if the amount of data is relatively large , these distributions must be very close to functions , as explained above .the form of the distributions is shown in fig .[ fig2 ] and they are compared with functions normalized to the number of simulations used ( dashed line ) .we have used 10000 simulations of a vsa field , with a noise level corresponding to hr for each one .this is the typical noise level achieved in a single - field vsa observation .we bin these data using a cell size of 9 , to obtain 895 binned visibilities . in table [ tab1 ] the mean value and the standard deviation ( ) of the distributions are compared with the same quantities of the distribution . as mentioned above , the mean value of the must be equal to 1 independently of the number of data . from table [ tab1 ] we see that the convergence to this value is very accurate .the value of the standard deviation must be only when the number of data tends to infinity ( asymptotic case ) . in the case we are simulating ( 895 visibilities ) we see that the values of the standard deviation are very close to the values ( only seems to separate slightly from the asymptotic value ) .the vsa fields we are analysing have a number of visibilities between and , so the approximation to the functions will be better than the case shown in table [ tab1 ] . because of that , we can approximate the to distributions in a very accurate way .we then suppose the distributions to be for the statistics or for the .we have applied to these simulations the method described for the phases and we also find distributions for the statistics .the advantage of this assumption is that we do not have to make simulations to calculate the statistical distributions for every vsa field for the gaussian or uniform null hypothesis .we have checked that the distribution of the statistics is also very close to functions even when the number of data is . in the following sections we cut data with low signal - to - noise eigenvalues and the statistics are calculated with data ; also in this case , the distributions of the statistics will then be approximated by functions .

table [ tab1 ] : mean value and standard deviation of the distributions of the four statistics obtained from the gaussian simulations , compared with the corresponding values for the chi - square ( 1 dof ) distribution ( last column ) .

                statistic 1    statistic 2    statistic 3    statistic 4    chi - square ( 1 dof )
mean            0.9863         0.9983         1.0370         1.0131         1.0000
std . dev .     1.4077         1.4426         1.4848         1.5265         1.4142
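the recipe used in this section to generate correlated gaussian visibilities can be written compactly . the sketch below is a minimal python illustration under the same assumptions as before ( uncorrelated noise with known variances , known signal covariance ) : the correlated signal is built from unit - variance gaussian deviates using the eigen - decomposition of the whitened signal matrix , and a noise realization is then added . names are illustrative , not taken from the original code .

import numpy as np

def simulate_visibility_parts(signal_cov, noise_var, rng):
    # one gaussian realization of, e.g., the real parts of the visibilities
    n = len(noise_var)
    L = np.diag(1.0 / np.sqrt(noise_var))      # N^(-1/2)
    L_inv = np.diag(np.sqrt(noise_var))        # N^(1/2)
    eigval, R = np.linalg.eigh(L @ signal_cov @ L.T)
    eigval = np.clip(eigval, 0.0, None)        # guard against round-off
    g = rng.standard_normal(n)
    # signal = N^(1/2) R E^(1/2) g has covariance exactly equal to signal_cov
    signal = L_inv @ (R @ (np.sqrt(eigval) * g))
    noise = np.sqrt(noise_var) * rng.standard_normal(n)
    return signal + noise

# rng = np.random.default_rng(0)
# vis_real = simulate_visibility_parts(cov_real, noise_var, rng)
# vis_imag = simulate_visibility_parts(cov_imag, noise_var, rng)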
in order to characterize the power of the method in each of these cases , we use the `` power of the test '' .roughly speaking , the power , , of a given test at a certain significance level ( ) is parameterized as the area of the alternative distribution function outside the region which contains the area of the distribution function of the null hypothesis ( the gaussian case ) .thus , a large value of for a prefixed small value of indicates that the two distributions have a small overlap , so we can distinguish between them ( e.g. ) . for definiteness , in what follows we adopt for the significance levels the values ( i.e. significance levels of 5% and 1% respectively ) , so we shall quote this pair of values for each one of the cases considered .we first construct simulations with a certain degree of non - gaussianity by using the edgeworth expansion .we then analyse these simulations and calculate the power of the test to discriminate between a gaussian distribution ( the null hypothesis ) and a distribution with skewness and kurtosis injected via the edgeworth expansion ( the alternative hypothesis ) .the aim is to quantify which signal - to - noise level is required to detect a certain degree of non - gaussianity in the data with our method .we have two options in preparing these simulations : we can inject the skewness and kurtosis either in real space or in visibility space . if we include the non - gaussianity through the edgeworth expansion in real space , this non - gaussianity is diluted when we transform to the fourier ( or visibility ) space ( this is illustrated in appendix a ) .thus , we decide to inject the skewness / kurtosis directly in fourier space , and calibrate our method in terms of the ability to detect deviations of the moments of the signal - to - noise eigenmodes of the visibilities with respect to the gaussian case .a similar approach has been adopted by other authors ( e.g. in they explore the non - gaussianity of the visibilities simultaneously with the estimation of the power spectrum , by using a modified non - gaussian likelihood function ) .the use of non - gaussian simulations with given skewness / kurtosis in fourier space generated from the edgeworth expansion is not motivated by any physical model , but it is a reasonable approach to calibrate the power of a non - gaussianity test because these non - gaussian maps are easy to prepare with specified statistical properties . moreover , systematic effects could introduce a non - gaussian signal in the visibilities which could be mimicked with the edgeworth expansion .the probability distribution of these systematic effects could be , for example , asymmetric and , to a first approximation , this fact can be modeled by a distribution with a non - zero skewness . in addition , the distribution could be more peaked or flatter than the gaussian one , and that can be modeled by a distribution with a non - zero kurtosis . in this manner the edgeworth simulations are taken as a benchmark to quantify the power of our method to detect certain kinds of non - gaussian signals .these non - gaussian simulations are generated as follows .we generate a realization of independent values with zero mean , unit variance , and skewness and kurtosis and , respectively . for definiteness , we adopt here the values . as in subsection [ sec : gauss ] , the real parts of the visibilities are .
the imaginary parts are generated in an analogous way . after that , we add different noise levels according to a vsa observation .we then decorrelate the data of the simulations and calculate the power of the tests . again , we use the same vsa template observation , binned into cells of side 9 , to study the power of our tests .figure [ fig3 ] shows the logarithm of the eigenvalues associated with the real visibilities ( those resulting from the diagonalization of ) and those associated with the imaginary parts ( the diagonalization of ) .only the of the data have an eigenvalue signal - to - noise higher than 0.01 ; that is , a signal - to - noise ratio higher than 0.1 .( the eigenvalues shown in figure [ fig3 ] are obtained from the template observation used in figure [ fig2 ] , for both the real and imaginary parts of the visibilities ; we have eigenvalues , only 420 of which have a value above 0.01 , corresponding to a signal - to - noise of about 0.1 . ) first , we consider the case of a low signal - to - noise ratio , and we adopt the noise levels corresponding to a single - field vsa observation ( hr , ) .we analyse the non - gaussian simulations with different cuts in the eigenvalues ; that is , we include in the analysis only those eigenmodes whose eigenvalue is higher than a value .table [ tab2 ] shows the power of the , , and statistics to discriminate between a gaussian and a non - gaussian distribution .it can be demonstrated that the simulations are constructed such that has zero mean and unit variance , so only the and distributions are notably different from a distribution ( see section [ sec:001 ] ) . in fact , also deviates slightly from a distribution when the analysed data have some degree of kurtosis ( is related to , i.e. to the fourth power of the data ) , but , as we can see in table [ tab2 ] , this deviation is very low .the corresponding probability function is shown in fig .[ ng_vs_g ] ( dotted line ) and is compared with a gaussian function ( dashed line ) .we have used 5000 simulations in this computation . note that although the input skewness and kurtosis is 1.0 , the output simulations have a mean skewness equal to and a mean kurtosis equal to ; i.e. we are analysing simulations with kurtosis , so we expect less power in the detection of the kurtosis than in the detection of the skewness .the first column in table [ tab2 ] indicates the minimum value of the signal - to - noise eigenvalues that we use to perform the analysis .we indicate the square root of the latter quantity ( signal - to - noise ratio ) in parentheses .the second column is the number of eigenmodes that we use to calculate the statistics ; that is , the number of eigenvalues such that . finally , the last columns indicate the power of the different statistics ( in percentages ) when the significance level is .we see that our test is not able to detect non - gaussianity when every eigenmode is included in the analysis . when eigenmodes with low signal - to - noise ratio are excluded , the power improves , but the values are still very poor .this tells us that the noise level of a single vsa pointing ( hr ) is too high to detect the non - gaussian simulations we have constructed via the edgeworth expansion .
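the power values quoted in the tables of this section can be estimated by monte carlo , comparing each statistic computed on non - gaussian simulations with the threshold set by the null distribution . a minimal python sketch of this estimate follows ; it adopts the chi - square null distribution assumed in the text , although gaussian simulations could equally be used to set the threshold .

import numpy as np
from scipy import stats

def test_power(stat_alternative, alpha=0.05, dof=1):
    # stat_alternative : values of one statistic over many non-gaussian simulations
    # threshold exceeded by a fraction alpha of the null chi-square distribution
    threshold = stats.chi2.ppf(1.0 - alpha, dof)
    # power = fraction of alternative-hypothesis simulations above that threshold
    return np.mean(stat_alternative > threshold)

# e.g. power at the 5% and 1% significance levels for a set of simulated statistics:
# p5 = test_power(u3_values, alpha=0.05)
# p1 = test_power(u3_values, alpha=0.01)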
( caption of figure [ ng_vs_g ] : dotted line , initial edgeworth distribution with the adopted skewness and kurtosis ; dashed line , gaussian distribution ; histogram , distribution obtained from a single simulation ( signal plus noise ) involving all the eigenvalues , for a noise level corresponding to a hr integration .the three distributions have zero mean and unit variance .the non - gaussianity in the single simulation is hardly seen in its histogram ; however , it is easily detected when we use our method , as shown in table [ tab3 ] . )

table [ tab2 ] : power ( in per cent , quoted as 5% / 1% significance levels ) of the four statistics for the edgeworth simulations with the noise level of a single - field ( hr ) vsa observation . the first column gives the eigenvalue cut ( signal - to - noise ratio in parentheses ) and the second the number of eigenmodes retained .

cut ( s / n )    num.    statistic 1    statistic 2    statistic 3    statistic 4
0.0 ( 0.0 )      1790    5.10/1.12      4.92/1.06      9.06/2.46      6.44/1.90
0.1 ( 0.32 )     188     5.12/0.82      5.96/1.54      26.20/13.26    12.30/7.48
0.2 ( 0.45 )     133     5.08/0.92      6.68/1.70      31.26/17.68    13.76/8.76
0.3 ( 0.55 )     107     4.84/1.04      6.38/1.74      32.06/18.46    13.70/9.70
0.4 ( 0.63 )     89      5.06/0.94      6.56/2.14      33.64/20.22    15.02/10.90
0.5 ( 0.71 )     76      5.82/1.10      7.14/2.30      34.22/20.56    14.82/10.94

we therefore now explore the case of an integration time of hr , which is comparable to the whole signal - to - noise level achieved in the dataset presented in . in this case , the noise levels are reduced by a factor with respect to the previous case .the results are shown in table [ tab3 ] . as is to be expected , the detection of non - gaussianity is better . for example , we could detect our non - gaussian simulations with a power equal to ( for the statistic ) with the signal - to - noise achieved during hours and analysing eigenmodes with eigenvalues higher than 0.4 or 0.5 . in this last case the distribution of all the data ( every eigenmode ) for one simulation is shown by the histogram in figure [ ng_vs_g ] .we see that this distribution is closer to a gaussian ( dashed line ) than the initial one ( dotted line ) for two reasons : first , the loss of skewness and kurtosis in the construction of the edgeworth simulations ( the actual skewness and kurtosis of the simulations is lower than the input values , see above ) and , second , the addition of gaussian noise . when we plot only the eigenvalues higher than 0.4 or 0.5 , the histogram is closer to the dotted line ; that is , we see the non - gaussianity .

table [ tab3 ] : same as table [ tab2 ] , but for a noise level corresponding to a hr integration time .

cut ( s / n )    num.    statistic 1    statistic 2    statistic 3    statistic 4
0.0 ( 0.0 )      1790    5.38/0.86      5.52/1.22      57.96/36.08    18.20/10.04
0.1 ( 0.32 )     513     5.62/1.22      6.48/1.62      95.92/88.62    35.58/25.64
0.2 ( 0.45 )     441     5.94/1.52      7.36/1.82      97.66/91.94    36.24/26.82
0.3 ( 0.55 )     402     5.60/1.08      7.60/2.22      98.26/93.28    37.42/28.52
0.4 ( 0.63 )     371     5.90/1.54      7.52/2.02      98.72/94.28    38.56/29.68
0.5 ( 0.71 )     245     5.52/1.40      7.80/2.38      98.82/94.44    39.24/30.80

in the tables shown in this section we have presented only the power of the statistics . in the case we are studying , has less power than because also has information about and , which do not have power and compensate for the power of .however , is better than the statistic if we also have detection of non - gaussianity in and/or . for the same reason , is slightly better than because it has combined information about and . in our case , however , is not better than , so we show only the statistics .
finally , we have analysed the edgeworth simulations with the test applied to the phases ( subsection [ sec:3_3 ] ) .we have found that the non - gaussianity of this kind of simulation is not detected with the phases method ; that is , the phases are compatible with a uniform distribution .a deeper analysis of the phases shows that the moments ( section [ sec : unif ] ) are slightly different from the values corresponding to a uniform distribution .however , the quantity of data with high signal - to - noise eigenvalues is not large enough to give values of the statistics that differ sufficiently from the values obtained from a uniform distribution , so the power of the test has very low values .( the distribution for the phases can be calculated analytically by assuming , for example , a distribution for the real and imaginary parts given by equation ( 25 ) in .we have found this distribution to oscillate about the uniform one , but the moments are very close to those of the uniform distribution . ) in this section , we probe whether our method is able to detect the non - gaussian signature introduced by cosmic strings . to this end , we apply our method to the analysis of the string simulation shown in fig .[ strings_map ] .we first analyse the simulation without noise ; in this way we can learn what happens when we have only the string map . after that we add different noise levels to see how well the method is able to detect the strings .although in theory the correlation matrix depends only on the power spectrum and the beam ( see expression [ eq:002 ] ) , the ( finite ) pixel size of the real string map we are analysing and the irregular coverage of the visibility plane could introduce noticeable imprecisions .the correlation matrix of the strings is therefore calculated using gaussian maps simulated with the same pixel size : we calculate the power spectrum of the real string map and construct gaussian real maps with this power spectrum ( we use 40000 gaussian simulations ) . from these gaussian maps we calculate the corresponding visibility values ( expression [ eq:005 ] ) .the correlation of these visibility simulations is that used for the analysis of the strings .note that in expression ( [ eq:002 ] ) there is no hypothesis concerning the statistics of the temperature field ; the only hypothesis is the homogeneity of the field .the template observation of the vsa is the same as in previous sections ( 895 binned visibilities in 9 cells ) . in this case we diagonalize the correlation matrix of the signal because we do not have noise .
with this diagonalization we decorrelate the data in a similar way to that done in sections [ sec:002 ] and [ sec:004 ] .we analyse the real and imaginary parts of the visibilities obtained from the string map without noise .the values found for the statistics are : , , and .the estimated moments , , are , , and .these values are clearly incompatible with a gaussian realization .the quantities ( see expression [ eq : y_k ] ) associated with the real parts of the string visibilities are shown in fig .[ fig_y_s ] ( those associated with the imaginary parts have similar features ) .every has an associated signal eigenvalue whose value decreases with .we see , then , that for the strings the absolute value ( or _ amplitude _ ) of grows when decreases ( this explains why the moments are so large ) .this feature shows that the are not well decorrelated or normalized .we analyse a gaussian map with the same procedure .figure [ y_s_gauss ] shows the resulting quantities for this gaussian case .we see that they have a width of order unity .thus , we conclude that the behaviour of fig .[ fig_y_s ] is a feature of the non - gaussianity of the string signal .the fact that the quantities are not well decorrelated or normalized indicates that we are not able to estimate their correlation matrix accurately .this seems to be because we have a model with a very large cosmic variance , and the correlation matrix ( or equivalently the power spectrum ) cannot be well estimated from only one realization .moreover , we are estimating the power spectrum with the expression that maximizes the likelihood under the hypothesis of gaussianity for the temperature field .but the temperature field is not gaussian for the strings , and so the estimated power spectrum for the strings ( that which maximizes the likelihood in this case ) can differ from the one we are using . for a study of the dependence of the power spectrum on non - gaussianity see . in the gaussian case , however , we are able to calculate the power spectrum and then we decorrelate and normalize the visibilities properly . in this way , the quantities have a width equal to unity , i.e. , as shown in figure [ y_s_gauss ] . summarizing , when there is no noise , in principle we should detect the strings by means of all the statistics ( even and ) because their values are not compatible with a distribution ( see table [ tab1 ] ) .( figure [ fig_y_s ] shows the quantities associated with the real part of the visibilities for an observation of the string map presented in fig .[ strings_map ] using our vsa template without adding noise ; figure [ y_s_gauss ] shows the same , but for a simulated gaussian cmb map . ) finally , we have applied the smooth tests of goodness - of - fit developed by neyman to the phases of the visibilities of the noiseless map of strings ( section [ sec:3_3 ] ) .the values are : ( ) , ( ) , ( ) and ( ) , and for the statistics : ( ) , ( ) , ( ) and ( ) .the probability of obtaining a lower or equal statistic value under the null hypothesis ( the visibilities are gaussian ) is shown in parentheses . in the following subsection we discuss the effect of gaussian noise on the detectability of strings with our method .
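as noted above , the correlation matrix used to decorrelate the string data was estimated from gaussian monte carlo simulations sharing the power spectrum of the string map . a minimal python sketch of that estimate is the following ; the array names are illustrative and the simulation of the visibilities themselves ( expression [ eq:005 ] ) is assumed to be available from elsewhere .

import numpy as np

def empirical_covariance(sim_vis):
    # sim_vis : array of shape (n_sims, n_vis) with, e.g., the real parts of the
    # visibilities from gaussian simulations generated with the measured power spectrum
    centred = sim_vis - sim_vis.mean(axis=0)
    # unbiased estimate of the covariance matrix used to decorrelate the string data
    return centred.T @ centred / (sim_vis.shape[0] - 1)

# with 40000 simulations of 895 binned visibilities this is an (895, 895) matrix:
# cov_real = empirical_covariance(sims_real)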
the features in fig .[ fig_y_s ] change when ( gaussian ) noise is added .the behaviour for low values of remains unchanged because these values are associated with high signal - to - noise values , so that these data are dominated by the signal .however , the behaviour for high is dominated by the noise ( low signal - to - noise eigenvalues ) , so that the behaviour will resemble that of a gaussian signal .the analysis with different noise levels is given in table [ tab4 ] ( noise corresponding to an integration time equal to hr ) and table [ tab5 ] ( noise corresponding to an integration time equal to hr ) .we have analysed 5000 simulations involving the string simulation plus a noise realization .the analysis is done for different cuts of the eigenvalues ( ) .the power to distinguish between a gaussian distribution and the strings plus noise , when the significance level is , is shown in the tables .

table [ tab4 ] : power ( in per cent , 5% / 1% significance levels ) of the four statistics to distinguish the string simulation plus noise from a gaussian distribution , for the single - field ( hr ) noise level , as a function of the eigenvalue cut .

cut    statistic 1    statistic 2    statistic 3    statistic 4
0.0    5.16/0.82      5.72/1.20      4.92/1.00      4.50/0.98
0.1    3.72/0.58      6.00/0.86      2.28/0.36      1.70/0.28
0.2    3.10/0.30      4.16/0.44      1.70/0.24      1.20/0.34
0.3    5.10/0.66      3.36/0.22      1.48/0.26      1.24/0.30
0.4    1.46/0.14      1.78/0.12      1.08/0.36      1.28/0.30
0.5    1.00/0.10      2.04/0.08      0.80/0.18      0.90/0.22

table [ tab5 ] : same as table [ tab4 ] , but for an integration time 25 times longer .

cut    statistic 1    statistic 2    statistic 3    statistic 4
0.0    5.30/0.98      24.26/8.44     4.20/0.86      4.28/1.04
0.1    6.56/0.84      79.56/49.32    1.58/0.12      1.30/0.22
0.2    7.18/0.92      86.72/59.60    1.08/0.08      1.24/0.34
0.3    2.94/0.26      87.62/60.44    0.80/0.04      0.70/0.10
0.4    3.74/0.10      90.74/64.16    0.48/0.02      0.66/0.08
0.5    3.20/0.12      92.90/69.58    0.48/0.08      0.38/0.08

the results in table [ tab4 ] indicate that the vsa noise of a single field observed during hrs is too high for detecting the strings with our method . when the integration time is multiplied by 25 ( table [ tab5 ] ) we start to detect the strings by means of the statistic .it is important to note that , even if the data were well decorrelated , the statistic would also be an indicator of non - gaussianity . for example , given realizations of normalized and independent data with kurtosis , the mean value of is equal to .another example of the use of to detect non - gaussianity can be found in , in the formalism of the multinormal analysis . as a test , we repeat the same procedure using a gaussian simulation instead of the string map and , as expected , we obtain a negligible power . nowadays , non - gaussian simulations constructed by the addition of a gaussian field and its square have acquired notable relevance because they could represent perturbations produced in several inflationary scenarios ( for a review on the subject , see and references therein ) .the parameter which measures the coupling with the non - linear part is denoted by , and an analysis of the power of our method to detect this parameter therefore seems very appropriate .realistic simulations in the flat - sky approximation would require the development of appropriate software ( see e.g. for simulations on the full sphere ) , and this is beyond the scope of the present work .however , we can estimate the power of our method in detecting a non - zero component using a simple approximation .we follow , and we generate what they call `` maps '' by assuming that the observed temperature contrast field on the sky ( ) can be written as where is the gaussian ( linear ) component . in reality , this approximation is only valid in the sachs - wolfe regime ( see e.g.
) , but we use it as a toy model in this exercise of studying the detection of the parameter , so the definition of the non - linear coupling parameter in our model gives .therefore , the coupling parameter used by and is related to our definition as , and the one used by is .we first consider the ideal case where there is no instrumental noise , and we have analysed different sets of simulations with values ( gaussian case ) , , and .we have run simulations for every case . only the case with has been detected , via the statistic .figure [ fig_chi2 ] shows the distribution function of the statistic for the gaussian case ( dashed boxes ) , compared with the corresponding one for the non - gaussian case with ( solid boxes ) . from here , we infer that the power of this statistic to discriminate between the non - gaussian simulations and the gaussian ones is about ( ) with a significance level of ( ) . analysing the same simulations but adding the noise level corresponding to an observation time of hr , we find that the power of our statistic is reduced roughly by a factor of 2 . constrained the parameter by using the bispectrum of the vsa extended array data ( note that this is exactly the same configuration as the one we have adopted to build our visibility template ) . with their definition of , they found an upper limit of 7000 with a confidence level of . in the sachs - wolfe regime , this would correspond to a value of 31500 according to our definition .however , this value cannot be directly compared with the one obtained in our work , because our model does not correspond to the realistic case which was considered in .( caption of figure [ fig_chi2 ] : distribution function of the statistic for the gaussian case ( dashed boxes ) and for the non - gaussian case with ( solid boxes ) ; the solid line is a function normalized to the number of simulations ( 10000 simulations ) . for the sake of clarity , the histogram for the gaussian case has been cut at the top . ) in this paper we have presented a method of searching for non - gaussianity in data from interferometric cmb observations , directly in the visibility space .this method can be adapted to other interferometric experiments ( e.g. cbi and dasi ) if we know the correlation matrices of the signal and of the instrumental noise .note that in this paper we have dealt with uncorrelated noise ( which is usually the case for these experiments ) , but the case of correlated noise can be studied in an analogous way .we have applied the method to work with the real and imaginary parts of the visibilities ; the method tests whether they are gaussian distributed . in addition , we have applied the method to the phases of the visibilities to test whether they are uniformly distributed , but we found that it is not very sensitive for detecting the kind of non - gaussianity we have analysed here .we have integrated the signal - to - noise formulation into the smooth goodness - of - fit tests . in this way we can deal only with the data dominated by the signal we want to analyse .
in the text it is noted that the correlation matrices of the signal - to - noise eigenmodes and of the signal are decomposed as the product of a matrix and its transpose ( subsection [ sec:002 ] and section [ sec : gauss ] ) .this decomposition is analogous to that of cholesky .this latter decomposition is computationally faster than the one used here ; however , the decomposition we use allows us to deal with better quality data , that is , with a higher signal - to - noise ratio .the analysis with the cholesky decomposition takes every eigenvalue ; that is , . it is important to stress that smooth goodness - of - fit tests do not require the data to be on a regular grid , but can be applied to any data set . in that sense the test is perfectly adapted to interferometric data because the coverage of the plane is neither regular nor complete .the method could therefore also be applied by selecting those visibilities in a certain range , thus allowing study of the non - gaussianity as a function of angular scale . in addition , although the method has been presented here as a tool to study the non - gaussian properties of the sky signal , it could also be used as a powerful diagnostic for detecting systematics in the data , as we pointed out in section 6.1 . for short integration periods , a stack of visibilities will be dominated by instrumental noise , so this method could be used to trace the presence of spurious signals in the data ( e.g. those coming from cross - talk between antennas ) , or to study the correlation properties of the noise .summarizing , to study the power of our method in detecting non - gaussian signals on the sky , we have analysed three kinds of simulations .first , we have analysed non - gaussian visibilities created by inserting some degree of skewness and kurtosis ( ) with the edgeworth expansion directly in fourier space . using the vsa as a reference experiment , we have shown the performance of the method in detecting those levels of non - gaussianity in the data with realistic values of the integration time for this experiment .all these results can be easily adapted to other instruments , just by rescaling the integration times according to the square of the ratio of the different sensitivities . in addition , we have also shown the performance of the test in the detection of the non - gaussian signal introduced by cosmic strings .even though those kinds of signals are usually detected using real - space statistics or wavelets , we have demonstrated that the signal - to - noise eigenmodes approach allows us to detect them because the method is very sensitive to the characterization of the covariance matrix . in this particular case , due to the non - gaussian nature of the strings , a complete decorrelation cannot be achieved and the statistics show huge deviations with respect to the gaussian case . for the case of the vsa , this translates into the fact that cosmic strings can hardly be detected in single - field observations ( integration times of hr ) , but they could be detected ( if present ) using this method , given the current sensitivity achieved by the whole data set published by the vsa team ( with a sensitivity of hr integration ) .finally , we have studied the power of the method to detect a non - zero component in a toy model based on maps .we would like to thank f. r. bouchet for kindly providing the string map , and r.
rebolo and the anonymous referee for useful comments .we also acknowledge terry mahoney for revising the english of the manuscript .jar - m acknowledges the hospitality of the ifca during two visits .ama acknowledges the iac for its hospitality during a visit .rbb acknowledges the mcyt and the uc for a ramn y cajal contract .we acknowledge partial financial support from the spanish mcyt project esp2002 - 04141-c03 - 01 .aghanim , n. , kunz , m. , castro , p. g. , & forni , o. 2003 , a&a , 406 , 797 aliaga a. m. , martnez - gonzlez e. , cayn l. , argeso f. , sanz j. l. , barreiro r. b. , 2003a , new astronomy reviews , 47 , 821 aliaga a. m. , martnez - gonzlez e. , cayn l. , argeso f. , sanz j. l. , barreiro r. b. , gallegos j. e. , 2003b , new astronomy reviews , 47 , 907 amendola , l. 2002 , apj , 569 , 595 balbi , a. et al .2000 , apj , 545 , l1 barreiro , r. b. & hobson , m. p. 2001 , mnras , 327 , 813 bartolo , n. , komatsu , e. , matarrese , s. , & riotto , a. 2004 , phys ., accepted ( astro - ph/0406398 ) bond , j. r. 1995 , phys . rev .lett . , 74 , 4369 bond , j. r. , jaffe , a. h. & knox , l. 2000 , apj , 533 , 19 bouchet , f. r. , bennett , d. p. & stebins , a. 1988 , nat , 335 , 410 cayn , l. , martnez - gonzlez , e. , argeso , f. , banday , a.j . &grski , k.m . , 2003a , mnras , 339 , 1189 cayn , l. , argeso , f. , martnez - gonzlez , e. & sanz j. l. 2003b , mnras , 344 , 917 chiang , l. , naselsky , p. d. , verkhodanov , o. v. & way , m. j. 2003 , apj , 590 , l65 cox d. r. & hinkley , d. v. 1974 , theoretical statistics ( london : chapman & hall ) cruz , m. , martnez - gonzlez , e. , vielva , p. , & cayn , l. 2004 , mnras , in press ( astro - ph/0405341 ) dickinson , c. et al .2004 , mnras , 353 , 732 eriksen , h. k. , hansen , f. k. , banday , a. j. , grski , k. m. , & lilje , p. b. 2004a , apj , 604 , 14 eriksen , h. k. , novikov , d. i. , lilje , p. b. , banday , a. j. , & grski , k. m. 2004b , apj , 612 , 64 grainge , k. et al .2003 , mnras , 341 , l23 guth , a. h. , 1981 , phys .d. , 23 , 347 hanany , s. et al .2000 , apj , 545 , l5 hobson , m. p. , jones , a. w. & lasenby , a. n. 1999 , mnras , 309 , 125 hobson , m. p. & maisinger , k. 2002 , mnras , 334 , 569 komatsu , e. & spergel , d. n. 2001 , phys.rev.d , 63 , 063002 komatsu , e. et al . 2003 , apjs , 148 , 119 lay , o. p. & halverson , n. w. 2000 , apj , 543 , 787 leitch , e. m. et al .2002 , apj , 568 , 28 liguori , m. , matarrese , m. & moscardini , l. 2003 , apj , 597 , 57 martnez - gonzlez , e. , gallegos , j. e. , argeso , f. , cayn , l. & sanz , j. l. 2002 , mnras , 336 , 22 melhuish , s. j. , dicker , s. , davies , r. d. , gutierrez , c. m. , watson , r. a. , davis , r. j. , hoyland , r. & rebolo , r. 1999 , mnras , 305 , 399 neyman , j. 1937 , skandinavisk aktuarietidskrift , 20 , 150 padin , s. et al .2001 , apj , 549 , l1 park , c. 2004 , mnras , 349 , 313 rayner , j. c. w. & best d. j. 1989 , smooth tests of goodness of fit ( new york : oxford university press ) rayner , j. c. w. & best , d. j. 1990 , international statistical rev . , 58 , 9 rocha , g. , magueijo , j. , hobson , m. & lasenby , a. , 2001 , phys .d. , 64 , 063512 savage , r. et al .2004 , mnras , 349 , 973 smith , s. et al .2004 , mnras , 352 , 887 taylor , a. c. et al .2003 , mnras , 341 , 1066 vielva , p. , martnez - gonzlez , e. , barreiro , r. b. , sanz , j. l. , & cayn , l. 2004 , apj , 609 , 22 watson , r. a. et al .2003 , mnras , 341 , 1057 white , m. , carlstrom , j. e. , dragovan , m. , & holzapfel w. l. 
1999 , apj , 514 , 12suppose a square real map of pixel coordinates and pixel data .these values are generated via the edgeworth expansion such that they are independent , normalized and have skewness .the map has pixels on a side and the area of the pixel is .we now perform its fourier transform : whose bispectrum is taking into account that are independent and we have where is the kronecker delta. the fourier modes could be normalized to the desired spectrum : such that .finally , the ( normalized ) bispectrum is given by note that the gaussianity associated to the third moment in the fourier space is reduced by a factor compared to that in real space .when we introduce some degree of non - gaussianity ( skewness or kurtosis ) by means of the edgeworth expansion in the visibilities , it is clear that each eigenmode will show a different degree of non - gaussianity which will depend on the associated signal - to - noise eigenvalue .this relation can be obtained as follows . as it is explained in the main text, we generate a realization of independent values with zero mean , unit variance , and skewness and kurtosis and , respectively . as in subsection[ sec : gauss ] , the real parts of the visibilities are . the imaginary parts are generated in an analogous way .after that , we add the corresponding different noise levels and decorrelate the data by using the transformation given by equation ( [ eq:003 ] ) .then , bearing in mind the definition of the quantities , it can be easily demonstrated that for the final decorrelated and normalized data we have as one would expect , the skewness and the kurtosis are lost if the ( gaussian ) noise dominates the data ( ) .in contrast , the skewness and the kurtosis are preserved for the case . in the previous expressions ,the angled brackets ( ) denote an average over realizations .
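the recipe of this appendix can be prototyped in a few lines : independent pixel values with prescribed skewness and kurtosis are drawn from a ( clipped and renormalized ) edgeworth - expanded density , fourier transformed , and the modes are rescaled to a target power spectrum . the sketch below is illustrative and not the authors ' code ; the function cl_of_k supplying the spectrum is a placeholder to be provided by the user , and inverse - transform sampling on a grid is only one simple way of drawing from the edgeworth density .

```python
import numpy as np

def edgeworth_samples(n, skew=0.0, kurt=0.0, seed=0):
    """Draw n values from the Edgeworth-expanded density
    phi(x) * [1 + skew/6 He3(x) + kurt/24 He4(x)], clipped at zero and
    renormalized, via inverse-transform sampling on a fine grid."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-8.0, 8.0, 20001)
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    he3 = x ** 3 - 3.0 * x
    he4 = x ** 4 - 6.0 * x ** 2 + 3.0
    pdf = np.clip(phi * (1.0 + skew / 6.0 * he3 + kurt / 24.0 * he4), 0.0, None)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.uniform(size=n), cdf, x)

def nongaussian_modes(npix, cl_of_k, skew=1.0, seed=0):
    """Independent skewed pixels -> Fourier transform -> rescale each mode
    to the target spectrum cl_of_k(k), as described in the appendix."""
    a = edgeworth_samples(npix * npix, skew=skew, seed=seed).reshape(npix, npix)
    fa = np.fft.fft2(a) / npix             # approximately unit-variance modes
    kx = np.fft.fftfreq(npix)[:, None]
    ky = np.fft.fftfreq(npix)[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2)
    return fa * np.sqrt(cl_of_k(k))        # impose the desired power spectrum
```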
|
we adapt the smooth tests of goodness of fit developed by to the study of the non - gaussianity of interferometric observations of the cosmic microwave background ( cmb ) . the interferometric measurements ( visibilities ) are transformed into signal - to - noise eigenmodes , and then the method is applied directly in fourier space . this transformation allows us to perform the analysis in different subsets of eigenmodes according to their signal - to - noise level . the method can also deal with non - uniform or incomplete coverage of the plane . we explore here two possibilities : we analyse either the real and imaginary parts of the complex visibilities ( gaussianly distributed under the gaussianity hypothesis ) or their phases ( uniformly distributed under the gaussianity hypothesis ) . the power of the method in discriminating between gaussian and non - gaussian distributions is studied by using several kinds of non - gaussian simulations . on the one hand , we introduce a certain degree of non - gaussianity directly into the fourier space using the edgeworth expansion , and afterwards the desired correlation is introduced . on the other hand , we consider interferometric observations of a map with topological defects ( cosmic strings ) . to these previous non - gaussian simulations we add different noise levels and quantify the required signal - to - noise ratio necessary to achieve a detection of these non - gaussian features . finally , we have also studied the ability of the method to constrain the so - called nonlinear coupling constant using simulations . the whole method is illustrated here by application to simulated data from the very small array interferometer . [ firstpage ] methods : data analysis methods : statistical cosmic microwave radiation
|
the approximation of optimal control problems for evolutionary partial differential equations of parabolic and hyperbolic type is a very challenging topic with a strong impact on industrial applications .although there is a large number of papers dealing with several aspects of control problems from controllability to optimal control , the literature dealing with the numerical approximation of such huge problems is rather limited .it is worth to note that when dealing with optimal control problems for parabolic equations we can exploit the regularity of the solutions , regularity which is lacking for many hyperbolic equations .we also recall that the main tools is still given by the pontryagin maximum principle .this is mainly due to the fact that the discretization of partial differential equations already involves a large number of variables so that the resulting finite dimensional optimization problem easily reaches the limits of what one can really compute .the forward - backward system which describes pontryagin s optimality condition is certainly below that limit .however just solving that system one is using necessary conditions for optimality so , in principle , there is no guarantee that these are optimal controls . by this approach for general nonlinear control problems we can obtain just open - loop control .one notable exception is the linear quadratic regulator problem for which we have a closed - loop solution given by the riccati equation .this explains why the most popular example for the control of evolutive partial differential equations is the control of the heat equation subject to a quadratic cost functional .+ in recent years , new tools have been developed to deal with optimal control problems in infinite dimension .in particular , new techniques emerged to reduce the number of dimensions in the description of the dynamical system or , more in general , of the solution of the problem that one is trying to optimize .these methods are generally called _ reduced - order methods _ and include for example the pod ( proper orthogonal decomposition ) method and reduced basis approximation ( see ) . the general idea for all this method is that , when the solution are sufficiently regular , one can represent them via galerkin expansion so that the number of variables involved in this discretization will be strongly reduced . in some particular case , as for the heat equation , even 5 basis functions will suffice to have a rather accurate pod representation of the solution . having this in mind , it is reasonable to start thinking to a different approach based on dynamic programming ( dp ) and hamilton - jacobi - bellman equations ( hjb ) . in this new approachwe will first develop a reduced basis representation of the solution along a reference trajectory and then use this basis to set - up a control problem in the new space of coordinates .the corresponding hamilton - jacobi equation will just need 3 - 5 variables to represent the state of the system .moreover , by this method one can obtain optimal control in feedback form looking at the gradient of the value function .+ however , the solution of hjb equation it is not an easy task from the numerical point of view : the analytical solution of the hjb equation are non regular ( typically , just lipschitz continuous ) .optimal control problems for odes were solved by dynamic programming , both analytically and numerically ( see for a general presentation of this theory ) . 
from the numerical point of view , this approach has been developed for many classical control problems , obtaining convergence results and a priori error estimates ( , and the book ) . although this approach suffers from the curse of dimensionality , some algorithms in high dimension are now available ( and ) , and the coupling with pod representation techniques will allow this technique to attack optimal control problems in infinite dimension . + to set this paper into perspective we must say that a first attempt in this direction has been made by kunisch and co - authors in a series of papers for diffusion dominated equations . in particular , in the paper by kunisch , volkwein and xie one can see a feedback control approach based on the coupling between pod basis approximation and hjb equations for the viscous burgers equation . our contribution here is twofold . the first novelty is that we deal with advection - diffusion equations . the solutions to these equations exhibit low regularity with respect to non degenerate diffusion equations , so that a rather large number of pod basis functions would be required to obtain a good approximation if we wanted to compute the pod basis just once . naturally , this increases the number of variables in the hjb approach and constitutes a real bottleneck . in order to apply the dynamic programming approach to this problem we have developed an adaptive technique which allows us to recompute the pod basis on different sub - intervals , in order to always have accurate results without increasing the number of basis functions . the second contribution of this paper is the way the sub - intervals are determined . in fact , we do not use a simple uniform subdivision , but rather decide to recompute the pod basis when an error indicator ( detailed in section 4 ) is beyond a given threshold . as we will show in the sequel , this procedure seems to be rather efficient and accurate for dealing with these large scale problems . we briefly describe some important features of the pod approximation ; more details as well as precise results can be found in the notes by volkwein . let us consider a matrix with rank . we will call the column of the matrix . we are looking for an orthonormal basis with such that the minimum of the following functional is reached : the solution of this minimization problem is given in theorem [ the_pod ] , for matrices in ${\mathbb{r}}^{m\times n}$ and ${\mathbb{r}}^{n\times n}$ ; the right - hand side of the evolution problem ( [ pode ] ) is continuous and locally lipschitz to ensure uniqueness . + the system ( [ pode ] ) can also be interpreted as a semidiscrete problem , where the matrix represents the discretization in space of an elliptic operator , say the laplacian for instance . to compute the pod basis functions , first of all we have to construct a time grid and we suppose we know the solution of ( [ pode ] ) at given times , . we call _ snapshots _ the solution at those fixed times .
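in practice the pod basis of a snapshot set is obtained from a singular value decomposition of the snapshot matrix ; a minimal sketch ( with illustrative names , not taken from the paper ) is given below , where the fraction of retained singular values is the quantity later used as the accuracy indicator , and the galerkin projection of a semidiscrete linear operator is included for completeness .

```python
import numpy as np

def pod_basis(Y, ell):
    """POD basis of rank ell from the snapshot matrix Y (columns = snapshots)."""
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    Psi = U[:, :ell]                                   # orthonormal POD basis
    energy = np.sum(s[:ell] ** 2) / np.sum(s ** 2)     # accuracy indicator
    return Psi, energy

def reduced_operator(A, Psi):
    """Galerkin projection of a (semidiscrete) linear operator A."""
    return Psi.T @ A @ Psi

def reduced_coefficients(y, Psi):
    """Coefficients of y in the POD basis, so that y is approximated by Psi @ c."""
    return Psi.T @ y
```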
for the moment we will not deal with the problem of selecting the snapshots sequence which is a difficult problem in itself ,we refer the interested readers to ) .as soon as we get the snapshots sequence , by theorem [ the_pod ] , we will be able to compute our pod basis , namely , .+ let us suppose we can write the solution in reduced form as \ ] ] substituting this formula into ( [ pode ] ) we obtain the reduced dynamics \\\\ \sum\limits_{j=1}^\ell y_j^\ell(0)\psi_j = y_0 .\end{array}\right.\ ] ] we note that our new problem ( [ p22 ] ) is a problem for the coefficient functions thus , the problem is low dimensional and with compact notation we get : where \rightarrow{\mathbb{r}}^\ell\ ] ] \times{\mathbb{r}}^\ell\rightarrow{\mathbb{r}}^\ell, ] its the solution , by the control \rightarrow{\mathbb{r}}^m ] and by \rightarrow u \}\ ] ] the set of admissible controls where is a compact set . whenever we want to emphasize the depence of the solution from the control we will write .assume that there exists a unique solution trajectory for provided the controls are measurable ( a precise statement can be found in ) .for the finite horizon optimal control problem the cost functional will be given by where is the running cost and is the discount factor .+ the goal is to find a state - feedback control law in terms of the state equation where is the feedback map .to derive optimality conditions we use the well - known _ dynamic programming principle _ due to bellman ( see ) .we first define the value function : for all then : due to ( [ dpp ] ) we can derive the _hamilton - jacobi - bellman _ equations ( hjb ) : this is nonlinear partial differential equation of the first order which is hard to solve analitically although a general theory of weak solutions is available .rather we can solve it numerically by means of a finite differences or semi - lagrangian schemes ( see the book for a comprehensive analysis of approximation schemes for hamilton - jacobi equations ) . for a semi - lagrangian discretization onestarts by a discrete version of ( hjb ) by discretizing the underlined control problem and then project the semi - discrete scheme on a grid obtaining the fully discrete scheme (x_i+\delta t\ , f(x_i , t_n , u))]\\\\ v_i^0=g(x_i ) .\end{array}\right.\ ] ] with and ] with a discrete step we have nodes : to solve an hjb in that dimension is simply impossible .fortunatelly , the pod method allows us to obtain reduced models even for complex dynamics .let us focus on the following abstract problem : where is a linear and continuous operator .we assume that a space of admissible controls is given in such a way that for each and there exists a unique solution of ( [ pabs ] ) . and are two hilbert spaces , with we denote the scalar product in is symmetric coercive and bilinear .then , we introduce the cost functional of the finite horizon problem where \rightarrow { \mathbb{r}}. 
] with how should we compute these intervals ] we consider the trajectories solution such that the control is constant for each , then , we have we write to stress the dependence on the constant control each trajectory has some coefficients for the coefficients will belong to intervals of the type ] we use a finite difference approximation for this equation based on an explicit euler method in time combined with the standard centered approximation of the second order term and with an up - wind correction for the advection term .the snapshots will be taken from the sequence generated by the finite difference method .the final time is , moreover , .the initial condition is when , 0 otherwise .+ for and with only 3 pod basis functions , the approximation fails ( see figure [ test10 ] ) . note that in this case the advection is dominating the diffusion, a low number of pod basis functions will not suffice to get an accurate approximation ( figure 1.b ) .however , the adaptive method which only uses 3 pod basis functions will give accurate results ( figure 1.d ) .+ ) : ( a ) solved with finite difference ; ( b ) pod - galerkin approximation with 3 pod basi ; ( c ) solved via pod - galerkin approximation with 5 pod basis ; ( d ) adapting 3 pod basis functions.,title="fig : " ] ) : ( a ) solved with finite difference ; ( b ) pod - galerkin approximation with 3 pod basi ; ( c ) solved via pod - galerkin approximation with 5 pod basis ; ( d ) adapting 3 pod basis functions.,title="fig : " ] ( a)(b ) ) : ( a ) solved with finite difference ; ( b ) pod - galerkin approximation with 3 pod basi ; ( c ) solved via pod - galerkin approximation with 5 pod basis ; ( d ) adapting 3 pod basis functions.,title="fig : " ] ) : ( a ) solved with finite difference ; ( b ) pod - galerkin approximation with 3 pod basi ; ( c ) solved via pod - galerkin approximation with 5 pod basis ; ( d ) adapting 3 pod basis functions.,title="fig : " ] ( c)(d ) the idea which is behind the adaptive method is the following : we do not consider all the snapshots together in the whole interval ] we prefer to split it in sub - intervals =\cup_{k=0}^{k}[t_k , t_{k+1}]\ ] ] where is a - priori unknown , and for some in this way , choosing properly the length of the interval , ] .we will choose a parameter to check the accuracy of the pod approximation and define a threshold .above that threshold we loose in accuracy and we need to compute a new pod basis . a good parameter to check the accuracy is ( see ( [ ind : ratio ] ) ) , as it was suggested by several authors . the method to define the splitting of ] .now we just consider the snapshots related the solution up to the time .then we iterate this idea until the indicator is below the threshold .when the first interval is found , we restart the procedure in the interval ] according to + _ for _ i=0 to n-1 + _ do _ + apply svd to get the pod basis in each sub - interval ] + go back to the original coordinate space + _ end _in this section we present some numerical tests for the controlled heat equation and for the advection - diffusion equation with a quadratic cost functional .consider the following advection - diffusion equation : with ] , and that changing the parameters and we can obtain the heat equation ( ) and the advection equation ( ) .the functional to be minimized is i.e. 
we want to stay close to a reference trajectory while minimizing the norm of . note that we dropped the discount factor , setting . typically in our test problems is obtained by applying a particular control to the dynamics . the numerical simulations reported in this paper have been made on a server supermicro 8045c-3rb with 2 cpu intel xeon quad - core 2.4 ghz and 32 gb ram under slurm ( https://computing.llnl.gov/linux/slurm/ ) . [ [ test-1-heat - equation - with - smooth - initial - data ] ] test 1 : heat equation with smooth initial data . we compute the snapshots with a centered / forward euler scheme with space step , and time step , and . the initial condition is , and in figure [ test ] we compare four different approximations concerning the heat equation : ( a ) is the solution for , ( b ) is its approximation via pod ( non adaptive ) , ( c ) is the direct lqr solution computed by matlab without pod and , finally , ( d ) is the approximate optimal solution obtained coupling pod and hjb . the approximate value function is computed for , whereas the optimal trajectory has been obtained with . test 1 , and even test 2 , have been solved in about half an hour of cpu time . + note that in this example the approximate solution is rather accurate because the regularity of the solution is high due to the diffusion term . since in the limit the solution tends to the average value , the choice of the snapshots does not affect the solution too much , i.e. even a rough choice of the snapshots will give us a good approximation . the difference between figure 2c and figure 2d is due to the fact that the control space is continuous for 2c and discrete for 2d . [ [ test-2-heat - equation - with - non - smooth - initial - data ] ] test 2 : heat equation with non - smooth initial data . in this section we change the initial condition to a function which is only lipschitz continuous : according to test 1 , we consider the same parameters ( see figure [ testass ] ) . [ figure : ( a ) exact solution ; ( b ) exact solution for pod ( 3 basis functions ) ; ( c ) approximate optimal solution for lqr - matlab ; ( d ) approximate solution pod ( 3 basis functions ) + hjb . ] riccati s equation has been solved by a matlab lqr routine . thus , we have used the solution given by this routine as the correct solution in order to compare the errors in and norm between the reduced riccati s equation and our approach based on the reduced hjb equation . since we do not have any information , the snapshots are computed for . this is only a guess , but in the parabolic case it fits well due to the diffusion term . + [ table 1 . test 2 : and errors at time for the optimal approximate solution . ]
as in test 1 , the choice of the snapshots does not strongly affect the approximation , due to the asymptotic behavior of the solution . the presence of a lipschitz continuous initial condition has almost no influence on the global error ( see table 1 ) . [ [ test-3-advection - diffusion - equation ] ] test 3 : advection - diffusion equation . the advection - diffusion equation needs a different method . we can not use the same we had in the parabolic case , mainly because in riccati s equation the control is free and is not bounded ; on the contrary , when we solve an hjb equation we have to discretize the space of controls . we modified the problem in order to deal with bang - bang controls . we get in ( [ lqr_oss ] ) just by plugging in the control . we have considered the control space corresponding to only three values in . the interval $[0,3]$ was divided into $[0 , 0.744]\cup[0.744 , 1.496]\cup[1.496 , 3]$ . the non - adaptive approximation computed on the whole interval gives a very poor result due to the presence of the advection term . looking at figure 5 one can see the improvement of our adaptive technique , which always takes 4 basis functions in each sub - interval . + in order to check the quality of our approximation we have computed the numerical residual , defined as : the residual for the solution of the control problem computed without our adaptive technique is 1.1 , whereas the residual for the adaptive method is . as expected from the pictures , there is a big difference between these two values . [ [ test-4-advection - diffusion - equation ] ] test 4 : advection - diffusion equation . in this test we take a different , namely the solution of ( [ eq : test_cont ] ) corresponding to the control . we want to emphasize that we can obtain nice results when the space of controls has few elements . the parameters were the same as those used in test 3 . the is 0.09 , and the time was the same as in test 3 . in figure [ dt_ad4 ] we can see our approximation . [ figure : ( left ) , approximate optimal solution ( right ) . ] in figure 6 one can see that the adaptive technique can also deal with discontinuous controls . + in this test , the residual for the solution of the control problem without our adaptive technique is 2 , whereas the residual for the adaptive method is . again , the residual shows the higher accuracy of the adaptive routine .
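the adaptive splitting of the time interval described in section 4 can be prototyped as follows . this is an illustrative numpy sketch , not the code used for the tests above : it shrinks the current sub - interval one snapshot at a time until the energy indicator of the retained singular values exceeds the threshold , whereas the procedure described in the text halves the interval , but the logic is the same .

```python
import numpy as np

def adaptive_pod_splitting(snapshots, times, ell=3, threshold=0.999):
    """Split [times[0], times[-1]] into sub-intervals on which `ell` POD
    modes retain at least `threshold` of the snapshot energy.
    snapshots: m x n matrix whose column j is the solution at times[j]."""
    intervals, bases = [], []
    start, n = 0, snapshots.shape[1]
    while start < n - 1:
        end = n
        while end - start > ell + 1:
            _, s, _ = np.linalg.svd(snapshots[:, start:end], full_matrices=False)
            if np.sum(s[:ell] ** 2) / np.sum(s ** 2) >= threshold:
                break
            end -= 1                      # shrink until the indicator is met
        U, _, _ = np.linalg.svd(snapshots[:, start:end], full_matrices=False)
        intervals.append((times[start], times[end - 1]))
        bases.append(U[:, :ell])
        start = end - 1                   # restart from the right endpoint
    return intervals, bases
```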
in general , we will need also an approximation of the control space via reduced basis methods .this point as well as a more detailed analysis of the procedure outlined in this paper will be addressed in our future work . 1 m. bardi , i. capuzzo dolcetta ._ optimal control and viscosity solutions of hamilton - jacobi - bellman equations_. birkhauser , basel , 1997 .s. cacace , e. cristiani , m. falcone , a. picarelli . _ a patchy dynamic programming scheme for a class of hamilton - jacobi - bellman equations _ , preprint , 2011 to appear on siam j. scie. carlini , m. falcone , r. ferretti ._ an efficient algorithm for hamilton - jacobi equations in high dimension_. computing and visualization in science , vol.7 , no.1 ( 2004 ) pp .m. falcone ._ numerical solution of dynamic programming equations _, appendix of the book m. bardi , i. capuzzo dolcetta , optimal control and viscosity solutions of hamilton - jacobi - bellman equations , birkhser , boston , 1997 , 471 - 504 . m. falcone , r. ferretti . semi - lagrangian approximation schemes for linear and hamilton - jacobi equations , siam , to appear m. falcone , t. giorgi . _ an approximation scheme for evolutive hamilton - jacobi equations _mceneaney , g. yin and q. zhang ( eds . ) , `` stochastic analysis , control , optimization and applications : a volume in honor of w.h .fleming '' , birkhuser , 1999 , 289 - 303 .k. kunisch , s. volkwein ._ control of burgers equation by a reduced order approach using proper orthogonal decomposition_. journal of optimization theory and applications , 102 ( 1999 ) , 345- 371 .k. kunisch , s. volkwein ._ galerkin proper orthogonal decomposition methods for parabolic problems _ numer .( 2001 ) , 117 - 148 .k. kunisch , s. volkwein ._ optimal snapshot location for computing pod basis functions _ esaim : m2an 44 ( 2010 ) , 509 - 529 .k. kunisch , s. volkwein , l. xie ._ hjb - pod based feedback design for the optimal control of evolution problems_. siam j. on applied dynamical systems , 4 ( 2004 ) , 701 - 722 .k. kunisch , l. xie . _ pod - based feedback control of burgers equation by solving the evolutionary hjb equation _ , computers and mathematics with applications .* 49 * ( 2005 ) , 1113 - 1126 .a. t. patera , g. rozza ._ reduced basis approximation and a posteriori error estimation for paramtrized partial differential equations . _ mit pappalardo graduate monographs in mechanical engineering , 2006. m. l. rapun j.m ._ reduced order models based on local pod plus galerkin projection ._ j. comput .phys . , 229 ( 2010 ) , pp 3046 - 3063 . s. volkwein , _ model reduction using proper orthogonal decomposition _ , 2011 www.math.uni-konstanz.de/numerik/personen/volkwein/index.php f. trltzsch ._ optimal control of partial differential equations : theory , methods and applications _ , ams 2010
|
we present an algorithm for the approximation of a finite horizon optimal control problem for advection - diffusion equations . the method is based on the coupling between an adaptive pod representation of the solution and a dynamic programming approximation scheme for the corresponding evolutive hamilton - jacobi equation . we discuss several features regarding the adaptivity of the method , the role of error estimate indicators to choose a time subdivision of the problem and the computation of the basis functions . some test problems are presented to illustrate the method .
|
in this paper we describe the first fully -dimensional application of regge calculus to general relativity .we develop an initial - value prescription based on the standard york formalism , and implement a 4-stage parallel evolution algorithm .we benchmark these on the kasner cosmological model .we present three findings .first , that the regge solution exhibits second - order convergence of the physical variables to the continuum kasner solution .secondly , regge calculus appears to have a complete diffeomorphic structure , in that we are free to specify three shift and one lapse condition per vertex .furthermore , the four corresponding constraint equations are conserved , to within a controllable tolerance , throughout the evolution . finally , the recently - developed decoupled parallel evolution scheme ( the `` sorkin evolution scheme '' ) yields stable evolution .although we have taken just the first few steps in developing a numerical regge calculus programme , every indication ( both numerical and analytic ) suggests that it will be a valuable tool in the study of gravity .our numerical studies , together with analytic results should put to rest some of the recent concerns about regge calculus it does appear to be a viable approximation to general relativity .einstein described gravitation through the curvature of a pseudo - riemannian manifold .regge calculus , on the other hand , describes gravity through the curvature of a piecewise flat simplicial pseudo - riemannian manifold .the fundamental platform for regge calculus is a lattice spacetime , wherein each lattice cell is a simplex endowed with a flat minkowski geometry .the physical and geometric basis of regge calculus distinguishes it from all other discretizations of general relativity .one applies the principles of einstein s theory directly to the simplicial geometry in order to form the curvature , action and field equations .this is in stark contrast to the finite difference approach , where one starts with a representation of the continuum field equations and proceeds to discretize them over a grid of points .the goal of our work is to evaluate the relative strengths and weaknesses of regge calculus .the geometrically transparent nature of regge calculus should be useful in the interpretation of simulations .one example of this is the kirchoff - like form of the contracted bianchi identity in regge calculus . all edges in the lattice geometry , in a strict sense , carry a flow of energy - momentum , and the sum of these flows at each vertex is zero .this single example illustrates that every term in every regge equation has a clear geometric interpretation .we choose to benchmark our code on the kasner cosmology because it has a well defined solution with symmetries .more importantly , however , the kasner model is a prototype of the belinsky - khalatnikov - lifschitz mixmaster oscillation generic to all crunch cosmologies . 
unlike all previous regge simulations , we impose no symmetries on the model it is free to exhibit all dynamical degrees of freedom .the symmetries of the kasner cosmology are encoded in the freely specifiable portion of the initial data , and the full regge initial - value problem is then solved .the solution obtained displays encouraging agreement with the continuum kasner solution , and this agreement is found to be remarkably well preserved during evolution .the code used in these calculations is applicable to spacetimes , as well as the topology presented here .there are two major areas of active research in numerical relativity today. the first effort deals with gravitational wave generation by astrophysical processes .the laser interferometric gravity wave observatory , which should be operational within a few years , will require a set of gravity wave templates from colliding and coalescing black holes .the second area of research deals with the structure of cosmological singularities in inhomogeneous cosmologies .the gravity wave problem is dominated in many ways by boundary conditions , whilst the cosmological singularity problems have no boundary .we have chosen to focus our initial application of regge calculus toward issues in inhomogeneous cosmology . in this paperwe will develop the simplicial lattice used in the calculations ( section [ sec : kinematics ] ) , and then briefly describe the geometric and dynamical structure of regge calculus ( section [ sec : dynamics ] ) .these ideas will be applied to motivate a york - style two - surface initial - value prescription for regge calculus ( section [ sec : ivp ] ) , which is then specialized to the kasner cosmology ( section [ sec : kasner ] ) . with initial data in hand ,we proceed to the time evolution problem in section [ sec : evolution ] , and apply a four - step implementation of the sorkin evolution scheme .we examine the convergence of the regge solution to the continuum , and confirm the existence of simplicial gauge freedoms by demonstrating that the constraint equations are satisfactorily preserved during evolution .finally , in section [ sec : conclusion ] , we discuss our future plans for numerical regge calculus .the first issue one must address when beginning a regge calculus simulation is the choice of lattice structure . how do we choose to represent a spacetime geometry with a lattice geometry ? we are aided in our decision by the following four guiding principles * we want the simplicial spacetime foliated into 3-dimensional tetrahedral spacelike hypersurfaces .although these hypersurfaces will be geometrically distinct , their lattice structure should be identical .the spacetime sandwiched between any two of these surfaces should be decomposed into simplexes .* each spacelike hypersurface should have the topology of a 3-torus ( ) . *the simplicial structure sandwiched between two adjacent hypersurfaces must be consistent with the recently proposed sorkin evolution scheme .* the lattice connectivity , or local topology , of the 3-geometries should be maximally homogeneous .in other words , the lattice structure at one vertex should be identical to all others .these four guiding principles have led us to the choice of lattice described below .it is in this sense that we have introduced a split of spacetime in regge calculus .the fundamental geometric element in regge calculus is the four - simplex , consisting of five vertices , ten edges , ten triangular hinges , and five tetrahedra . 
in this sectionwe build the four dimensional simplicial lattice by constructing a simplicial three geometry , and then carry the triangulation of forward in time , to obtain a simplicial manifold with topology .the best tetrahedral subdivision of that we know is based on isosceles tetrahedra , and will be referred to as the quantity production lattice ( qpl ) . a major advantage of this lattice is that it is easily refined whilst maintaining local homogeneity , in the sense that the connectivity at each vertex is the same .the qpl consists of isosceles tetrahedra only in flat euclidean space , with slight variations in the edge lengths introducing curvature about edges in the three - lattice .the qpl may be constructed from a single cube .begin by identifying opposing faces , thus fixing the global topology .subdivide the cube into smaller cubes , until the desired resolution is obtained , and then introduce a new vertex at the centre of each small cube . joining these centred vertices together yields a new cubic lattice which pierces the faces of the original . finally , join the centred vertices to each of the eight vertices of the original cube in which they reside , creating four tetrahedra through each face of the original cubic lattice .this completes the construction of the qpl .figure [ fig : qpl ] displays some of the local structure of the resulting lattice .fourteen edges emanate from each and every vertex in the qpl .six legs lie along cube edges , and 8 diagonal edges join the two cubic lattices together .these two types of edges will be refered to as cube - aligned and diagonal edges , respectively .the qpl , as outlined here , is isomorphic to the right - tetrahedral lattice , which can be formed by adding face and body diagonals to a cubic grid . despite this, we have found that assigning edge lengths using the right tetrahedral approach leads to minor numerical difficulties , due to degeneracy in the first derivatives of the regge equations .this has perhaps impeded previous work with formulations of regge calculus .we now have in hand the simplicial three geometry , and turn to the construction of the four geometry in which this initial slice is to be embedded .the four geometry is constructed by dragging vertices forward , one by one , until the whole initial surface has been replicated , and the intervening region filled with four - simplices .we refer to the resulting lattice as a sorkin triangulation . another useful approach to the construction of the four geometry is the `` vertex data structure '' , whereby each vertex in the lattice has , in so far as is possible , an identical local structure . in general ,the sorkin approach to the evolution problem is inconsistent with such a lattice .the major advantage of sorkin triangulations over a four geometry built using a vertex data structure is that it allows the use of an order ( number of vertices in each three - geometry ) , parallel , four - step evolution algorithm .that is , one quarter of the vertices in the three - geometry may be evolved together , in parallel .we construct the sorkin triangulation as follows .identify four classes of vertices in the qpl , two types on the original cubic structure , and the remaining two on the cube - centred lattice .the vertex types are defined as * half the vertices on the original cubic lattice , sharing no common edges .* the remaining vertices on the original cubic lattice . 
* half the vertices on the cube centred lattice , not sharing common edges .* remaining vertices on the cube centred lattice .these identifications are indicated in figure [ fig : qpl ] .we now construct the four geometry from the simplicial three geometry .= 3truein begin by selecting a single a - type vertex , say , and drag it forward in time , to create a new vertex . in the process we introduce 14 brace edges over the 14 legs emanating from in the spacelike hypersurface ( ) , together with a vertical ( timelike ) leg joining to .this timelike edge is taken small enough to ensure that the brace edges are spacelike .each brace edge joins the newly evolved vertex to the corresponding vertices in the previous slice which share an edge with .figure [ fig:1 + 1 ] shows the situation in dimensions . from the figure it is clear that `` evolving '' an a - type vertex creates a 2-simplex ( triangle ) over each 1-simplex ( edge ) meeting at . in dimensions ,the procedure creates a 4-simplex above each tetrahedron ( 3-simplex ) which shares the vertex .having identified the 4-simplices in this way , we may tabulate the new tetrahedra and triangles which they contain .this process can be repeated on all a - type vertices simultaneously , since none of these vertices share a common edge .now consider a single b - type vertex . dragging it forwardyields 8 brace legs between slices , one over each of the 8 diagonal legs in .the original vertex is also joined to 6 a - type vertices which have already been dragged forward , so joining the new vertex to these creates 6 spatial legs on . a vertical edge joining the new vertex to its counterpart on the original sliceis also created .all b - type vertices can be dragged forward in parallel , and this process creates a set of 4-simplices in a manner similar to the a - type evolution step . next the cube centred c - type vertices .each of these creates 6 brace legs between slices , since each c - type vertex in is connected to 6 d - type vertices , which have yet to be carried forward .also created are 8 spatial diagonal legs on , joining the c - type to surrounding a- and b - type vertices , together with a single timelike edge . finally , the d - type vertices are dragged forward , creating 14 new spatial legs on , together with a single timelike leg . in this way the initial surface is carried forward in time , constructing an identical three geometry on , and filling the intervening region with four - simplices .the resulting regge spacetime is consistent with our four guiding principles .in section [ sec : kinematics ] we developed the kinematic structure of the simplicial spacetime .we now apply the regge form of einstein s field equations to the lattice .regge constructed the simplicial form of the hilbert action , and obtained what have become known as the regge equations via a variational principle .the independent variables in this approach are the lattice legs .each edge in the lattice has associated with it a simplicial equivalent of the einstein equations , which take the form where the summation is over all triangles which hinge on the leg .the defect about is the angle that a vector rotates when parallel transported around the hinge , is the area of the hinge , and is the 3-volume of the voronoi cell dual to edge . 
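to make the four - step parallel structure concrete , the sketch below classifies the vertices of a periodic quantity production lattice into the a , b , c and d types described above and lists the 14 neighbours of a vertex ( 6 cube - aligned , 8 diagonal ) . it is an illustrative python construction written for this description rather than the code used in the paper ; the number of cubes per side must be even for the checkerboard colouring to close periodically , and the final check verifies that no two vertices of the same type share an edge , which is what allows all vertices of one type to be evolved in parallel .

```python
from itertools import product

def classify_vertices(n):
    """A/B checkerboard the cubic sublattice, C/D the cube-centred one.
    n = cubes per side (even, so the periodic colouring is consistent)."""
    classes = {}
    for i, j, k in product(range(n), repeat=3):
        classes[(i, j, k, "cube")] = "A" if (i + j + k) % 2 == 0 else "B"
        classes[(i, j, k, "centre")] = "C" if (i + j + k) % 2 == 0 else "D"
    return classes

def neighbours(v, n):
    """The 14 edges per vertex: 6 cube-aligned within a sublattice and
    8 diagonals joining it to the surrounding vertices of the other one."""
    i, j, k, kind = v
    nbs = []
    for axis in range(3):
        for step in (-1, 1):
            c = [i, j, k]
            c[axis] = (c[axis] + step) % n
            nbs.append((c[0], c[1], c[2], kind))
    # centre (i,j,k) sits at (i+1/2, j+1/2, k+1/2): its diagonal partners are
    # the 8 cube corners with offsets {0,+1}; for a cube vertex they are the
    # centres with offsets {-1,0}.
    offsets = (0, 1) if kind == "centre" else (-1, 0)
    other = "cube" if kind == "centre" else "centre"
    for di, dj, dk in product(offsets, repeat=3):
        nbs.append(((i + di) % n, (j + dj) % n, (k + dk) % n, other))
    return nbs

if __name__ == "__main__":
    n = 4   # 4 cubes per side gives 64 + 64 = 128 vertices, as quoted in the text
    classes = classify_vertices(n)
    for v, c in classes.items():
        assert all(classes[w] != c for w in neighbours(v, n))
    print("no two vertices of the same type share an edge")
```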
equation [ eqn : regge ] is the regge calculus analogue of .when applied to our lattice , there are three types of regge equation , depending on the type of edge in the lattice it is associated with .the equations associated with edges in a spacelike hypersurface may be viewed as `` evolution''-type equations , since they couple information on that surface to the edges lying above and below .this is the regge equivalent of the second order evolution equations in the adm split .equations associated with brace edges , those lying between successive spacelike hypersurfaces , only involve quantities on and between the two slices . in this sensethey are `` constraint '' or first order equations .similarly , the `` timelike regge equations '' obtained by varying the vertical edges between two spacelike hypersurfaces are first order equations .the einstein tensor in the continuum is a rank 2 symmetric tensor , and hence has two indices .e. cartan has provided us with a geometric interpretation of this tensor and its components .he showed that the einstein tensor is expressible as a double dual of the sum of moments of rotation over the 2-dimensional faces of a 3-volume .thus one index can be interpreted as the orientation of that 3-volume , and the other can be associated with the orientation of the dual of the sum of moments of rotation .the cartan approach provides a derivation of the einstein equations independent of the action , and has been successfully applied to regge calculus .the usual regge equations are recovered as a sum of moments of rotations .thus for each edge in the regge spacetime the voronoi 3-volume dual to the edge , and each of the moment of rotation vectors over the 2-dimensional faces of are parallel and directed along , yielding a single regge equation per edge .it is in this sense that the regge equation is the double projection of the einstein tensor along edge l ( ie : ) . since the timelike edges in our kasner simulation are orthogonal to the homogeneous spacelike hypersurfaces , the regge equations associated with these edges ( the `` timelike regge equations '' ) are the canonical hamiltonian constraint equations .similarly , the other first - order regge equations , associated with the brace edges , are the only regge equations that carry components of the momentum constraints ( ) . thus we use three of these brace equations per vertex as momentum constraints , and the timelike regge equation as the hamiltonian constraint . in continuumgeneral relativity , the ten einstein equations are functions of the ten components of the metric tensor . 
however , of these ten metric components ,only six functions per spacetime point may be considered truly independent , since we have the freedom to choose a system of co - ordinates .correspondingly , there are four relations per point amongst the einstein equations themselves , the contracted bianchi identities .a similar structure is found in regge geometrodynamics .each vertex in the lattice may be associated with a unique set of edges , and each of these edges has a corresponding regge equation .however , the simplicial form of the contracted bianchi identity holds at each vertex , a total of four constraints per vertex .so there can only be truly independent edges per vertex , with the remaining four edges representing simplicial gauge freedom .this corresponds to the lapse and shift freedom in continuum relativity .the four redundant regge equations per vertex introduced in this process may be compared with the four constraint equations in continuum general relativity .the finite rotations involved in the simplicial contracted bianchi identity do not commute , so these `` identities '' are only approximate . in the infinitesimal limit , however , it is expected that the regge bianchi identities become exact in correspondence to the continuum .this failure to precisely conserve the constraints is no worse than the failure of a finite difference scheme to exactly conserve the numerical constraints .we demonstrate this conservation of energy - momentum in section [ sec : evolution ] .before evolution can proceed , we must construct initial data consistent with the regge constraint equations . all previous initial data constructed in regge calculus has been for the special case of a moment of time symmetry , where the problem reduces to the requirement that the scalar curvature of the initial three geometry is zero .successful time - symmetric regge calculations include initial data for single and multiple black holes , friedmann - robertson - walker and taub cosmologies , and brill waves on both flat and black hole backgrounds .the major simplification of calculating initial data at a moment of time symmetry is that the problem reduces to the calculation of the purely three - dimensional scalar curvature , and does not involve the four dimensional lattice structure . in spacetimes of astrophysical interestthere will not ordinarily be a moment of time symmetry to simplify the initial - value problem .indeed , this is the case in the kasner class of cosmologies considered here .the discrete time regge lattice neccesitates the construction of fully four - dimensional , two - slice initial data ( a `` thin sandwich approach '' ) .we apply a novel two - slice initial - value formalism , to be described in more detail elsewhere , to construct initial data for the kasner cosmology in regge calculus .the two - surface approach outlined below is constructed to mirror , in as far as possible , the york technique for constructing initial data in the continuum , rather than the belasco - ohanian two - surface formulation .the advantage of the york approach is that it clearly delineates the true degrees of freedom of the gravitational field from the embedding quantities .york begins by conformally decomposing the three - metric , introducing the conformal factor , and base metric .we are then free to specify the * base metric ( `` where '' ) .* momenta ( `` how fast '' ) , and * the trace of the extrinsic curvature , ( `` when '' ) . 
once these have been fixed , the only remaining variables are the conformal factor , and the gravitomagnetic three - vector potential .the four constraint equations are used to calculate these quantities .the key to our simplicial two - surface formulation of york s procedure is the simplicial representation of the dynamical degrees of freedom and embedding variables .the identification presented below is by no means unique . in other words ,we provide a representation of the degrees of freedom and not a diffeomorphically invariant , conformally invariant representation of the dynamic degrees of freedom . to this end, it is convenient to perform a conformal decomposition on each hypersurface , yielding leg lengths in ( ) of the form where is the base leg between vertices and lying in one hypersurface , the simplicial equivalent of the base metric .the conformal factor is defined on the vertices of the 3-lattice , and is applied to the edge between vertices and using a centred , second order , approximation within each hypersurface , 14 edges emanate from each vertex ; 6 cube - aligned , together with 8 diagonal legs .it is convenient at this stage to view the lattice , and the three - lattice in particular , from the viewpoint of a vertex data structure .the three geometry can be constructed by applying the generator shown in figure [ fig : qpl ] to each vertex , which consists of 3 cube - aligned legs , together with 4 diagonal edges .the four geometry sandwiched between the two hypersurfaces contains a single brace leg over each of these 7 edges per vertex , and so a similar decomposition into edges per vertex can be obtained for the brace legs .however , the exact structure of these braces depends on the choice made in the construction of the four lattice , and in particular , each type of vertex ( a , b , c , d ) will have a different arrangement of braces about it .since we are dealing with a full four - dimensional region of the lattice , the initial - value problem requires that we specify lapse and shift , otherwise we would be unable to construct the thin sandwich analogue of and .these quantities both depend on derivatives of the 3-metric , the lapse function and the shift vector .our freedom to freely choose lapse and shift is linked to the simplicial contracted bianchi identities , discussed above .although regge calculus deals directly with geometric , co - ordinate independent quantities , we must still specify how vertices on the initial three geometry are pushed forward in time .the york prescription outlined above allows us to freely specify the base three - geometry ( the `` where '' ) , ( `` when '' ) , and the momentum conjugate to the true dynamical degrees of freedom of the gravitational field ( `` how fast '' ) .a two - surface formulation of these ideas requires that the momenta terms be split across both surfaces . in direct analogy to the assignment of the base three - metric , we specify freely all base legs on the initial hypersurface . to avoid specifying everything on either surface ( and thus obtaining a belasco - ohanian style initial data prescription ) , we must not specify the conformal factor on . 
the conformal factor at each vertex on is freely specified instead , in analogy to the fixing of .the final step is to identify the regge analogue of .we choose the 4 diagonal base legs per vertex on to represent this freedom , since , together with the lapse and shift choice , they enable us to specify part of the change in the base `` metric '' across the sandwich . our two - surface formulation on the lattice , in the spirit of the standard york decomposition ,may be summarized as follows .specify * all 7 base edges per vertex on ( `` where '' ) , * the 4 diagonal base edges per vertex on ( `` how fast '' ) , * the conformal factor at each vertex on ( `` when '' ) , and * 4 lapse and shift conditions per vertex ( `` how fast '' and `` when '' ) , which leaves the conformal factor on , the 3 spatial cube - aligned edges per vertex on , and the 4 braces per vertex which lie over diagonal edges as the true representations of the geometric degrees of freedom .these 8 unknowns per vertex may be calculated by solving the 8 regge constraint equations per vertex which are available during the solution of the initial - value problem .the constraints are associated with the 7 braces per vertex , together with the timelike edge joining the vertex on to its counterpart on .it is possible to introduce sophisticated definitions of lapse and shift by mirroring the continuum structure , however a more natural approach is to select appropriate legs in the lattice , and define them to represent the simplicial gauge freedom . in this spirit , we define lapse to be the proper time measured along a timelike leg joining a vertex to its counterpart on the next hypersurface , and the shift to be a combination of three legs that uniquely fix the location of the new vertex above the previous hypersurface . detailed definitions of shift differ as to which type of vertex ( a , b , c or d ) we are considering .the evolution of an a - type vertex creates 14 brace edges stretching between and the nascent hypersurface , together with a timelike `` regge lapse '' edge .six of these edges will lie above cube - aligned edges in , and it is these braces that we select as our simplicial shift freedom . in particular , at a given a - type vertex , the shift edges correspond to the brace legs above the 3 cube - aligned edges in the vertex generator shown in figure [ fig : qpl ] .this definition of shift is also applied to the c - type vertices . 
in the case of b- and d - type vertices , assigning the lengths of these braces will not determine the position of the evolved vertex .this can be seen in figure [ fig:1 + 1 ] , where assigning all brace lengths will only rigidify the `` a''-type vertices , leaving the `` b''-type to flap .this would result in an ill - conditioned system of equations , since the location of the b - type vertices is completely undetermined .this -dimensional example suggests that we define the shift edges on b- and d - type vertices by assigning the lengths of the 3 cube - aligned edges in .the three edges chosen are always those associated with the vertex - by - vertex generator shown in figure [ fig : qpl ] .it is clear from this discussion that care must be taken in choosing the simplicial counterparts of lapse and shift .we now apply the initial - value formalism outlined above to the kasner class of cosmologies .the metric for these vacuum solutions of einstein s equations takes the form where the unknown constants satisfy we construct the two surface initial data such that the initial surface , , is at , and is at .the base leg lengths on are assigned the flat - space values obtained by setting in ( [ eqn : metric ] ) .this yields for the three cube - aligned base edges per vertex on , and for the 4 diagonal base edges per vertex .the conformal factor on is the simplicial equivalent of setting , and we choose to take the kinematic degrees of freedom correspond to the 4 diagonal legs per vertex on , and these are set in the same manner as the base diagonal legs on the initial slice , using ( [ eqn : metric ] ) evaluated at .this yields the only edges which remain to be fixed are our choice of shift and lapse .the lapse edge is assigned the squared proper time between slices , and the squared lengths of the shift edges are obtained by applying a power series expansion along the continuum geodesics .accurate to third order in the lattice spacing , the expansion between vertices and is where the continuum metric is used to obtain .the spatial edge length assignments above are identical to the expressions obtained from ( [ eqn : series ] ) , to third order . in principlewe could ignore the cubic term in the expansion , and take the squared shift edge lengths to be however it was found that the initial - value problem converged faster using the higher order expansion .the simplest approximation was tested , and found not to change the character of the initial - value solution appreciably .these expansions are used only whilst constructing initial - value data , and only when we wish to compare with a continuum metric .the shift conditions on a- and c - type vertices are applied to brace legs lying above cube - aligned edges , and the geodesic - length approximation ( [ eqn : series ] ) for these yields for the b- and d - type vertices , shift is applied to the three cube - aligned base spatial edges per vertex in , where the series expansion takes the form in all calculations shown in this paper , we use the qpl obtained by two barycentric subdivisions of the original cube .the three geometry consists of 128 vertices , 896 legs , 1536 triangles and 768 tetrahedra per spatial hypersurface .the region contained between two consecutive spacelike hypersurfaces contains 3072 four - simplices . 
whilst this is a modest grid resolution by modern standards ,the major aim of this work is the evaluation of regge calculus as a competitive technique in numerical relativity ; we feel that the grid is sufficient for this task , and experiments with increased spatial resolution do not effect the conclusions reached below .the kasner exponents enter into the freely specifiable portion of the initial data through equations ( [ eqn : next_leg ] ) , ( [ eqn : shift1 ] ) and ( [ eqn : shift2 ] ) .they are not used at any other stage of the calculations , and in particular , do not appear in the evolution problem .the information borrowed from the continuum solution in this way could , in principle , be constructed within the regge framework by seeking a solution with the required symmetries .analytic calculations in which the regge lattice is forced to maintain the symmetries of the kasner solution have been undertaken .these confirm that the regge equations produce the expected einstein equations for the kasner cosmology to leading order in the limit of very fine discretizations .any calculation must begin with a suitable ansatz to determine the desired class of solutions ; we have chosen the freely specifiable portion of the initial data to closely model the kasner cosmology , allowing us to compare with the analytic solution .this provides a test - bed for regge calculus , whilst maintaining maximal freedom in the non - specifiable portion of the initial data .the spatial length scales which appear in the geodesic expansions above must be chosen for a particular simulation , noting that the total volume of the initial base geometry will be . to avoid problems with the courant condition, the timestep is scaled together with the spatial resolution .we choose where .the only constraint on the choice of spatial length scale is the condition that we require many grid zones across the co - ordinate horizon , which for the metric ( [ eqn : metric ] ) is located at , so at we require .the initial - value algorithm outlined in section 4 can now be applied to the qpl model of the kasner cosmology .the freely specifiable variables are fixed using equations ( [ eqn : first])-([eqn : last ] ) , with equations ( [ eqn : shift1 ] ) and ( [ eqn : shift2 ] ) being used to enforce the shift conditions .the remaining variables at each vertex , namely the conformal factor on , the four diagonal brace edges between slices , and the three cube aligned edges in , are calculated by solving the available regge equations .the resulting system of eight equations per vertex ( a total of 1024 variables for the qpl considered here ) is solved using newton - raphson iteration .finally , we present a sample initial data solution for an axisymmetric kasner cosmology ( ) , where we take and .the initial - value solution is found to be homogeneous to a very high degree .solving the constraints for the independent variables in this case yields the mean values where , and are the three cube - aligned legs in , is a diagonal brace edge between slices , and is the conformal factor on . the standard deviation of both the edges and the conformal factor from these mean values is of the order , which is the convergence tolerance used for the newton - raphson iteration .in the previous sections we described the structure of the quantity production lattice , and built two - surface initial data for the kasner cosmology . 
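before moving on to the evolution, it is worth noting that the constraint solve described above is an ordinary newton - raphson iteration on a coupled algebraic system. a minimal generic driver with a finite - difference jacobian is sketched below; the residual function used here is a two - equation toy stand - in, not the actual regge constraints, which would return the eight equations per vertex (1024 in total for the qpl used here).

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-10, max_iter=50, fd_step=1e-7):
    """plain (undamped) newton-raphson with a finite-difference jacobian.
    'residual' maps an n-vector of unknowns to an n-vector of equation values."""
    x = np.array(x0, dtype=float)
    for it in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r, ord=np.inf) < tol:
            return x, it
        n = x.size
        jac = np.empty((n, n))
        for j in range(n):           # jacobian built column by column
            xp = x.copy()
            xp[j] += fd_step
            jac[:, j] = (residual(xp) - r) / fd_step
        x -= np.linalg.solve(jac, r)
    raise RuntimeError("newton-raphson failed to converge")

# toy stand-in for the coupled per-vertex regge constraints
def toy_residual(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

sol, iters = newton_raphson(toy_residual, x0=[1.5, 0.5])
print("toy solution", sol, "after", iters, "iterations")
```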
we now consider the evolution of this data .the initial data is evolved using the parallel sorkin evolution scheme described by barrett . in section [ sec : kinematics ] we constructed the simplicial lattice from the initial 3-geometry by dragging forward individual vertices .this procedure was used because it results in a lattice suited to sorkin evolution . the key realization in the sorkin approach , based on the original examples ,is that when a vertex is carried forward to the next slice , a new regge equation becomes available below every new edge created .thus a purely local evolution algorithm is possible .the evolution of a single vertex in the quantity production lattice creates 15 new edges , and makes available 15 regge equations . in the case of the a - type vertices, 14 brace edges are created between and , together with the timelike edge .this completely encloses the region surrounding the 14 spatial edges emanating from the vertex in , and the timelike edge joining the vertex to its future counterpart .the regge equations associated with the legs in emanating from the vertex a , together with the timelike edge , may be used to solve for the lengths of the 14 new brace edges and single new timelike edge .a similar situation applies for the b - type vertices , except here there are 6 new cube - aligned edges in , 8 new spacelike diagonal braces and a timelike leg between the cauchy slices . in this case , the new regge equations which become available are associated with the timelike edge , the 6 cube - aligned braces below the new spatial edge , and the 8 spatial diagonal edges in . evolving a c - type vertexcreates 6 new brace edges , together with 8 spatial edges and a single timelike edge , and the available regge equations correspond to the 6 spatial edges in , 8 brace edges , and the timelike edge .finally , d - type vertices create a vertical edge and 14 new spatial edges on the newly formed spacelike hypersurface .the evolution equations in this case are associated with the brace edges between slices and the timelike edge at the vertex . however , this is not the entire story . as we saw in section[ sec : dynamics ] , the simplicial form of the contracted bianchi identities imply that four equations per vertex are dependent on the remaining equations .the quantity production lattice has a total of 15 dynamical edges per vertex during evolution , but we can only consider 11 of those as independent , with the remaining four freely specifiable the simplicial equivalents of shift and lapse freedom . we choose to apply the lapse and shift conditions on the same combination of edges as in the initial - value problem , however the technique is slightly different .consider an a - type vertex . 
about this vertexthere are 6 brace edges , lying above the cube - aligned edges in the current hypersurface .the zero shift condition is applied by demanding that opposing brace edges , lying along the same `` axis '' , have equal lengths .this condition is applied to each of the three pairs of such edges .provided that the homogeneity of the initial slice is maintained , this ensures that the timelike edge is locally orthogonal to the current hypersurface .similar conditions are used for the remaining vertices , applied either to brace edges ( c - type ) or spatial edges on the next hypersurface ( b- and d - type vertices ) .the lapse freedom is utilised to specify the squared length of the timelike edge at each vertex .the application of four gauge conditions per vertex leaves 11 dynamical edges and 15 equations at each vertex in the lattice .clearly 4 regge equations per vertex are redundant , becoming constraints which may be tracked during evolution .the redundant equation associated with lapse freedom is our `` hamiltonian constraint '' , the timelike regge equation .when applying shift conditions , we choose the redundant equation to lie below the edges upon which the gauge condition is applied . for a- and c - type vertices , shift is applied to brace edges , so the 3 redundant equations per vertex correspond to the cube - aligned spatial edges in the vertex generator ( see figure [ fig : qpl ] ) . for b- and d - type vertices ,shift is applied to spatial edges in , so the redundant equations are associated with the brace edges lying below the 3 cube - aligned vertex generator legs on .the sorkin evolution scheme is , by construction , a partially constrained algorithm , since the redundant equations associated with shift edges at a- and c - type vertices are second order evolution - type equations , whereas at b- and d - type vertices we discard first order constraint - type equations . for simplicity we shall refer to the redundant shift equations at each vertex as the `` momentum constraints '' , although they are in fact a mixture of first - order and second - order equations .the evolution of an axisymmetric kasner cosmology ( with kasner exponents ) is shown in figure [ fig : results ] , and figure [ fig : flat ] shows the evolution of an expanding flat space solution , corresponding to the kasner exponents .both runs cover a ten - fold increase in the three - volume .it can be seen that the fractional error in the three - volume remains small throughout the evolution .the fractional volume discrepancy is defined as and the summation is over all tetrahedra on a surface . also apparent , particularly in the axisymmetric kasner solution in figure [ fig : results ] ,are high frequency oscillations ( typical wavelengths of a few timesteps ) , which decrease in magnitude as the evolution proceeds .it was pointed out to us by r. 
matzner that gravitational waves in a kasner background would evolve with such a behaviour .we believe we are seeing a grid resolution limit effect reminiscent of such waves ; however , in our case they are not fully resolvable .the oscillations are controllable , with magnitudes varying as and wavelengths reducing as , where is a typical edge length on the lattice .the presence of these relatively high frequency , low amplitude waves does not appear to cause any instabilities , and it is apparent from figures [ fig : results ] and [ fig : flat ] that the waves gradually die out as the lattice evolves .the evolution of the three different classes of spatial cube - aligned edge are shown in figures [ fig : results] and [ fig : flat] , and are found to closely match the expected kasner evolution . in both figures the and edges lie atop one another .the standard deviation of the spatial edge , shown in part of the figures , increases gradually throughout the evolution .it reaches a few parts in after ten thousand timesteps for the axisymmetric kasner solution , and remains below in the expanding flat spacetime of figure [ fig : flat ] .we now consider the convergence properties of the simplicial cosmological solution . due to the global topology and homogeneity of the solution ,we are able to examine convergence by reducing the typical scale length of the lattice , whilst keeping the number of vertices fixed .this is equivalent to looking at a smaller region of the total manifold .for the remainder of this section we introduce the scale parameter , defined such that which ensures that the courant condition remains satisfied , and the total volume of the initial base three - geometry is .there are several important convergence issues that must be examined in regge calculus , and our fully simplicial calculation provides an excellent opportunity to do so for a specific application .the first , and most important issue to address is the convergence of simplicial solutions to the corresponding solutions of einstein s equations .the second issue is the consistency of the regge equations .that is , as the typical lattice length scale is reduced , do the redundant regge equations also converge ?this latter question is related to the simplicial bianchi identities , and the conservation of energy momentum .we find convergence in all solutions and equations .the error in lattice edge lengths compared to corresponding geodesic segments in the continuum was examined , together with the fractional difference of the simplicial three volume from linear expansion displayed by the continuum .the fractional volume discrepancy , ( [ eqn : volume ] ) , was evaluated at , although due to the fluctuations apparent in figure [ fig : results ] , the values were averaged over several periods of the oscillation .the results are shown in figure [ fig : converge ] .figure [ fig : converge ] also shows that the diagonal spatial edges converge to the continuum solution second order in the typical lattice scale length .all other spatial edges were found to converge at the same rate .this indicates that the simplicial kasner solution converges to the continuum solution as the second power of the lattice spacing .recent work has cast doubt over the convergence of regge calculus to the continuum .the calculations of brewin applied continuum geodesic lengths directly to the lattice , and evaluated the regge equations on the resulting simplicial spacetime .numerical convergence tests indicated that the regge equations failed to 
converge as the scale length of the lattice was decreased .we have shown explicitly , for one of the cases studied by brewin , that solutions of the regge equations converge to the corresponding solution of the continuum einstein equations .figure [ fig : converge ] also displays the convergence of the redundant regge equations as the lattice is refined . the standard deviation of the momentum constraints , a combination of both evolution and true constraint - type equations , is shown to converge as at least the second power of .the hamiltonian constraint shows linear convergence .the convergence of the equations is consistent with the regge bianchi identities , and the existence of diffeomorphism freedom in regge calculus .the convergence analysis shows that for the kasner cosmological model , regge calculus is a second order discretization of einstein s theory of gravity .we have successfully performed the first fully -dimensional calculation in regge calculus , without the imposition of symmetry conditions . in the process we applied a novel 2-surface initial value formalism to the lattice , demonstrated the gauge freedom implied by the simplicial form of the contracted bianchi identities , and showed explicitly that the solution of the regge equations converges to its continuum counterpart .the simplicial kasner solution was found to agree well with the analytic solution , and maintains a remarkable degree of homogeneity throughout the evolution .convergence analysis showed that all edge lengths converge to their continuum values as the second power of the typical lattice length scale , countering recent doubts over the convergence of regge calculus to general relativity .the regge constraint equations , arising from simplicial gauge freedoms , were also found to converge to zero as the lattice was refined .this demonstrates explicitly the existence of gauge freedom in the lattice , the simplicial counterpart of coordinate freedom in continuum general relativity .three issues direct our research in the immediate future .first is an analytic formulation of the convergence properties of the regge equations , and their solutions .secondly , we wish to provide an benchmark of our code , based on a taub - like cosmology .finally , we are investigating planar numerical perturbations of the kasner cosmology , in order to analyze gravity wave propagation in our simplicial spacetime geometry . in the longer termwe intend to address three issues using regge calculus .the first is related to the wiring of matter terms to the lattice geometry .we can not think of a better approach than to utilize the cartan analysis described in section [ sec : dynamics ] . with a formulation of matter in regge calculus, we can begin to study the generic properties of collapse in inhomogeneous cosmologies . at both early and late stages in cosmological expansion, the effective gravity wave energy density dominates the contributions from matter and radiation fields . for this reason we wish to investigate geon states of the gravitational field .we believe that these goals provide a clear direction for the future development of regge calculus .we wish to thank john a. 
wheeler for his continued encouragement to tackle this problem . we are also indebted to leo brewin , ben bromley , arkady kheyfets , pablo laguna , richard matzner and ruth williams for many stimulating discussions on this and related topics . we wish to acknowledge support for this work from an ldrd grant from los alamos national laboratory , and the sir james mcneill foundation .
|
we describe the first discrete - time 4-dimensional numerical application of regge calculus . the spacetime is represented as a complex of 4-dimensional simplices , and the geometry interior to each 4-simplex is flat minkowski spacetime . this simplicial spacetime is constructed so as to be foliated with a one parameter family of spacelike hypersurfaces built of tetrahedra . we implement a novel two - surface initial - data prescription for regge calculus , and provide the first fully 4-dimensional application of an implicit decoupled evolution scheme ( the `` sorkin evolution scheme '' ) . we benchmark this code on the kasner cosmology a cosmology which embodies generic features of the collapse of many cosmological models . we ( 1 ) reproduce the continuum solution with a fractional error in the 3-volume of after evolution steps , ( 2 ) demonstrate stable evolution , ( 3 ) preserve the standard deviation of spatial homogeneity to less than and ( 4 ) explicitly display the existence of diffeomorphism freedom in regge calculus . we also present the second - order convergence properties of the solution to the continuum . [ regge model of the kasner cosmology ]
|
the nonlinear and fracturing behavior of quasi - brittle , heterogeneous materials such as fiber reinforced composites , toughened ceramics , and cementitious composites , among many others , is governed by weak spots and defects of their internal structures which cause initiation of cracks and localization of damage . in order to model such behavior, one can use discrete particle models or continuum models .each type of model has its own advantages and disadvantages . in discrete particle models ,the internal structure of materials is directly simulated through its geometric approximation and accounts for the actual intrinsic randomness specific of the selected length scale .particles replicate the effect of major heterogeneities and their interaction is formulated by contact algorithms or through inter - particle lattice struts whose constitutive behavior models specific physical mechanisms . as such, discrete models have the advantage that many microscopic effects can be accounted for in a straightforward manner .however , they often lead to large computational systems , especially to simulate real structures , and require a significant , often overwhelming , amount of computational resources . continuum models are computationally less expensive . butmost of the continuum models are based on the cauchy continuum which can not take into account features of the internal structure and does not possess a characteristic length .these type of models have some well - known drawbacks .first of all , they are not able to produce a mathematically well - posed problem in the softening regime causing strong mesh sensitivity in the numerical results .secondly , the dispersive nature of wave propagation , typical of many heterogeneous materials , can not be reproduced .furthermore conventional continuum formulations predict unrealistic stress singularities at the tip of sharp cracks and fail to simulate size effect phenomena correctly . in order to overcome these drawbacks ,non - local continuum theories have been proposed in the literature .a popular approach consists in enriching continua with high order strain and stresses .cosserat brothers study was one of the initial attempt in this direction .they considered independent rotational degrees of freedom , modeling smaller scale kinematics , in addition to translational degrees of freedom .later , the couple - stress theory was developed by toupin . in this modelthe only independent field is the displacement field that also determines the rotations .eringen developed the so - called micropolar continuum theory and non - local elasticity .another type of high - order theory incorporate higher order gradients of displacements , mostly second gradient ( strain gradient ) and sometimes higher , into the constitutive model .mindlin proposed an enhanced elastic theory with microstructural effects by considering the potential energy density as a quadratic form of gradient of strain in addition to quadratic form of strains .aifantis and coworkers introduced a simplified gradient elasticity theory with one internal length scale which incorporates laplacian of strain to enrich the continuum .they applied it for investigation of crack tips , localization of shear bands and to simulate size effect in torsion and bending .various forms of gradient elasticity and their performance in static and dynamic applications are discussed by askes and aifantis .they also provide a procedure for the identification of the relevant characteristic lengths . 
in this work ,the finite element implementation of gradient elasticity was also discussed .al developed a high - order gradient model based on a discrete particle structure , which includes both high - order strains and stresses .they investigated the possibility of formulating high - order constitutive equations with and without high order stresses .they found that , contrarily to the model with high - order stresses , the one without is unstable .misra and yang developed a model for cohesive materials based on microstructural concepts .they considered a representative volume of material as a collection of grains interacting through inter - granular force displacement relations .inter - granular force displacement relations are formulated based on atomistic - level particle interactions .these force displacement relationships are then used to derive the incremental stress - strain relationship of a representative volume of material .later yang et .al applied this type of microstructural granular based higher - order continuum theory to model the failure behavior of nanophased ceramics .they compared results obtained from the ab initio simulations with the result of higher - order continuum theory with good agreement .nonlinear nonlocal constitutive equations were also proposed without enrichment of the elastic part and by using integral or gradient approaches to enrich plasticity and damage theories .integral type models are based replacing a local variable with its weighted spatial averaging over a certain volume of material .pijaudier - cabot and baant pioneered research in this area . on the contrary ,gradient plasticity or gradient damage models are based on enriching state variables with high - order terms .a phenomenological strain gradient plasticity theory was developed by fleck and hutchinson in the context of couple - stress theory where only rotational part of strain gradients is taken into account .they extended the classical j2 plasticity to account for strain gradient effects .later they reformulated their model and included also stretch gradients in addition to rotational ones .they investigated their model performance by applying it to the simulation of size effect on torsion of wires , sheet bending , indentation and void growth .later , the so - called mechanism based theory of strain gradient plasticity ( msg ) was proposed on the basis of a multiscale framework linking statistically stored and geometrically necessary dislocations to plastic strains and strain gradient .a reformulation of mechanism - based strain gradient plasticity was given by that involves the third - order tensor of higher - order stress to a much simpler version within fleck and hutchinson s theoretical framework of strain gradient plasticity theory .huang et al . provide a conventional theory of mechanism - based strain gradient plasticity where the plastic strain gradient appears only in the constitutive model , and the equilibrium equations and boundary conditions are the same as the conventional continuum models .jiang et al . studied on fracture employing mechanism - based strain gradient plasticity .finite element implementation of material models that includes second gradient of displacements as degrees of freedom , require continuity at the inter - element boundaries or mixed element formulation in which both kinematic and static variables are defined as degrees of freedom .papanicolopulos et al . , zervos et al . developed type finite element for gradient elasticity .fischear et al . 
implemented isogeometric analysis for gradient elasticity in two dimensions . on the contrary , in mixed formulations , continuity element can be used but in this case first derivatives should be interpolated in addition to displacements which leads to formulations with a large number of degrees of freedom .these types of elements can be non - conforming and are required to fulfill the patch test .furthermore , ru and aifantis developed a strategy based on operator split method for their simplified strain gradient model , in which the fourth - order partial differential equations of gradient elasticity are split into two separate second - order partial differential equations and they implement it using finite element .classical microplane model formulations , pionered by baant and coworkers , assume that nonlinear phenomena such as , but not limited to , fracture , shearing , and plastic deformations occur on specific orientations within the internal structure of the material . therefore , microplane models simulate these phenomena through vectorial constitutive laws formulated on the so called `` microplanes '' representing generic orientations in space .the strain vector on a generic microplane is obtained by projecting the strain tensor onto a local system of reference and the stress tensor is computed from tractions on all possible orientations in space through an energetic equivalence .the initial formulations of microplane models focused on concrete materials and the most recent one , labeled as m7 , has demonstrated a remarkable ability to reproduce typical experimental data relevant to a large variety of loading conditions , from tension to confined compression ; from quasi - static to highly dynamic conditions .microplane models have also been successfully developed for other materials such as rock ; rigid foam and shape memory alloys ; fiber reinforced concrete ; as well as composite laminates .the aforementioned microplane formulations are `` local '' in the sense that the stress tensor at a given point is only function of the strain tensor at the same point .unfortunately , such constitutive equations suffer from mesh sensitivity and spurious energy dissipation when softening behavior is simulated . to overcome this problem ,nonlocal microplane models were proposed by various authors by exploiting integral methods , in which the nonlinear behavior at one point depends on a weighted average of the strain in the neighborhood of that point . these approacheswhile preventing mesh sensitivity quite effectively lead to a significant increase of the computational cost of simulations .an alternative to integral microplane models is to adopt strain - gradient formulations such as the ones proposed by kuhl et al . and , more recently , by cusatis and zhou . 
in the latter ,starting from a discrete particle model , they derived a high order microplane ( hom ) theory which includes cosserat theory and strain gradients elasticity as a special cases .the hom model possess two characteristic lengths : the first associated with the spacing of flaws in the material internal structure and related to the gradient character of the continuum ; and the second associated with the size of these flaws and related to the micro - polar character of the continuum .following cusatis and zhou , one can calculate microplane strains and curvatures as where is the strain tensor , is the curvature tensor ; is the strain gradient tensor also referred to as high order strain tensor ; , are displacement and rotation , respectively , at a generic position ( ) in a 3d continuum .the second order tensor and the third order tensor denote projection operators for strain and strain gradient , respectively , in which are unit vectors defining a local system of reference on each microplane ( see fig [ geometry ] ) .the characteristic length , assumed to be a material property , can be interpreted to be the average half spacing of weak spots in the material internal structure . by using appropriate vectorial constitutive equations, the microplane stresses and microplane couple stresses can be calculated from the microplane strains and microplane curvatures : formally , one can write and , which can be used to obtain stress , couple stress , and high order stress tensors as where the integrals are calculated over the surface of a unit sphere , ; ; and summation rule applies over . furthermore , stress , couple stress , and high order stress tensors must satisfy the conservation of linear and angular momenta which reads where is the so - called effective stress tensor ; denote body forces and moments ; and is the levi - civita permutation symbol .the weak form of equilibrium that is used for the finite elelemnt formulation read finally , the differential boundary value problem presented above must be completed with appropriate boundary conditions . with reference to fig .[ hom_bc ] , one can write , on the volume boundary , or , or , or , and , where is the unit vector orthogonal to the boundary , is the surface gradient differential operator , = kronecker delta , =applied tractions , =applied displacements , =applied moment energetically conjugate to , is the displacement normal derivative , = applied displacement normal derivative , =applied moments energetically conjugate to rotations , =applied rotations , = applied edge traction acting along a sharp edge , is the jump operator defined on a sharp edge , , and is the unit vector tangent to a sharp edge ( see fig . [ hom_bc ] ) .detailed derivation of eqs .[ eq : equilibrium ] and boundary conditions can be found in cusatis and zhou .isogeometric analysis ( iga ) , introduced by hughes et al . , combines computer aided design ( cad ) and finite element method ( fem ) technologies . in iga , the basis functions used in cad to describe exactly the geometry of a certain volume of material are also used to discretize field variables describing , for example , its deformation and mechanical behavior .cad technologies commonly employ non - uniform rational b - splines ( nurbs ) basis functions which are built from b - splines .b - splines are piecewise polynomials composed of a linear combination of basis functions defined recursively as where ; and are number and maximum order , respectively , of the basis functions . 
the _ knots vector _, is a set of non - decreasing real numbers called _ knots _ and representing coordinates in the parameter domain . if all knots are equally spaced , the knot vector is called uniform , otherwise they are non - uniform . for , is contained in the interval ] can be expressed as where where , , are the basis function defined according to eq .[ eq : basisp ] and with reference to given knot vectors for each direction , and ; is the position vector of a _ control point _ ( a.k.a . _node _ , per the usual computational mechanics terminology ) defined on a 3d grid ( , , ) associated to the knot vectors , , and ; is the total number of control points . in this studya isogeometric finite element is implemented in 3d with interpolation of displacement and rotational fields assumed to be independent .this element features 6 degrees of freedom ( 3 displacement and 3 rotations ) at each control point .following the isoparametric concept , the displacement field and rotation field are formulated using the same nurbs basis functions of the geometry and , for a given element , they can be approximated in terms of element s control point displacements and rotations as follows where is the number of control points that are supported by one element and ] the nodal rotation vector .it is worth nothing that in this study the adopted shape functions in eq .[ eq : displacement_field ] are quadratic which can be obtained from eq .[ eq : nurbsolid2 ] with and control points per element . in eq .[ eq : displacement_field ] and throughout the rest of the paper , the shape function superscripts are dropped for clarity of notation .low and high order strain fields are obtained computing the spatial derivatives of the displacement field : where ^t ] collecting all degrees of freedom of the element ( in total ) , is a differential operator , is the shape function matrix and is the strain - displacement matrix obtained by applying the differential operator to the shape functions .the matrix can be subdivided into sub - matrices for low - order and high - order terms and the separate contribution of different nodes : in which , \qquad \mathbf{b}_{i \kappa } = \left [ \begin{array}{cccccc } 0 & 0 & 0 & r_{i , x } & 0 & 0 \\ 0 & 0 & 0 & 0 & r_{i , y } & 0 \\ 0 & 0 & 0 & 0 & 0 & r_{i , z } \\ 0 & 0 & 0 & 0 & r_{i , x } & 0 \\ 0 & 0 & 0 & 0 & 0 & r_{i , y } \\ 0 & 0 & 0 & r_{i , z } & 0 & 0 \\ 0 & 0 & 0 & r_{i , y } & 0 & 0 \\ 0 & 0 & 0 & 0 & r_{i , z } & 0 \\ 0 & 0 & 0 & 0 & 0 & r_{i , x } \end{array } \right]\ ] ] ^t ] is the stress vector collecting all components of the stress tensor , couple stress tensor and high order stress tensor ; and is the stiffness matrix , . 
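the b - spline and nurbs basis functions entering the element formulation above can be evaluated with the standard cox - de boor recursion. the one - dimensional sketch below is illustrative only: the knot vector, polynomial order and weights are assumed example values, and the trivariate basis used by the solid element is obtained as the tensor product of such one - dimensional functions.

```python
import numpy as np

def bspline_basis(i, p, knots, xi):
    """cox-de boor recursion for the i-th b-spline basis function of order p
    evaluated at parameter xi; the 0/0 convention is handled by returning 0.
    note: the half-open support convention means xi equal to the last knot
    requires the usual special handling, not included here."""
    if p == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] > knots[i]:
        left = (xi - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, knots, xi)
    right = 0.0
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - xi) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, knots, xi)
    return left + right

def nurbs_basis(p, knots, weights, xi):
    """rational (nurbs) basis functions obtained by weighting the b-splines."""
    n_funcs = len(knots) - p - 1
    N = np.array([bspline_basis(i, p, knots, xi) for i in range(n_funcs)])
    W = N * np.asarray(weights, dtype=float)
    return W / W.sum()

# quadratic (p = 2) open knot vector on [0, 1] with one interior knot
knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
weights = [1.0, 1.0, 1.0, 1.0]   # equal weights: nurbs reduce to b-splines
for xi in (0.1, 0.5, 0.9):
    print(xi, nurbs_basis(2, knots, weights, xi))
```

the printed values sum to one at every evaluation point, which is the partition-of-unity property exploited by the isoparametric interpolation above.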
even though there are recent studies attempting to derive optimal quadrature rules for iga , in this study , gauss quadrature is used and number of gauss points in each direction is assumed equal to be 1 plus the order of basis function in each direction , that is three for quadratic element which leads to 27 gauss points per element .gauss quadrature was originally developed for polynomial function , but it has been employed successfully for iga element as well .it is important to point out that the proposed element formulation implicitly assumes that the high order surface moments conjugate to the normal gradient of the displacement field ( see sec .[ sec : microplane ] and fig .[ hom_bc ] ) are zero .this is due to the fact that only displacement and rotation degrees of freedom are introduced and no kinematic boundary conditions can be imposed on the normal gradient of displacements .the performance of the proposed isogeometric implementation of the high order microplane model is demonstrated in this section with reference to classical examples for gradient elasticity , cosserat elasticity and strain softening .the exact analytical solution for a cantilever beam subjected to a concentrated load at the free end is used here as a reference for comparison with the strain gradient version of the high order microplane model . to obtain the macroscopic constitutive equations used to derive the exact solution , the microplane constitutive equation for gradient elasticity are formulated as follows : and , where , , , is the kronecker delta , , and , , are volumetric , deviatoric and shear elastic moduli , respectively .as already shown by cusatis and zhou [ ref ] , eq .[ hom : strain - gradient - elasticity ] lead to classical elastic constitutive equations if , and where young s modulus and poisson s ratio . with the formulation in eq .[ hom : strain - gradient - elasticity ] , the microplane integration ( see eq .[ eq : stress_integration_sphere ] ) can be done analytically and explicit form of stress , couple stress , and high order stress tensor can be obtained as follows : \gamma_{kl } , ~~ \mu_{ij}=0 \nonumber \\\sigma_{ijk}=r_0 ^ 2 \left [ e_v d_{ijklmp } + e_t \left(e_{ijklmp}+f_{ijklmp}\right ) \right ] \gamma_{lmp}\end{gathered}\ ] ] where , , , , , and . by assuming , , and then the gradient elasticity formulation in beskou et al obtained for the components and .it must be observed however that this microplane formulation leads to other non - zero high order stress components which are absent in the analytical solution .a cantilever beam , 1000 mm long with mm cross - section , is modeled with quadratic isogeometric solid elements .the beam is discretized with 4 different meshes with , , and elements , respectively .essential boundary conditions are applied at both ends of the beam .e=25000 mpa , =0.2 , and are assumed in the calculations . 
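the 27 - point rule mentioned above is simply the tensor product of three - point gauss - legendre rules in the three parametric directions (one point more than the basis order per direction, as stated in the text). a minimal sketch of its construction and a sanity check on a polynomial integrand:

```python
import numpy as np

# 3 gauss-legendre points per parametric direction -> 27 points per element
pts_1d, wts_1d = np.polynomial.legendre.leggauss(3)

gauss_points = np.array([[xi, eta, zeta] for xi in pts_1d for eta in pts_1d for zeta in pts_1d])
gauss_weights = np.array([wx * wy * wz for wx in wts_1d for wy in wts_1d for wz in wts_1d])

assert gauss_points.shape == (27, 3) and abs(gauss_weights.sum() - 8.0) < 1e-12

# sanity check: the rule integrates a quartic polynomial exactly on [-1, 1]^3
f = lambda x: (x[:, 0]**4) * (x[:, 1]**2) + x[:, 2]**2
numeric = np.sum(gauss_weights * f(gauss_points))
exact = (2.0 / 5.0) * (2.0 / 3.0) * 2.0 + 2.0 * 2.0 * (2.0 / 3.0)
print("quadrature error:", abs(numeric - exact))
```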
in fig .[ fig : cantilever_beam_con ] the relative displacement ( normalized with displacement value obtained from the cauchy solution ) as a function of position along the beam is given for different meshes and compared with analytical result given by beskou et al .it can be seen from the figure that with mesh refinement the tip displacement converges to a certain value .the overall calculated elastic curve is in general good agreement with the analytical solution , to which , however , it does not converge exactly .the main reason for this discrepancy is that in the analytical solution , they only consider one high - order term , but the proposed hom model is a 3d model and it features automatically other high order terms .[ fig : cantilever_beam_disp ] reports the obtained response for different values of the internal length scale , .as expected , with the increase of the internal length scale , the beam response becomes stiffer . also , internal length that tends to zero, the response converges to the classical bernoulli beam solution .the second beam example is relevant to an experimental study carried out by lam et al . on epoxy beams .tests were carried out in plane strain conditions on end - point loaded cantilever beams with 20 , 38 , 75 , 115 m depth and length - to - depth ratio equal to 10 .the elastic properties of the epoxy are reported as gpa and .the microplane simulations were performed with and and the material characteristic length was optimized according to the experimental result for the 20 m beam .the best fit is obtained with m .based on these material parameters , the other beams are simulated .3d quadratic isogeometric elements are used and the beams are discretized with elements . in order to provide plane strain condition , the lateral displacement ( ) of nodes on external surfaces ( x - y plane ) are restricted .figure [ fig : cantilever_beam_stiffness ] illustrates the change of normalized bending stiffness , , predicted by the proposed model and obtained in the experiments . in the definition of , is the applied load ; is the corresponding displacement ; , and are beam thickness , depth and length , respectively .the model can reproduce very well the beam behavior exhibiting size effect : the smaller the thickness the larger the normalized bending stiffness .this behavior can not be produced by classical beam , which always gives constant value for bending stiffness corresponding to the current formulation for large thickness values . the presented high order microplane theory reduced to a cosserat continuum when the material characteristic length , , is equal to zero , andthe microplane constitutive equations are formulated as follows : where is the volumetric strain , , , . also in this case , the microplane integration ( eq .[ eq : stress_integration_sphere ] ) can be done analytically and explicit form of stress and couple stress tensors are obtained as follows : \gamma_{kl } \nonumber \\\mu_{ij}= w_v \chi_v v_{ij } + \left [ w_d a_{ijkl}+w_t \left(b_{ijkl}+c_{ijkl}\right ) \right ] \kappa_{kl}\end{gathered}\ ] ] where the tensors , , and are the same ones introduced in the previous section . cusatis andzhou derived the relationship between the microplane parameters and the ones of cosserat elasticity as , , and , , . 
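both the strain - gradient and the cosserat versions of the model rely on projecting macroscopic tensors onto each microplane through a local orthonormal triad. the sketch below builds such a triad and applies the kinematic constraint to a strain tensor; the symmetric forms of the projection operators used here are the standard microplane choices and are an assumption in so far as the exact hom definitions are concerned.

```python
import numpy as np

def microplane_frame(theta, phi):
    """orthonormal triad (n, m, l) for a microplane whose normal n is given by
    the polar angle theta and azimuthal angle phi."""
    n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    m = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
    l = np.cross(n, m)
    return n, m, l

def microplane_strains(eps, n, m, l):
    """kinematically constrained microplane strains obtained by projecting a
    (symmetric) macroscopic strain tensor: one normal and two shear components."""
    N = np.outer(n, n)
    M = 0.5 * (np.outer(m, n) + np.outer(n, m))
    L = 0.5 * (np.outer(l, n) + np.outer(n, l))
    return np.tensordot(N, eps), np.tensordot(M, eps), np.tensordot(L, eps)

# uniaxial strain along x projected onto a generic microplane orientation
eps = np.diag([1e-3, 0.0, 0.0])
n, m, l = microplane_frame(theta=np.pi / 3, phi=np.pi / 5)
print("eps_N, eps_M, eps_L =", microplane_strains(eps, n, m, l))
```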
in order to demonstrate the efficiency of the isogeometric element for the integration of the cosserat theory ,this section discusses a well - known stress concentration problem consisting of an infinite plate with a circular hole subjected to a uniaxial far field tension .the cosserat continuum has a characteristic length which arises from the unit mismatch between stresses and couple stresses .this effects the stress concentration factor around the hole .analytical solutions of this problem for classical cosserat continuum was given by eringen and for couple stress theory by mindlin in 2d case .in both references the stress concentration factor is defined as where / [ 4+(r / l_c)^2 + 2 ( r / l_c ) k_0(r / l_c)/k_1(r / l_c ) ] ] , ^{1/2} ] , which is a non - dimensional quantity taking values between 0 and 1 . for the couple - stress theory is recovered and for the cauchy continuum is obtained .this example was also simulated by pothier and rencis and nakamura et .al . . in the finite element simulations discussed in this section, the material parameters are taken from ref . and they are reported in table [ table_scf_mat ] .four simulations with different coupling factor are performed by assuming plate dimensions of mm and hole radius of mm .considering the symmetry along the x- and y - axes , only one quarter of the specimen is modeled as shown in figure [ plate_hole ] . applied boundary conditions are also specified in figure [ plate_hole ] .a 3d finite element model was set up with quadratic isogeometric solid element and the iga meshes was obtained by using igafem .two different meshes are used with 1024 ( ) and 4096 ( ) elements . for both mesh only one element is used along the thickness .the coarser mesh is shown in fig .[ plate_hole ] where a zoom - in highlights the geometry of one element and the corresponding control points .two more layers of control points are present through the thickness ( for a total of 27 control points ) but they are not visible in the figure . table [ table_scf ] presents the comparison of the obtained stress concentration factors with the analytic solution .the results show very good agreement with analytical values with error less than 3.7 % and 0.9 % for mesh-1 and mesh-2 , respectively .this section presents the simulation of softening behavior with the objective of evaluating the effectiveness of the regularization strategy proposed by cusatis and zhou within the presented high order isogeometric implementation .the adopted softening constitutive equation follows the formulation proposed by cusatis et al . .the microplane couple stresses are assumed to be zero =0 .the microplane stresses are computed with damage - like constitutive equations as follows where is a material parameter , is the effective stress , is the total shear stress , is the effective strain , and is the total shear strain . in the elastic regime ,the effective stress is proportional to the effective strain : , in which is the microplane normal modulus . and be related to young s modulus , , and poisson s ratio , , through the following expressions : and .the nonlinear behavior is imposed by computing the effective stress incrementally , , and satisfying the inequality where ] , where is the ratio of shear strength to tensile strength . 
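for completeness, the analytic stress concentration factor used in the comparison above can be evaluated directly. the sketch below assumes the couple - stress limit (coupling factor equal to one): the bracketed denominator matches the expression quoted in the text, while the 8 ( 1 - nu ) numerator and the overall form k = ( 3 + f ) / ( 1 + f ) are taken from the classical couple - stress result of mindlin and should be treated as assumptions here.

```python
import numpy as np
from scipy.special import kn

def scf_couple_stress(r_over_l, nu):
    """stress concentration factor at a circular hole, couple-stress limit:
    k = (3 + f) / (1 + f),
    f = 8 (1 - nu) / [4 + (r/l)^2 + 2 (r/l) K0(r/l) / K1(r/l)]."""
    x = float(r_over_l)
    f = 8.0 * (1.0 - nu) / (4.0 + x**2 + 2.0 * x * kn(0, x) / kn(1, x))
    return (3.0 + f) / (1.0 + f)

for x in (0.5, 1.0, 5.0, 50.0):
    print(f"r/l_c = {x:5.1f}  ->  scf = {scf_couple_stress(x, nu=0.3):.3f}")
# as r/l_c grows the couple-stress correction vanishes and the classical value 3 is recovered
```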
by assuming gpa , , mm , mpa , mpa , and mm , a prismatic bar subject to uniaxial tension is modeled with 3d quadratic isogeometric solid elements .the bar is 100 mm long and has mm rectangular cross section .one end of the bar is fixed in all directions and a displacement is prescribed at opposite end in the longitudinal direction .four different meshes are used with , , and elements .the microplane stress integration is performed by a voronoi integration scheme ( see appendix a ) with 66 microplane over the entire unit sphere . in order to control the location of the the localization zone , the central one tenth of the bar is weakened by reducing the tensile strength by 10 % . for comparison , simulations with also performed .as one might expect , for the bar response is mesh dependent .the load displacement curve is increasingly more brittle ( fig .[ reg_uniaxial_bar]a ) as the mesh become more refined and damage localizes always in one single element as one can see in fig .[ reg_uniaxial_bar]e where the longitudinal strain profile is plotted for a bar end displacement of 0.2 mm . figs .[ reg_uniaxial_bar]b and fig .[ reg_uniaxial_bar]f show that mesh sensitivity is only partially mitigated for . in this case , the bar response is increasingly brittle upon mesh refinement immediately after the peak .the solution is less dependent on the mesh in the softening branch . the strain profile ( fig .[ reg_uniaxial_bar]f does not show the extreme strain localization of the local simulation but still there is not a clear convergence upon mesh refinement .cusatis and zhou conducted a 1d spectral analysis of localization and proposed to regularize the solution by computing the high order stresses as where and is the localization limiter .for the 1d case and for linear softening , one can obtain a localization band equal to if . for exponential softening ,in which the softening modulus is not constant , the localization limiter must be a function of the current value of strain .in this case , and again with reference to a 1d setting , one can write with . in order to use this 1d result in the microplane formulationone can write where accounts for the presence of shear on the microplanes ( see the adopted constitutive equations above ) . from the physical point of view , this regularization strategy corresponds to adding , in parallel with the softening microplane stresses , additional non - softening microplane stresses which ensure that the phase velocity of a propagating uniaxial wave is always real .it is worth noting that such parallel coupling of softening and elastic components is also intrinsically captured by the microplane model by virtue of the kinematic constraint ( eq . 
[ eq : def_microplane_strain ] ) and the micro - macro stress relation ( eq .[ eq : stress_integration_sphere ] ) .this is the reason why partial regularization is attained with and ( figs .[ reg_uniaxial_bar]b and [ reg_uniaxial_bar]f ) figs .[ reg_uniaxial_bar]c and [ reg_uniaxial_bar]g show the obtained response by adopting the formulation in eqs .[ eq : reqhigherorderstress ] and [ eq : localization - parameter ] .as one can see , the initial softening branch is very well regularized .however , the rest of the softening curve shows an oscillatory behavior , the convergence upon mesh refinement is not clear , and a residual load appear even for relatively large strains ( stress locking ) .the strain profile does not localize in one element but , again , clear convergence can not be demonstrated .the stress locking phenomenon is due to the fact that the additional elastic stresses associated with are added during the entire simulation , since the beginning when the material has yet to soften .this is clearly unnecessary .an alternative regularization scheme which avoids this shortcoming can be formulated by introducing the regularization term on the stress increments as opposed to total stress . in this case[ eq : reqhigherorderstress ] is substituted by the following equations where for and eq .[ eq : localization - parameter ] holds for . figs .[ reg_uniaxial_bar]d and [ reg_uniaxial_bar]h shows the results obtained with this incremental formulation . as one can see , both the load displacement curve and the strain profile along the bar are fully regularized and both show a clear convergence upon mesh refinement .the localization band is larger than the theoretical 1d value of .this is due to the intrinsic 3d formulation of the microplane model which , even under uniaxial tensile macroscopic conditions , feature tensile and shear strains and stresses at each microplane orientation .the correct estimate of the localization band requires a fully 3d spectral analysis , which , however is intractable from the analytical point of view ( see discussion in ref .this paper presents the isogeometric finite element implementation of a recently developed high - order microplane ( hom ) model .the hom model was originally derived based on a discrete particle model and the resulting theory includes gradient elasticity and cosserat theory as a special cases .in addition , the hom model allows for an effective regularization of softening constitutive equations by means a simple modification of the high order stresses .the numerical simulations are carried out with an isogometric finite element characterized by 27 control points and 6 degrees of freedom , 3 displacement components and 3 rotation components , at each control point .the spatial integrals were performed with a gauss quadrature scheme with 27 gauss points and the microplane integration was carried out by a novel integration scheme based on the voronoi discretization of the unit sphere with 66 microplanes . based on the results presented in the paper the following conclusionscan be drawn : 1 . the implemented finite element with adoption of the hom model as constitutive equation performs very well in the numerical simulations of classical high order elasticity problems .this was verified by simulating bending of strain - gradient elastic cantilever beams and computing the stress concentration due to a circular hole in a cosserat elastic plate subject to tension .2 . 
the hom formulation with softening constitutive laws is only partially regularized and it shows mesh dependence in the softening branch close to the peak load .the regularization technique previously proposed bt cusatis and zhou solves the mesh dependency in the initial post peak but leads to stress locking and lack of clear convergence in the far post peak .an optimal regularization of the post peak response is obtained by introducing the regularizing term on the high order stress increments as opposed to the total high order stresses . with this approacha clear convergence can be observed for both the load versus displacement curve and the strain profile along the bar .financial support from the u.s . national science foundation ( nsf ) under grant cmmi-1435923 is gratefully acknowledged .the work of the first author was also partially supported by the scientific and technological research council of turkey ( tubitak ). 10 bazant , z. p. , and pijaudier - cabot , g. , 1988 .`` nonlocal continuum damage , localization instability and convergence '' .( 2 ) , pp .287293 .bazant , z. p. , belytschko , t. b. , and chang , t .-p . , 1984 .`` continuum theory for strain - softening '' .( 12 ) , pp .16661692 .rodrguez - ferran , a. , bennett , t. , askes , h. , and tamayo - mas , e. , 2011 .`` a general framework for softening regularisation based on gradient elasticity '' .( 9 ) , pp . 13821394 .chang , c. , askes , h. , and sluys , l. , 2002 .`` higher - order strain / higher - order stress gradient models derived from a discrete microstructure , with application to fracture '' . (17 ) , pp . 19071924 .cosserat , e. , and cosserat , f. , 1909 .`` thorie des corps dformables '' . .toupin , r. a. , 1962 .`` elastic materials with couple - stresses '' .( 1 ) , pp .385414 .eringen , a. c. , 1965 .linear theory of micropolar elasticity .tech . rep ., dtic document .eringen , a. c. , 1992 .`` vistas of nonlocal continuum physics '' . (10 ) , pp .15511565 .mindlin , r. d. , 1964 .`` micro - structure in linear elasticity '' .( 1 ) , pp .mindlin , r. d. , 1965 .`` second gradient of strain and surface - tension in linear elasticity '' .( 4 ) , pp .417438 .mindlin , r. , and eshel , n. , 1968 .`` on first strain - gradient theories in linear elasticity '' .( 1 ) , pp .109124 .altan , s. , and aifantis , e. , 1992 .`` on the structure of the mode iii crack - tip in gradient elasticity '' .( 2 ) , pp .319324 .aifantis , e. c. , 1992 .`` on the role of gradients in the localization of deformation and fracture '' .( 10 ) , pp .12791299 .altan , b. , and aifantis , e. , 1997 .`` on some aspects in the special theory of gradient elasticity '' .( 3 ) , pp .231282 .aifantis , e. , 1999 .`` strain gradient interpretation of size effects '' .( 1 - 4 ) , pp .299314 .aifantis , e. , 2003 .`` update on a class of gradient theories '' .( 3 ) , pp .259280 .askes , h. , and aifantis , e. c. , 2011 .`` gradient elasticity in statics and dynamics : an overview of formulations , length scale identification procedures , finite element implementations and new results '' .( 13 ) , pp .19621990 .misra , a. , and yang , y. , 2010 .`` micromechanical model for cohesive materials based upon pseudo - granular structure '' .( 21 ) , pp .29702981 .yang , y. , ching , w. , and misra , a. , 2011 .`` higher - order continuum theory applied to fracture simulation of nanoscale intergranular glassy film '' . ( 2 ) , pp .pijaudier - cabot , g. , and bazant , z. p. , 1987 .`` nonlocal damage theory '' .( 10 ) , pp . 15121533 .fleck , n. 
, and hutchinson , j. , 1997 .`` strain gradient plasticity '' . , pp .296361 .fleck , n. , and hutchinson , j. , 2001 .`` a reformulation of strain gradient plasticity '' . ( 10 ) , pp . 22452271 .gao , h. , huang , y. , nix , w. , and hutchinson , j. , 1999 .`` mechanism - based strain gradient plasticity - i .theory '' .( 6 ) , pp . 12391263 .huang , y. , gao , h. , nix , w. , and hutchinson , j. , 2000 .`` mechanism - based strain gradient plasticity - ii .analysis '' .( 1 ) , pp . 99 128 .yun , g. , hwang , k. , huang , y. , and wu , p. , 2005 .`` a reformulation of mechanism - based strain gradient plasticity '' .( 33 - 35 ) , pp .40114029 .huang , y. , qu , s. , hwang , k. , li , m. , and gao , h. , 2004 . `` a conventional theory of mechanism - based strain gradient plasticity '' .( 45 ) , pp . 753 782 .jiang , h. , huang , y. , zhuang , z. , and hwang , k. , 2001 .`` fracture in mechanism - based strain gradient plasticity '' .( 5 ) , pp .979993 .papanicolopulos , s .- a . , zervos , a. , and vardoulakis , i. , 2009 .`` a three - dimensional c1 finite element for gradient elasticity '' .( 10 ) , pp . 13961415 .zervos , a. , papanastasiou , p. , and vardoulakis , i. , 2001 .`` a finite element displacement formulation for gradient elastoplasticity '' . , pp .177187 .fischer , p. , klassen , m. , mergheim , j. , steinmann , p. , and mller , r. , 2011 . `` isogeometric analysis of 2d gradient elasticity '' .( 3 ) , pp .325334 .zervos , a. , papanicolopulos , s .- a . , and vardoulakis , i. , 2009 .`` two finite - element discretizations for gradient elasticity '' . ( 3 ) , pp . 203213 .amanatidou , e. , and aravas , n. , 2002 .`` mixed finite element formulations of strain - gradient elasticity problems '' .( 15 ) , pp .17231751 .phunpeng , v. , and baiz , p. , 2015 .`` mixed finite element formulations for strain - gradient elasticity problems using the fenics environment '' . , pp .ru , c. , and aifantis , e. , 1993 .`` a simple approach to solve boundary - value problems in gradient elasticity '' .( 1 - 4 ) , pp .baant , z. p. , and oh , b. h. , 1983 . microplane model for fracture analysis of concrete structures .adp001715 , dtic document .baant , z. p. , and oh , b. h. , 1985 .`` microplane model for progressive fracture of concrete and rock '' .( 4 ) , pp .559582 .caner , f. c. , baant , z. p. , et al . , 2013 .`` microplane model m7 for plain concrete : ii . calibration and verification '' .baant , z. p. , and zi , g. , 2003 .`` microplane constitutive model for porous isotropic rocks '' .( 1 ) , pp .brocca , m. , baant , z. p. , and daniel , i. m. , 2001 .`` microplane model for stiff foams and finite element analysis of sandwich failure by core indentation '' .( 44 ) , pp .81118132 .brocca , m. , brinson , l. c. , and baant , z. p. , 2002 .`` three - dimensional constitutive model for shape memory alloys based on microplane model '' .( 5 ) , pp .10511077 .beghini , a. , baant , z. p. , zhou , y. , gouirand , o. , and caner , f. c. , 2007 .`` microplane model m5f for multiaxial behavior and fracture of fiber - reinforced concrete '' .( 1 ) , pp . 6675 .cusatis , g. , beghini , a. , and bazant , z. p. , 2007 .`` spectral stiffness microplane model for quasibrittle composite laminates : i. theory '' .salviato , m. , esfahani , s. e. a. , and cusatis , g. , 2015 .`` spectral stiffness microplane model for quasibrittle textile composites '' . .baant , z. p. , and oh , b. h. , 1983 .`` crack band theory for fracture of concrete '' .( 3 ) , pp .155177 .baant , z. p. 
, and ozbolt , j. , 1990 .`` nonlocal microplane model for fracture , damage , and size effect in structures '' . ( 11 ) , pp .24852505 .baant , z. p. , and di luzio , g. , 2004 . `` nonlocal microplane model with strain - softening yield limits '' . ( 24 ) , pp . 72097240 .di luzio , g. , 2007 .`` a symmetric over - nonlocal microplane model m4 for fracture in concrete '' . ( 13 ) , pp . 44184441 .kuhl , e. , and ramm , e. , 1999 .`` simulation of strain localization with gradient enhanced damage models '' .( 1 ) , pp .176185 .kuhl , e. , ramm , e. , and de borst , r. , 2000 .`` anisotropic gradient damage with the microplane model '' . , pp .87103 .cusatis , g. , and zhou , x. , 2013 .`` high - order microplane theory for quasi - brittle materials with multiple characteristic lengths '' . .hughes , t. j. , cottrell , j. a. , and bazilevs , y. , 2005 .`` isogeometric analysis : cad , finite elements , nurbs , exact geometry and mesh refinement '' .( 39 ) , pp .41354195 .farin , g. e. , 1995 . .ak peters , ltd .nguyen , v. p. , anitescu , c. , bordas , s. p. , and rabczuk , t. , 2015 .`` isogeometric analysis : an overview and computer implementation aspects '' . , pp .89116 .auricchio , f. , calabro , f. , hughes , t. , reali , a. , and sangalli , g. , 2012 .`` a simple algorithm for obtaining nearly optimal quadrature rules for nurbs - based isogeometric analysis '' . , pp .schillinger , d. , hossain , s. j. , and hughes , t. j. , 2014 .`` reduced bzier element quadrature rules for quadratic and cubic splines in isogeometric analysis '' . , pp .hiemstra , r. , calabro , f. , schillinger , d. , and hughes , t. , 2016 .`` optimal and reduced quadrature rules for tensor product and hierarchically refined splines in isogeometric analysis '' . ( 11 ) .papargyri - beskou , s. , tsepoura , k. , polyzos , d. , and beskos , d. , 2003 .`` bending and stability analysis of gradient elastic beams '' .( 2 ) , pp .385400 .lam , d. c. c. , yang , f. , chong , a. , wang , j. , and tong , p. , 2003 .`` experiments and theory in strain gradient elasticity '' .( 8) , pp . 14771508 .eringen , a. , 1967 .theory of micropolar elasticity .tech . rep ., dtic document .mindlin , r. , 1963 .`` influence of couple - stresses on stress concentrations '' .( 1 ) , pp .pothier , a. , and rencis , j. , 1994 .`` three - dimensional finite element formulation for microelastic solids '' .( 1 ) , pp .sachio , n. , benedict , r. , and lakes , r. , 1984 .`` finite element method for orthotropic micropolar elasticity '' .( 3 ) , pp .319330 .cusatis , g. , pelessone , d. , and mencarelli , a. , 2011 .`` lattice discrete particle model ( ldpm ) for failure behavior of concrete .i : theory '' .( 9 ) , pp . 881890 .for inelastic consitutive equations , eq .[ eq : stress_integration_sphere ] can only be computed numerically . in previous microplane model work gaussian optimal integration formulaswere developed for the integration over the unit hemisphere .however , for the high order microplane model adopted in this paper , integration over the entire unit sphere is required .hence , a new integration scheme was developed based on the voronoi tessellation of the unit sphere * ? ? 
figure [ fig : vorinoi66 ] shows the discretization of a unit sphere with 66 microplanes . the corresponding areas and spherical angles are reported in table [ table : microplanes66 ] . for the discretized sphere , the energetic equivalence , which relates microplane stresses , microplane couple stresses , and microplane high order stresses to their macroscopic counterparts , can be written as where is the number of microplanes , is the area of the generic microplane and is the volume of a pyramid of unit height and base area equal to . since the macroscopic quantities are uniform inside the unit sphere and by using the kinematic constraint in eq . [ eq : def_microplane_strain ] , one obtains where is the area ratio of a microplane over the total area of all microplanes .
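the discrete integration scheme described above is not reproduced here in full ; as a rough illustration only , the following minimal python sketch ( the directions , voronoi areas and integrand are placeholders , not the authors' data ) shows how an area - weighted quadrature over a full - sphere tessellation can be coded : a surface integral is approximated by summing the integrand evaluated at each microplane normal , weighted by the area of the corresponding voronoi cell .

import numpy as np

def sphere_quadrature(normals, areas, f):
    # normals: (n, 3) array of unit vectors, one per microplane direction
    # areas:   (n,) array of voronoi-cell areas on the unit sphere
    #          (for a full-sphere tessellation they should sum to 4*pi)
    # f:       callable mapping a unit vector to a scalar
    # returns an approximation of the surface integral of f over the sphere
    values = np.array([f(n) for n in normals])
    return float(np.sum(areas * values))

# sanity check: integrating f = 1 should return the sphere area 4*pi
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normals = rng.normal(size=(66, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # equal areas are used here only as a stand-in for the true voronoi areas
    areas = np.full(66, 4.0 * np.pi / 66)
    print(sphere_quadrature(normals, areas, lambda n: 1.0))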
_ in this paper , a recently developed higher order microplane ( hom ) model for softening and localization is implemented within an isogeometric finite element framework . the hom model was derived directly from a three dimensional discrete particle model and it was shown to be associated with a high order continuum characterized by independent rotation and displacement fields . furthermore , the hom model possesses two characteristic lengths : the first is associated with the spacing of flaws in the material internal structure and related to the gradient character of the continuum ; the second is associated with the size of these flaws and related to the micro - polar character of the continuum . the displacement - based finite element implementation of this type of continua requires continuity both within the elements and at the element boundaries . this motivated the adoption of isogeometric analysis , which ensures a higher degree of smoothness and continuity . nurbs based isogeometric elements were implemented in a 3d setting , with both displacement and rotational degrees of freedom at each control point . the numerical analyses performed demonstrate the effectiveness of the proposed hom model implementation in ensuring optimal convergence in both the elastic and the softening regime . furthermore , the proposed approach allows the natural formulation of a localization limiter able to prevent strain localization and the spurious mesh sensitivity known to be a pathological issue for typical local strain - softening constitutive equations . _
* center for sustainable engineering of geological and infrastructure materials *
department of civil and environmental engineering
mccormick school of engineering and applied science
evanston , illinois 60208 , usa
* segim internal report no . 16/8 - 587i *
homogeneous and isotropic solutions of einstein s laws of gravitation do not account for inhomogeneities in the universe .the question whether they do _ on average _ is a longstanding issue that is the subject of considerable debate especially in the recent literature ( , and follow up references ; comprehensive lists may be found in the reviews , and ) .averaging the scalar parts of einstein s equations on space like hypersurfaces of a foliation of spacetime it was found that the friedmannian framework is still applicable , however , one must include additional source terms due to the backreaction of inhomogeneities on a homogeneous isotropic solution .these terms have geometrical origin and , as has been recently shown , can be represented by a minimally coupled scalar field component , a so called _ morphon field _ , if those geometrical terms are interpreted as effective sources in a cosmological model with friedmannian kinematics .this effective field can , like quintessence , other scalar field models , e.g. models motivated by higher order ricci curvature lagrangians , or string motivated effective actions , be employed to model dark energy .while the newtonian and post newtonian frameworks suppress these effective scalar field degrees of freedom by construction , , and so can not lead to an explanation of dark energy , general relativity not only offers a wider range of possible cosmologies , since it is not constrained by the assumption of euclidean or constant curvature geometry and small deviations thereof , but it is also needed to describe an effect that is strictly absent in a newtonian model and in a standard ( quasi newtonian ) perturbation approach at a fixed background .this effect is reflected by the coupling of the fluctuations to the averaged model . in other words ,fluctuations may be small , but measured relative to a non friedmannian background , and the evolution of this latter is most clearly expressed in terms of the evolution of effective geometrical properties such as the averaged scalar curvature we are considering in this paper ( see for detailed explanations ) . speaking in favour of an averaged cosmology , it certainly enjoys the more physical status of incorporating inhomogeneities , and the clearcut fact that the effect of these inhomogeneities can be modelled by a scalar field speaks , by william of ockham s _ razor _ , against introducing a dominating cosmological constant or an extra fundamental scalar field that is known to violate energy conditions in order to explain observational data . 
from this point of view onewould also conclude that perturbation theory , if formulated at the background of a flrw model with a dominating cosmological constant or an external scalar field source , would also not account for the physics behind the dark energy component .on the other hand , the flrw cosmology provides a remarkably successful _ fitting model _ to a large number of observational data .as already mentioned , the price to pay is an unclear physical origin of either a dominating cosmological constant or an extra scalar field source that dominates recently .given the fact that also a large amount of sources in the form of dark matter has not yet been detected in ( non gravitational ) experiments , the standard model parametrizes an overwhelming fraction ( 95 percent ) of physical ignorance .the generally held view , however , is that the flrw cosmology indeed describes the _ physical universe _ on average , which if true in turn asks for either a modification of the laws of gravitation , or the postulation of the above mentioned dark sources of yet unknown origin . moreover, the widespread use of the wording ` fitting model ' is just name dropping unless we devise a way to explicitly construct a smooth metric out of the inhomogeneous distributions of matter and curvature . in this more refined sense the flrw cosmology is not a fitting model , rather _ it furnishes a conjecture on integral properties of the physical universe _ that , as we believe , has to be first verified or falsified before more exotic vehicles of explanation are invoked. both , the flrw cosmology and a backreaction driven averaged cosmology are candidates for the description of these integral properties , and in this paper we shall estimate these properties from regionally ( up to , say , 100 mpc ) observable quantities . for the flrw cosmology in the form of the _ concordance model _ , , , the physical model is described by on average vanishing scalar curvature , while for a backreaction driven cosmology , if we expect that dark energy can be fully routed back to inhomogeneities , the issue appears open .however , the consequences of a backreaction driven model have been qualitatively , and to the extent we need also quantitatively , exploited in a number of recent papers ( see and references therein ) . for example , since a quantitative estimation of kinematical backreaction depends on specifying an evolution model for the inhomogeneities , the analysis of exact solutions like the lematre tolman bondi solution ( see , e.g. , , , , , and the reference lists in and ) , or scaling laws that satisfy the averaged equations have been investigated. _ consistency _ of an explanation of dark energy with the framework of the averaged equations has been demonstrated for both globally homogeneous cosmologies ( an assumption that we also adopt in the present paper ) and , alternatively , globally inhomogeneous cosmologies .although a quantitative evaluation of the backreaction effect in a generic inhomogeneous model is still to come , we already know a few features of a ` working ' model , which are enough for our considerations. we do not aim at investigating a ` fitting model ' ( see figure 1 ) for the present day universe in the strict sense mentioned above ; this is the subject of ongoing work .we only remark here that any model for the evolution of the averaged variables can be subjected to a smoothing procedure in order to find the corresponding smooth , i.e. 
constant curvature fitting model that we may call a _flrw template_. we would then consider a hypersurface at a given instant of time ; we can not expect that the time evolved inhomogeneous model can be mapped to a constant curvature model .the above two candidates would provide different starting points , i.e. the initial data of a smoothing procedure for a given hypersurface are different .it was recently argued that the averaged universe model ( now both kinematically and geometrically averaged ) could be represented by an effective flrw metric with a time scaling factor that differs from the usual global scale factor of a homogeneous isotropic model and which is determined by the kinematically averaged einstein equations .this ansatz for an effective metric assumes that smoothing ( i.e. spatial rescaling ) the actual matter and curvature inhomogeneities does not leave traces in the smoothed out flrw template metric _at all times_. in a forthcoming paper we are going to analyze this assumption in detail employing previous results on an explicit smoothing algorithm .we emphasize that we are entitled to investigate integral properties of physical variables on a given hypersurface _ without _ entering the different question of whether this hypersurface ( if actively deformed ) can be effectively described by a ` best fit ' constant curvature geometry. a rough guide that helps to understand the motivation of the present work is the following . for small inhomogeneities in the matter and curvature distributions , an approximate description of the cosmic evolution _ on average _ by a homogeneous solution of einstein s laws of gravitation may be fine , but for the late universe featuring strong inhomogeneities , the validity of this approximation is not evident .we are going to address and justify this remark in the present work _ on the assumptions _ that ( i ) a homogeneous model satisfactorily describes the early stages of the matter dominated epoch and ( ii ) there exists a scale of homogeneity .we proceed as follows . in section [ section : phenomenology ]we look at the present day universe and device a three scale model for it , where we hope that both readers , those who advocate the standard picture of the concordance model and those who advocate a backreaction driven cosmology , agree .then , in section [ section : multiscaling ] , we implement the details of this multi scale picture and reduce the determining sources to those that are in principle measurable on regional ( i.e. up to 100 mpc ) scales .detailed estimates of the kinematical backreaction and averaged scalar curvature follow in section [ section : q ] , where we also provide simplified estimates in the form of robust bounds by , e.g. , restricting the measurement of fluctuations to a comoving frame . 
in section [ section : conclusions ] we confront this latter result with the different assumptions on the actual averaged scalar curvature of the present day universe .observing the universe at low redshift returns the impression of large volumes that are almost devoid of any matter ; a network of large scale structure surrounds these _ voids _ that seem to be hierarchically nested and their sizes , depending on their definition , range from regional voids with less than 10 percent galaxy number content of the order of ten megaparsecs , , to relatively thinned out regions of larger number density that , if smoothed , would span considerable volume fractions of currently available large scale structure surveys .while the overall volume of the observable universe seems to be dominated by underdense regions ( the particular value of the volume fraction of underdense regions being dependent on the threshold of this underdensity ) , the small fraction of the volume hosting overdensities ( groups , clusters and superclusters of galaxies ) is itself sparsely populated by luminous matter and , this latter , appears as a highly nonlinear ` spiky ' distribution .the phenomenological impression that matter apparently occupies a tiny fraction of space _ at all length scales _ could be questioned by saying that _ dark matter _ might be more smoothly distributed .also , clusters contain a large amount of intergalactic gas , there are non shining baryons , etc ., so that the notion of an ` underdense region ' has to be treated with care .however , simulations of _ cold dark matter _ , assumed to rule the formation of large scale structure , also demonstrate that voids dominate the present day distribution . again depending on the particular definition of a void, their fraction of volume occupation could , to give a rough value , be conservatively quoted as being 60 percent in standard _ dark matter _ simulations counting strong underdensities , and is certainly larger for more densly populated but still underdense regions .thinking in terms of a _ homogeneous model _ of the universe ( not necessarily a homogeneous _ solution _ ) , i.e. a distribution of matter that on average does not depend on scale beyond a certain large scale ( the scale of homogeneity ) , one would paint the picture of a redistribution of matter due to nonlinear gravitational instability . in a newtonian simulation ( where an eventually constant curvature of a flrw spacetime is factored out on a periodic scale )this would happen in such a way that , due to the preservation of the overall material mass , an _ equipartition _ of overdense small volume regions and underdense large volume regions with respect to the mass content results , so that a sensible spatial average of the matter distribution must comply with the original value of the homogeneous density .in other words , the assumption that a volume averaged distribution of matter would be compatible with a homogeneous model of the same average density seems to be a robust assumption , especially if inhomogeneities are dynamically generated out of an almost homogeneous distribution . 
this picture is true in newtonian simulations , but for a subtle reason : although the time evolution of the averaged density as a result of _ non commutativity _ of evolution ( time scaling ) and spatial averaging gives rise to kinematical backreaction , the periodic architecture and the euclidean geometry of a newtonian cosmology imply that these additional terms have to vanish ( see for a detailed discussion of all these issues and proofs ) . in a general relativistic framework this picture is in general false , even at one instant of time : the reason is that a riemannian volume average incorporates the volume measure that is different for negatively and positively curved domains of averaging , and curvature does not obey an equipartition law ( see figure 2 ) .note also that a volume averaging on a riemannian 3surface could , even on the largest scale , introduce a _ volume effect _ in the comparison of the volume of a constant curvature space and the actual volume of an inhomogeneous hypersurface ( see and for the definition and discussion of the _ volume effect _ ; see also hellaby s volume matching example ) .the standard model ( but also the recent suggestion by ) implies that there is no such effect on large scales .it is illustrative to think of 2surfaces , where curvature inhomogeneities always add up in the calculation of the total surface , so that there certainly is a large _ 2volume effect _ due to surface roughening , but for three dimensional manifolds , negative and positive curvature contribute with opposite signs , and so the _3volume effect _ can not easily be quantified .given the above remarks , one is no longer tempted to draw a picture of equipartition for the intrinsic curvature distribution .we shall , in this paper , not discuss the time evolution of the scalar curvature ( see and for detailed illustrations and discussions ) , large time asymptotics , the role of a constant curvature parameter in the fit to observations , or curvature models ( that are all related subjects of interest ) , but instead contemplate on the distribution of curvature at one given instant of time . here, we demonstrate that the picture we would wish to establish in the concordance model , namely that the scalar curvature would average out on some large scale of homogeneity , is naive in a number of ways , and we shall implement the geometrical aspects of such a picture in section [ section : multiscaling ] .obviously , this issue can not be addressed with newtonian simulations ; the curvature degree of freedom is simply absent .we know that in riemannian geometry negatively curved regions have a volume that is larger than the corresponding volume in a euclidean space section , and positively curved regions have a smaller volume , thus enhancing the actual volume fraction of underdense regions . we are now going to develop a multi scale picture of the present day universe that is useful in the context of quantitative estimates , and that also helps to quantify multi scale dynamical models , e.g. 
the one proposed by wiltshire and his collaborators , but it relates as well to any model that involves considerations of structures on different spatial scales .we are going to introduce three spatial scales .first , a _ scale of homogeneity _ that could be identified with the size of a compact universe model , but need at least be larger than the largest observed typical structures .second , a scale that is as large as a typical void ( within a range of values that depends on our definition of a devoid region ) , and third , a scale that is large enough to host typical bound large scale objects such as a rich cluster of galaxies . in observational cosmologywe strictly have for the first we may also think of a length of the order of the hubble scale .the lower bound on this scale not only depends on the statistical measure with respect to which one considers the matter and curvature distributions as being _ homogeneous _ , but also on the concept of _ homogeneity _ that we have in mind . we here imply that averages _ of any variable _ beyond this scale will in practice no longer depend on scale , while generically , this may not happen at all .we do not claim here that this is indeed true , but we adopt this point of view in order to have a more transparent way of comparison with the standard model of cosmology .the assumption of existence of a scale of homogeneity may be a strong hypothesis ; it is our choice of restricting the generality of the problem ., the scale of a typical void , and the scale hosting typical bound large scale objects , _ e.g. _ a rich cluster of galaxies . ]according to what has been said above , we are entitled to assign different properties to the different scales , and we shall also sometimes idealize these properties in order to construct a simple but flexible model that reflects the phenomenology described above in terms of a small set of parameters .we start with an overview of the basic ingredients of our model and postpone details to section [ section : multiscaling ] .we employ the hamiltonian constraint ( see ( [ constraints ] ) below ) , spatially averaged on a given domain ( scale ) that covers a union of underdense regions ( scale ) and occupied overdense regions ( scale ) .we write the hamiltonian constraint averaged over the first scale ( for details see below and , e.g. ) : with the total restmass .the averaged spatial scalar curvature is denoted by , abbreviates the averaged rate of expansion in terms of a _ volume hubble rate _ , and the _ kinematical backreaction term _ encodes inhomogeneities in the extrinsic curvature distribution ( or the kinematical variables ) ; it is detailed in section [ section : multiscaling ] ( see figures 3 and 4 ) . . ] now , let us consider for illustrative purposes an idealization on the two other scales ( in our concrete calculations later we shall indicate clearly when we make use of it ) : we require the volume hubble expansion to be subdominant in matter dominated regions and , on the other hand , the averaged density to be subdominant in devoid regions . in the first case , an expansion or contraction would contribute negatively to the averaged scalar curvature and so would , e.g. , enhance a negative averaged curvature ; in the second case , the presence of a low averaged density would contribute positively . 
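the displayed equation referred to above as ( [ averagehamiltond ] ) did not survive extraction ; in the standard notation of the averaging formalism used in this paper it presumably takes the form ( a hedged reconstruction , not a quotation )

6\,H_{\cal D}^{2} \;=\; 16\pi G\,\langle\varrho\rangle_{\cal D} \;+\; 2\Lambda \;-\; \langle{\cal R}\rangle_{\cal D} \;-\; {\cal Q}_{\cal D}\;, \qquad \langle\varrho\rangle_{\cal D} = \frac{M_{\cal D}}{V_{\cal D}}\;,\qquad H_{\cal D} = \frac{1}{3}\,\langle\theta\rangle_{\cal D}\;,

with M_{\cal D} the total restmass in the domain and V_{\cal D} its riemannian volume .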
we can therefore reasonably expect that , whether we use a strong idealization ( see below ) or a weaker distinction between over and underdense regions , the overall argument based on the existence of such a partitioning enjoys some robustness .we shall also be able to condense our assumption on the partition between over and underdense regions by introducing a parameter for the occupied volume fraction , , where denotes the total volume of the union of occupied regions ; its value may be chosen more conservatively to weaken an eventually unrealistic idealization . at any rate , we shall keep our calculations as general as possible before we eventually invoke an idealization for illustrative purposes ; it is only this latter quantity that parametrizes the ( geometrical ) volume partitioning of the distributions of inhomogeneities .note that , if we would strictly idealize voids to have and and matter dominated regions to have for , where is some critical scale ( see section [ section : multiscaling ] for more details on this transitory scale and the controling of this idealization ) , then we would have : and together with this simplified view is useful as a rough guide on the sign of the averaged scalar curvature : consider for example the case where the kinematical backreaction terms in the above equations are quantitatively negligible , and let us put ; we then infer that the averaged scalar curvature must be _ negative _ on scale and _ positive _ on scale , what obviously complies with what we expect . a non vanishing is employed in the concordance model to compensate the negative curvature . for our discussionwe introduce a set of adimensional average characteristics that we define for the largest scale : we shall , henceforth , call these characteristics ` parameters ' , but the reader should keep in mind that indexed variables are scale dependent functionals . expressed through these parameters the averaged hamiltonian constraint( [ averagehamiltond ] ) on the scale assumes the form : in this set , the averaged scalar curvature parameter and the kinematical backreaction parameter are directly expressed through and , respectively . in order to compare this pair of parameters with the ` friedmannian constant curvature parameter ' that is the only curvature contribution in the standard model , we can alternatively introduce the pair ( see , e.g. , ) being related to the previous parameters by .for any of the smaller domains we discuss the corresponding adimensional parameters by dividing averages on the domain always by to avoid confusion .this will also avoid the pathological and useless definition of the cosmological parameters , e.g. on the domains , where they are actually undefined in a strict idealization , since is assumed to vanish . to give an illustration for the scale dependence , notethat , in the strictly idealized case , can be traced back to the average density in matter dominated regions , , and thus , inevitably , the density parameter constructed with an observed on the scale and divided by the global hubble factor can not be extrapolated to the global parameter .for example , a value today of for this parameter would , for a volume fraction of matter dominated regions of , result in , i.e. 
a substantially smaller value that compensates the missing matter in the regions , if they are idealized to be empty .note in this context that a smaller mass density parameter on the global scale would also imply a smaller value of the necessary amount of backreaction .this can be seen by considering the _ volume deceleration parameter _ , which , for , shows that decreasing the matter density parameter would also decrease the necessary backreaction in order to find , e.g. _ volume acceleration _ ( ) .let us now discuss one of the motivations of the present work related to the dark energy debate . to this end, we have to explain why a substantial negative averaged scalar curvature is needed , so that dark energy could be partly or completely routed back to inhomogeneities , i.e. in the above equations , and , say , also as soon as we go to numerical estimates , indicating the value of the corresponding parameter _ today_. note that , in the averaged models , we may weaken this constraint , since we can allow for a scale dependence of this parameter in contrast to the situation in the standard model ; see and for related discussions . ] in conformity with the standard model of cosmology .first , by _ substantial _ we mean a cosmologically large negative curvature , i.e. expressed in terms of the cosmological parameters .this parameter is exactly equal to , i.e. the averaged curvature is of the order of minus the square of the hubble parameter , if there are no matter and no expansion and shear fluctuations .this applies to a simplified model of a void .second , in order to explain dark energy , the sum of the curvature and backreaction parameters has to add up to mimic a cosmological constant parameter of the order of in the case of a full compensation of the cosmological constant today , where we have adopted the assumption of the concordance model of .a conservative scenario has been quantified in where it was assumed that the universe at the cosmic microwave background epoch is described by a weakly perturbed flrw model and the amount of early dark energy is negligible .the resulting scenario that would create enough dark energy features a strong curvature evolution from a negligible value to , while the backreaction parameter must evolve from a negligible value to , i.e. it has to be dominated by expansion fluctuations on the scale today .this scenario implies that the averaged scalar curvature must be close to the value of our simple void model , while at early times the averaged curvature parameter was compatible with zero . speaking in terms of a _ morphon field _ , where the scalar curvature is associated with the potential of a scalar field , this scenario corresponds to the _ phantom quintessence _sector ( with negative kinetic energy of the scalar field ) , in which extrinsic curvature fluctuations ( _ kinematical backreaction _ ) grow slightly .we shall adopt this ( present ) value of the curvature parameter in our analysis as an extreme candidate for a backreaction driven cosmology .finally , we wish to emphasize that we are looking at spatial integral properties of inhomogeneous _ models _ of the universe at the present time. 
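the defining relations of the adimensional characteristics introduced above were lost in extraction ; in the convention standard for this framework they presumably read

\Omega_m^{\cal D} := \frac{8\pi G\,\langle\varrho\rangle_{\cal D}}{3 H_{\cal D}^{2}}\;,\qquad \Omega_\Lambda^{\cal D} := \frac{\Lambda}{3 H_{\cal D}^{2}}\;,\qquad \Omega_{\cal R}^{\cal D} := -\,\frac{\langle{\cal R}\rangle_{\cal D}}{6 H_{\cal D}^{2}}\;,\qquad \Omega_{\cal Q}^{\cal D} := -\,\frac{{\cal Q}_{\cal D}}{6 H_{\cal D}^{2}}\;,

so that the averaged hamiltonian constraint becomes \Omega_m^{\cal D} + \Omega_\Lambda^{\cal D} + \Omega_{\cal R}^{\cal D} + \Omega_{\cal Q}^{\cal D} = 1 . on this reading , the volume deceleration parameter mentioned above would be q^{\cal D} = \frac{1}{2}\Omega_m^{\cal D} + 2\,\Omega_{\cal Q}^{\cal D} - \Omega_\Lambda^{\cal D} , so that for vanishing cosmological constant volume acceleration ( q^{\cal D} < 0 ) requires \Omega_{\cal Q}^{\cal D} < -\,\Omega_m^{\cal D}/4 , which makes explicit why a smaller global density parameter lowers the amount of backreaction needed for acceleration .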
this study does not include the important questions of ( i ) how the present day structure we are looking at evolved dynamically out of an earlier state , and ( ii ) how these integral properties would relate to deep observations that necessarily involve considerations of the inhomogeneous lightcone and observable averages along the lightcone .we shall later propose strategies for the determination of these integral properties , also from observations . as a rule of thumb, observational results may be directly used in a shallow redshift interval , where the lightcone effect would be subdominant ; galaxy catalogues as they are compared with the spatial distribution of fluctuations `` at the present time '' in simulations is an example ) .our main result is a formula for the spatially averaged scalar curvature involving quantities that are all measurable on regional ( i.e. up to 100 mpc ) scales , and it is therefore accessible by a shallow redshift interval . the key results of the following , necessarily technical multi scale analysis are eqs .( [ resultq ] ) and ( [ resultr ] ) , and their discussion thereafter . in particular , disregarding the contribution of gravitational radiation of cosmological origin , the adimensional backreaction term can be written as ( see ( [ resultq ] ) ) : \ , \frac{l_{j , j}^{2}}{l_{\delta h_{{{\cal m}}}}^{2}}\,\left ( \frac{(8\pi g)^{2}\,\left\langle \varrho \right\rangle^{2 } _ { \cal m } \ , l_{\delta h_{{{\cal m}}}}^{2}}{h_{\cal d } ^{\,2}}\right ) \right ] \nonumber\\ \nonumber\\ \fl\qquad + ( 1-\lambda _ { { { \cal m}}})\ , \left[\left(1 - 8\,\frac{l_{\nabla\theta,\nabla\theta}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\right)\,\left(\frac{\delta ^{2}h_{{{\cal e}}}}{h_{\cal d } ^{\,2}}\right)-2\,v^{2}_{\varrho } [ { { \cal e}}]\ , \frac{l_{j , j}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\,\left ( \frac{(8\pi g)^{2}\,\left\langle \varrho \right\rangle^{2 } _ { \cal e } \ , l_{\delta h_{{{\cal e}}}}^{2}}{h_{\cal d } ^{\,2}}\right ) \right ] \nonumber\\\nonumber\\ \fl\qquad + { \lambda_{{\cal m}}}(1-\lambda_{{\cal m}})\,\frac{\left(h_{{\cal e}}-h_{{\cal m}}\right)^{2}}{h_{{\cal d}}^{2 } } \pm \,\,2v^{2}_{\varrho } [ { { \cal d}}]\,\frac{l_{\nabla\theta , j}^{2}}{l_{\delta h_{{{\cal d}}}}^{2}}\ , \left ( \frac{32\pi g\,\left\langle \varrho \right\rangle _ { \cal d } \,\;l_{\delta h_{{{\cal d}}}}\ , \left(\delta ^{2}h_{{\cal d}}\right)^{\frac{1}{2}}}{h_{\cal d } ^{\,2 } } \right ) \nonumber\;,\\ \label{mform}\end{aligned}\ ] ] together with the formula for the adimensional averaged scalar curvature term : where the rough meaning of the various terms is _ pictorially _ described in figure 5 . the following sections also prepare future work , e.g. 
on the determination of an `` optimal frame '' in which the variables are to be averaged in an evolving cosmological hypersurface .in order to discuss the geometric structure of spatial curvature and of its fluctuations in observational cosmology , let us recall the essential steps required for constructing a cosmological spacetime out of the evolution of a riemannian three dimensional manifold , which we assume for simplicity to be closed and without boundary .note , however , that such a condition is not essential for our analysis and in due course it will be substantially relaxed .the geometry and the matter content of such a three manifold is described by a suitable set of initial data ( latin indices run through ; we adopt the summation convention ) subjected to the energy ( hamiltonian ) and momentum ( codazzi ) constraints : where is the cosmological constant , , and where is the scalar curvature of the riemannian metric ; the covariant spatial derivative with respect to is denoted by .if such a set of admissible data is propagated according to the evolutive part of einstein s equations ( see figure 6 ) , then the symmetric tensor field can be interpreted as the extrinsic curvature ( or second fundamental form ) of the embedding of in the spacetime resulting from the evolution of , whereas and are , respectively , identified with the mass density and the momentum density of the material self gravitating sources on .for short we shall call the _ physical space _ associated with the riemannian manifold . inwhat follows we shall make no use of the evolutive part of einstein s equations and accordingly we do not explicitly write it down .let us recall the various hierarchical length scales involved that we have described in section [ section : phenomenology ] , now associated with the curvature structure on the physical space : ( i ) the length scale defined by a spatial region over which can be viewed as describing to a good approximation a homogeneous and isotropic state ( being not necessarily a homogeneous isotropic solution of einstein s equations ) ; ( ii ) the length scales associated with the smaller domains over which the typical cosmological inhomogeneities regionally dominate , with an alternance of underdense regions and matter dominated regions , with , , , , for all and for all .we denoted these latter length scales by and respectively .for the former we sometimes say simply ` voids ' or ` empty regions ' , but our calculations are kept more general ( see figure 7 ) . in order to discuss the implications generated by such a partitioning of ,let us rewrite the hamiltonian and the momentum constraints ( [ constraints ] ) over as where is the local _ rate of expansion _ , and the square of the local _ rate of shear _ , with being the shear tensor defined by .we wish to average ( [ hamr ] ) over the region and discuss to what extent such an averaged constraint characterizes the sign of the curvature .let us observe that on the scale of near homogeneity the averaged hamiltonian constraint ( [ hamr ] ) is _ assumed _ to have the structure ( [ averagehamiltond ] ) where is the ( average ) hubble parameter on the scale .this assumption characterizes the kinematical backreaction term as our strategy will be to express both and in terms of the _ typical _ local fluctuations of , , and in the voids and in the matter dominated regions ( see figure 8) . 
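the constraint equations referred to as ( [ constraints ] ) and ( [ hamr ] ) , as well as the characterization of the kinematical backreaction term , are displayed equations that were lost in extraction ; presumably they are the standard ones ,

{\cal R} + \frac{2}{3}\,\theta^{2} - 2\sigma^{2} = 16\pi G\,\varrho + 2\Lambda\;,\qquad \nabla_i\left( K^{i}_{\ j} - \delta^{i}_{\ j}\,{\rm tr}\,K \right) = 8\pi G\,J_{j}\;,

and , for the backreaction on a domain {\cal D} ,

{\cal Q}_{\cal D} := \frac{2}{3}\left( \langle\theta^{2}\rangle_{\cal D} - \langle\theta\rangle_{\cal D}^{2} \right) - 2\,\langle\sigma^{2}\rangle_{\cal D}\;,

i.e. the ( non negative ) expansion fluctuations compete against the ( likewise non negative ) averaged shear term .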
as a preliminary step ,let us consider the average of a generic scalar valued function over , where , and , the riemannian volume of .if we partition according to , where all individual regions are disjoint in the partitioning , then we can rewrite as where since both the averages , and the corresponding regions and may fluctuate in value and size over the set of underdense and overdense regions , it is useful to introduce the weighted averages of and of , _ viz . _ and where .since , if we now introduce the adimensional parameter we can write ( [ splitav ] ) equivalently as applying in turn this formula to the volume average of the scalar curvature , then we simply get which , according to ( [ hamr ] ) implies + \lambda _ { { \cal m}}\left [ 16\pi g\langle \varrho \rangle_{{\cal m}}+2\langle \sigma ^{2}\rangle _ { { \cal m}}- \frac{2}{3}\langle { \theta } ^{2}\rangle _ { { \cal m}}\right ] \;.\end{aligned}\ ] ] at this stage , we can look at the regional hubble parameters and assigned to the empty and matter dominated regions and their associated mean square fluctuations and according to one easily computes + \lambda _ { { \cal m}}\left [ 16\pi g\langle \varrho \rangle _ { { \cal m}}-6h_{{\cal m}}^{2}-{\cal q}_{{\cal m}}\right ] , \label{r}\ ] ] where and denote the _ kinematical backreaction terms _ on the respective scales : and if we insert ( [ r ] ) into the expression ( [ averageham ] ) characterizing we get since a direct computation provides or , more explicitly , the above formulae for the averaged curvature and kinematical backreaction are _ general _ for our choice of a partitioning into _ overdense _ and _ underdense _ domains . it is important to observe that in the factorization ( [ hfactor ] ) both and are effectively functions of , ( this is simply a fact coming from the definition of the average factorization we have used ) , and for discussing the meaning of the expressions we obtained for the kinematical backreaction and scalar curvature it is often useful to assume a reasonable scaling for and .our basic understanding is that , on small scales , the local dynamics of gravitationally bound matter will obliterate , whereas , if matter happens to be distributed over larger and larger domains , then it will more and more participate in the global averaged dynamics . by continuity, there should be a scale marking a significant transition between these two regimes .clearly , one can elaborate on the most appropriate model for such a transition , but the one described below , basically a gaussian modeling , is quite general and has the merit of avoiding sudden jumps in the behavior of .also , it can be a natural starting point for a more elaborate analysis .thus , we wish to make an idealization by assuming the `` stable clustering hypethesis '' to hold on the matter dominated regions , , it will only hold up to a critical scale , whereas for the quantity smoothly increases up to . to achieve this we model the scale dependence of according to \quad { \rm for}\quad \lambda^{cr}_{{\cal m}}\leq\lambda_{{\cal m}}\leq 1\;\ ; , \label{scalingmodel1}\ ] ] and itis easily verified that is a smooth ( ) function of ] and ] and ],\leq 1 $ ] . 
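several of the displayed relations in this passage ( the partition formula , the regional hubble rates and their fluctuations , and the result of the ` direct computation ' ) were lost in extraction ; a hedged reconstruction consistent with the terms that do survive in the displayed results below is

\langle f\rangle_{\cal D} = \lambda_{\cal M}\,\langle f\rangle_{\cal M} + (1-\lambda_{\cal M})\,\langle f\rangle_{\cal E}\;,\qquad \lambda_{\cal M} := \frac{|{\cal D}_{\cal M}|_{g}}{|{\cal D}|_{g}}\;,

H_{\cal M} := \frac{1}{3}\langle\theta\rangle_{\cal M}\;,\qquad \delta^{2}H_{\cal M} := \frac{1}{9}\left( \langle\theta^{2}\rangle_{\cal M} - \langle\theta\rangle_{\cal M}^{2} \right)\;,

( and analogously on {\cal E} ) , from which the splitting of the backreaction follows by applying the partition formula to \langle\theta\rangle and \langle\theta^{2}\rangle :

{\cal Q}_{\cal D} = \lambda_{\cal M}\,{\cal Q}_{\cal M} + (1-\lambda_{\cal M})\,{\cal Q}_{\cal E} + 6\,\lambda_{\cal M}(1-\lambda_{\cal M})\left( H_{\cal M} - H_{\cal E} \right)^{2}\;;

dividing by 6 H_{\cal D}^{2} reproduces the \lambda_{\cal M}(1-\lambda_{\cal M})(H_{\cal E}-H_{\cal M})^{2}/H_{\cal D}^{2} terms that appear explicitly in the displayed formulae of this section .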
if we normalize this latter expression by then we can define the ( squared ) typical velocity of matter in the region according to \,:= \frac{\left\langle \left|j\right| ^{2}\right\rangle _ { \cal d } } { \left\langle \varrho \right\rangle^{2 } _{ \cal d } } = \,\lambda _ { { { \cal m}}}\,v^{2}_{\varrho } [ { { \cal m}}]\,\frac{\left\langle \varrho \right\rangle^{2 } _ { \cal m } } { \left\langle \varrho \right\rangle^{2 } _ { \cal d } } + ( 1-\lambda _ { { { \cal m}}})\,v^{2}_{\varrho } [ { { \cal e}}]\,\frac{\left\langle \varrho \right\rangle^{2 } _ { \cal e } } { \left\langle \varrho \right\rangle^{2 } _ { \cal d } } \;.\ ] ] inserting these parametrizations into ( [ mcd ] ) , and normalizing to the squared effective hubble parameter , we eventually get \ , \frac{l_{j , j}^{2}}{l_{\delta h_{{{\cal m}}}}^{2}}\,\left ( \frac{(8\pi g)^{2}\,\left\langle \varrho \right\rangle^{2 } _ { \calm } \ , l_{\delta h_{{{\cal m}}}}^{2}}{h_{\cal d } ^{\,2}}\right ) \right ] \label{mattshear}\\ \nonumber\\ + ( 1-\lambda _ { { { \cal m}}})\ , \left[4\,\frac{l_{\nabla\theta,\nabla\theta}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\,\left(\frac{\delta ^{2}h_{{{\cal e}}}}{h_{\cal d } ^{\,2}}\right)+\,v^{2}_{\varrho } [ { { \cal e}}]\ , \frac{l_{j , j}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\,\left ( \frac{(8\pi g)^{2}\,\left\langle \varrho \right\rangle^{2 } _ { \cal e } \ , l_{\delta h_{{{\cal e}}}}^{2}}{h_{\cal d } ^{\,2}}\right ) \right ] \nonumber\\ \nonumber\\ \mp \,\,v^{2}_{\varrho } [ { { \cal d}}]\,\frac{l_{\nabla\theta , j}^{2}}{l_{\delta h_{{{\cal d}}}}^{2}}\ , \left ( \frac{32\pi g\,\left\langle \varrho \right\rangle _ { \cal d } \,\;l_{\delta h_{{{\cal d}}}}\ , \left(\delta ^{2}h_{{\cal d}}\right)^{\frac{1}{2}}}{h_{\cal d } ^{\,2 } } \right ) \nonumber\;.\end{aligned}\ ] ] at this point , it is important to stress that the norm of the transverse part , is not determined by ( [ vectw ] ) . the term is associated with the presence of a non trivial initial rate of variation of the conformal geometry of , _i.e. _ , with initial data describing the presence of gravitational radiation of cosmological origin in ( see figure 18 ) .we define the total energy density of gravitational waves in by where is the density formally associated with the `` critical density '' of the standard friedmannian model ( in the region of near homogeneity ) .the ratio describing the relative strength of the energy density of gravitational waves with respect to the critical density , is the quantity conventionally used in cosmology for describing gravitational waves of cosmological origin .thus , we parametrize the shear term according to equipped with the above estimates , we now consider the backreaction term , ( divided by ) . 
from ( [ mexpl ] )we finally arrive at the key result of this paper : \ , \frac{l_{j , j}^{2}}{l_{\delta h_{{{\cal m}}}}^{2}}\,\left ( \frac{(8\pi g)^{2}\,\left\langle \varrho \right\rangle^{2 } _ { \cal m } \ , l_{\delta h_{{{\cal m}}}}^{2}}{h_{\cal d } ^{\,2}}\right ) \right ] \nonumber\\ \nonumber\\ \fl + ( 1-\lambda _ { { { \cal m}}})\ , \left[\left(1 - 8\,\frac{l_{\nabla\theta,\nabla\theta}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\right)\,\left(\frac{\delta ^{2}h_{{{\cal e}}}}{h_{\cal d } ^{\,2}}\right)-2\,v^{2}_{\varrho } [ { { \cal e}}]\ , \frac{l_{j , j}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\,\left ( \frac{(8\pi g)^{2}\,\left\langle \varrho \right\rangle^{2 } _ { \cal e } \ , l_{\delta h_{{{\cal e}}}}^{2}}{h_{\cal d } ^{\,2}}\right ) \right ] \nonumber\\ \nonumber\\ \fl + { \lambda_{{\cal m}}}(1-\lambda_{{\cal m}})\,\frac{\left(h_{{\cal e}}-h_{{\cal m}}\right)^{2}}{h_{{\cal d}}^{2 } } \pm \,\,2v^{2}_{\varrho } [ { { \cal d}}]\,\frac{l_{\nabla\theta , j}^{2}}{l_{\delta h_{{{\cal d}}}}^{2}}\ , \left ( \frac{32\pi g\,\left\langle \varrho \right\rangle _ { \cal d } \,\;l_{\delta h_{{{\cal d}}}}\ , \left(\delta ^{2}h_{{\cal d}}\right)^{\frac{1}{2}}}{h_{\cal d } ^{\,2 } } \right ) \nonumber\;.\\\end{aligned}\ ] ] together with the formula for the averaged scalar curvature ( see figure 19 ) : we are now going to invoke approximate assumptions in order to illustrate the result ([resultq ] , [ resultr]) . at this stagethe reader may critically compare our set of assumptions with the assumptions he would wish to make . at any rate , the following discussion employs simplifying assumptions and a more refined analysis has to take care of the neglected terms . at the stage we arrived at with the above formulae for the backreaction term and the averaged scalar curvature , it is clear what is the potential contribution of the various terms involved . in particular , if , in line with the above analysis , we now assume that : + _ ( i ) _ = 0\;;\ ; v_{\varrho } [ { { \cal m } } ] = 0 \;\;,\ ] ] namely that we are describing the region of near homogeneity in a frame comoving with matter ; + _ ( ii ) _ _ i.e. _ , absence of significant gravitational radiation of cosmological origin , then we finally obtain for the total kinematical backreaction parameter ( [ idealizedq ] ) : \,\frac{\delta ^{2}h_{{{\cal m}}}}{h_{\cal d } ^{\,2}}+(1-\lambda_{{\cal m}})\ , \left[8\,\left(\frac{l_{\nabla\theta,\nabla\theta}^{2}}{l_{\delta h_{{{\cal e}}}}^{2}}\right)-1\right]\,\frac{\delta ^{2}h_{{{\cal e}}}}{h_{\cal d } ^{\,2}}\nonumber\\ -\lambda_{{\cal m}}(1-\lambda_{{\cal m } } ) \frac{(h_{{\cal e}}- h_{{\cal m}})^2 } { h_{{\cal d}}^2 } \;.\end{aligned}\ ] ] this provides a reliable estimate in terms of the natural physical parameters involved . as expected , the shear terms ( responsible for the factors proportional to and ) tend to attenuate the overall negative contribution of the fluctuations of the hubble parameter . in a self gravitating system with long ranged interactionsit is difficult to argue , if the current correlation length is smaller , larger or comparable to the expansion fluctuation length . on the one hand ,the above estimate indicates that the attenuation mechanism due to shear fluctuations can not compensate for the _ global _( i.e. 
on the homogeneity scale ) negative contribution generated by and , since due to the assumption of existence of a scale of homogeneity and , in addition , due to our implicit assumption of a globally almost isotropic state the large scale bulk flow must cease to display correlations and will be on some large scale subordered to the global hubble flow pattern .this latter , however , could display large scale fluctuations or not , and it is therefore to be expected that the ratio of the correlation lengths must be small , in conformity with our setup , only if there are significant differences between the globally averaged _ homogeneous state _ and a _ homogeneous solution _ which , this latter , features no fluctuations . on the other hand , we are confident that , on the void scale , the two correlation lengths are certainly comparable , and the shear term may also dominate over the expansion fluctuation term in .this latter property was generically found in the newtonian analysis and it can be summarized by the expectation that the kinematical backreaction term would act as a _ kinematical dark matter _ rather than as a _ kinematical dark energy _ on the scale of voids ( see for a discussion ) , whereby on the global scale the domination of the expansion fluctuation term is possible and would then argue for an interpretation as _ kinematical dark energy_. it appears to us that this is a fixed point of the analysis , since in any case a non vanishing fraction and is going to add a negative contribution to ( a positive contribution to ) and , therefore , neglecting this term will still allow us to provide bounds on the expected ( or for some model prior necessary ) fluctuations . as an estimate for the large scale asymptotics we may ,along the lines of this reasoning , propose the following simple formula for a rough , but to our opinion robust _ lower bound _ on a _ negative _ kinematical backreaction parameter , valid on the largest scales : with this formula we can bound a positive kinematical backreaction from above , which provides information on the maximally expected backreaction and , in turn , on the maximally expected magnitude of the global averaged scalar curvature , to which we turn now .adopting the above restrictions of the general formula ( [ resultq ] ) we can write down an estimate for a global _ upper bound _ on the sum of a _ positive _ curvature parameter ( corresponding to a negative averaged curvature ) and the cosmological constant parameter , valid on the homogeneity scale . 
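the ` simple formula ' announced above was another casualty of extraction ; from the idealized expression ( [ result_idealq ] ) quoted above , dropping the non negative shear correlation terms ( the factors involving the squared ratios of correlation lengths ) would presumably give

\Omega_{\cal Q}^{\cal D} \;\geq\; -\,\lambda_{\cal M}\,\frac{\delta^{2}H_{\cal M}}{H_{\cal D}^{2}} \;-\;(1-\lambda_{\cal M})\,\frac{\delta^{2}H_{\cal E}}{H_{\cal D}^{2}} \;-\;\lambda_{\cal M}(1-\lambda_{\cal M})\,\frac{\left(H_{\cal E}-H_{\cal M}\right)^{2}}{H_{\cal D}^{2}}\;,

i.e. a lower bound on a negative \Omega_{\cal Q}^{\cal D} , equivalently an upper bound on a positive kinematical backreaction {\cal Q}_{\cal D} .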
using ( [ hamiltonomegad ] ) , ( [ result_idealq ] ) , and , we immediately get + \lambda_{{\cal m}}\left[-\omega^{{\cal m}}_m + \frac{\delta ^{2}h_{{\cal m}}}{h_{{\cal d}}^{2}}\right]+ \lambda_{{\cal m}}(1-\lambda_{{\cal m}})\frac{(h_{{\cal e}}- h_{{\cal m}})^2 } { h_{{\cal d}}^2 } \;.\nonumber\end{aligned}\ ] ] the above formula demonstrates that the major players in a discussion on the maximal magnitude of the averaged scalar curvature will be ( i ) the volume fraction of occupied regions , ( ii ) the matter densities on the two regional scales , and ( iii ) the fluctuations in the volume averaged hubble parameters on the two regional scales , all normalized by the global volume hubble rate that , this latter , can also be determined through measurement of regional parameters in view of .we are now going to discuss this latter result more quantitatively .first , an important remark that is relevant irrespective of whether we use a simple or a more refined estimating formula , and also irrespective of how exactly we determine the right hand side of , e.g. the inequality ( [ result_idealr ] ) from models or observations : in any case we would obtain a non conclusive result for the value of the averaged scalar curvature itself due to the obvious degeneracy of the left hand side of eq .( [ result_idealr ] ) .this latter depends on the value for the cosmological constant parameter that we have to choose _ ad hoc_. for example , suppose that the right hand side of ( [ result_idealr ] ) ( representing the measured physical properties ) would return a value ( now and in what follows concentrating on values today ) , we could not discriminate between the _ concordance model _ , i.e. adopting , and at the same time , and a backreaction driven cosmology with an intrinsic physical curvature parameter and vanishing cosmological constant .such a value would therefore be compatible with two models of very different physical nature . an estimate would only be conclusive by setting a prior on the value of the cosmological constant . only in the case where we would `` exorcise '' a cosmological constant , , a measurement of the right hand side of ( [ result_idealr ] ) would be conclusive , since we could clearly discriminate between a zero curvature universe model and a model with large curvature parameter .this degeneracy provides an obstacle especially in the expected cases of ( i ) the need of a dominating positive cosmological constant in the concordance model , and ( ii ) a substantial negative averaged curvature in a backreaction driven cosmology ( since in that case , by definition , ) . 
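the left hand side of the displayed inequality above was lost in extraction ; combining the constraint \Omega_m + \Omega_\Lambda + \Omega_{\cal R} + \Omega_{\cal Q} = 1 with the bound on \Omega_{\cal Q} , the inequality ( [ result_idealr ] ) presumably reads

\Omega_{\cal R}^{\cal D} + \Omega_\Lambda^{\cal D} \;\leq\; 1 + (1-\lambda_{\cal M})\left[ -\,\Omega_m^{\cal E} + \frac{\delta^{2}H_{\cal E}}{H_{\cal D}^{2}} \right] + \lambda_{\cal M}\left[ -\,\Omega_m^{\cal M} + \frac{\delta^{2}H_{\cal M}}{H_{\cal D}^{2}} \right] + \lambda_{\cal M}(1-\lambda_{\cal M})\,\frac{\left(H_{\cal E}-H_{\cal M}\right)^{2}}{H_{\cal D}^{2}}\;,

with the regional density parameters normalized by the global hubble rate . as a minimal python sketch of the kind of numerical exercise carried out in the following paragraphs ( the input numbers below are placeholders chosen for illustration only ; the paper's actual values did not survive extraction ) , the right hand side can be evaluated as :

def rhs_curvature_bound(lam, om_m_M, om_m_E, d2H_M, d2H_E, dH2_ME):
    # lam:     volume fraction of matter-dominated (occupied) regions
    # om_m_M:  regional density parameter on M, normalized by the global H_D
    # om_m_E:  regional density parameter on E, normalized by the global H_D
    # d2H_M:   delta^2 H_M / H_D^2  (expansion fluctuations in occupied regions)
    # d2H_E:   delta^2 H_E / H_D^2  (expansion fluctuations in underdense regions)
    # dH2_ME:  (H_E - H_M)^2 / H_D^2 (squared difference of regional hubble rates)
    return (1.0
            + lam * (-om_m_M + d2H_M)
            + (1.0 - lam) * (-om_m_E + d2H_E)
            + lam * (1.0 - lam) * dH2_ME)

# purely illustrative placeholder values:
print(rhs_curvature_bound(lam=0.1, om_m_M=2.5, om_m_E=0.03,
                          d2H_M=0.1, d2H_E=0.05, dH2_ME=0.04))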
however , despite this degeneracy , the physical status of the _ concordance model _ is weaker , since a conspiracy is required between the scale independent value of the cosmological constant and the scale dependent physical averages that can in principle be measured on any scale : even if such a conspiracy would hold on the largest scale , it is likely to be violated on smaller scales , a remark that essentially mirrors the status of a `` fitting model '' compared to a physical model for the averaged variables .notwithstanding , an estimate of the right hand side of ( [ result_idealr ] ) , or the more general formulae derived in this paper , is possible and useful , and we are now suggesting strategies for its realization .despite the fact that we wish to estimate riemannian properties of the present day universe , the derived estimates for the large scale averaged scalar curvature are determined by average properties on the scales of voids and matter dominated regions .this in turn would suggest , as a first approximation , to determine the required _ regional _ parameters from a standard newtonian n body simulation .thereby we accept to ignore the differences to the values that would be obtained for the riemannian averages , but we expect to get an idea for the relative magnitudes of matter and expansion fluctuations within a well studied framework .the determination of the regional parameters has to be done by controlled n body simulations , this control respecting the mutual dependence of all the parameters involved : the volume fraction of matter dominated regions , the calculation of averages and their fluctuation properties ; both sensibly depend on spatial and particle resolution as well as on definitions of , e.g. void finding algorithms and the employed thresholds on what we consider as over and underdensities . in any caseit is important to determine values within a _ single _ set of priors on the construction of the simulation and attaching a precise meaning to the involved parameters .alternatively , an analytical estimate using relativistic models for the inhomogeneities can be investigated , first using perturbation theory along the lines of , and its nonlinear extensions , or studying constraints on the size of voids from swiss cheese type or peak models . in this contextwe are currently generalizing the newtonian analysis , which is built on a relativistic generalization of a non perturbative evolution model . in this model ,the ( root of the ) metric fluctuations are calculated perturbatively , which is sensible , since the amplitude of metric perturbations is indeed small .combining this with the exact averages then allows to take into account that the metric derivatives may not be small .in all these cases we can directly estimate the shear terms , since the assumption of a frame comoving with matter is adopted .related to the time evolution properties in n body simulations or explicit analytical models we may also approach the problem dynamically from the point of view of effective ( spatially averaged ) einstein equations .we may focus on the dynamical late time properties of typical regions on the regional scales . 
as an example we may exploit the kinematically averaged einstein equations , valid on any scale , and roughly estimate their time asymptotics for a typical matter dominated region .the aim would consist in understanding gravitationally bound systems and a corresponding virial equilibrium assumption that would constrain the involved energies .for example , if such a virialized state is characterized by a stationary `` stable clustering '' volume , then the averaged raychaudhuri equation on provides a relation between the averaged matter density and the kinematical fluctuations , together with the corresponding adimensional parameter , as an average over kinematical terms that include other forces relevant on regions .this term could have any sign and would be associated with velocity dispersion ( compare for a derivation of the corresponding non averaged terms in a newtonian setting note also that vorticity is an important stabilization term that would have to be included in such a constraint ) . ] : such a condition would replace the need for estimating fluctuations on typical domains .with neglection of the shear term and terms contained in the above relation would read , or , which still requires estimation of .the most direct , a priori model independent strategy is , of course , to determine the values of the required parameters through observations .while this strategy is in principle free of priors on the spatial geometry , the _ interpretation _ of observational results has to involve model priors .note that , at any rate , a model dependence arises due to the fact that a volume averaged value needs information on metrical properties , i.e. the volume depends on the geometry which is not directly observable ( see , e.g. , for strategies related to the determination of metrical properties ) .actually all ingredients that are needed in our estimates are controversal in the literature , and accordingly an observational determination would involve an ample range of values for the parameters and would suffer from the fact that different methods and interpretation mechanisms are necessarily involved and consequently would imply a loss of control on the mutual dependence of the parameters .let us now put different model assumptions on the averaged curvature into perspective by just choosing some numbers for the physical properties on the regional domains .we write down the bound ( [ result_idealr ] ) for ( i ) the _ concordance model _ with ( let us take rough numbers ) , , , and ( ii ) an extreme _ backreaction driven model _ with , , and ; we obtain on the left hand side of ( [ result_idealr ] ) the value lhs for the former and lhs for the latter .the right hand side of ( [ result_idealr ] ) reads : here , we have first ignored the possibility of choosing different values for the regional matter density parameters to ease the comparison . to make the average expansion properties concrete , we employ our scaling models for and ,( [ scalingmodel1])ff . , with a transition value .the reader may now specify the free parameters involved ( , , ) .considering a maximal range for the volume fraction , , we then obtain rhs = for a completely empty universe model , and rhs = , for a volume equipartition of over and underdense regions .this shows how global constraints imposed by our model priors would constrain the fluctuation terms .e.g. 
, a value of would be the minimally necessary fluctuation amplitude of the expansion rate on the void scale , if the expectation from the extreme backreaction driven model , that has practically emptied the universe , is met . for a non zero we have to also specify .as another example let us now assume different density parameters on the different regions .for a range of values for the volume fraction of occupied regions ( remember that this parameter is the relativistic volume fraction ) , we for example get with the priors on the global density parameter , , and the density in voids ( i.e. roughly 10 percent of the average density is found in voids ) the corresponding values . by working out a stationarity assumption in line with ( [ virial ] ), we could then find directly from the density parameter in matter dominated regions . at this stagewe leave it to the reader to exploit the result further and to understand the relations of assumed or measured numbers to the global model priors according to his / her experience . in order to reconcile the standard concordance model with the estimated bounds on the averaged scalar curvature, the above discussion indicates that this would be difficult , since for an assumed cosmological constant parameter of , the curvature parameter acquires _ additive _ pieces due to the fluctuations of the hubble parameter on the two regional scales .overall , the above ( rough ) discussion would suggest that , independent of the prior on a specific value of the cosmological constant parameter in between and a value that compensates the matter density parameter ( in our example ) , the averaged curvature would be negative .the remaining question is , how robust this latter result is , given our assumptions . for this purposewe are taking the role of advocating the standard concordance model by pushing the estimates to their extremes .for this end we have to argue that all the additional contributions due to inhomogeneities are negligible , so that our formulae in fact reduce to the standard hamiltonian constraint for a homogeneous distribution of matter and curvature . only in this case the concordance model , i.e. a zero curvature universe model , can be reconciled with the _physical space_. this statement implies that there must be a conspiracy between ( a1 ) the shear terms on _ both _ regional scales , ( a2 ) the expansion terms on _ both _ regional scales , and ( a3 ) the difference between the hubble rates on those regional scales , such that the overall contribution of these terms cancels on the homogeneity scale .alternatively , we could conjecture that ( b1 ) the magnitudes of the involved fluctuation amplitudes in our formulae are quantitatively negligible and ( b2 ) that .however , these latter options are unrealistic , since we would have to require negligible fluctuation amplitudes also on regional scales ( ad b1 ) , and that e.g. a cluster of galaxies participates in the full hubble flow ( ad b2 ) .whether the above conspiracy ( a ) could hold is difficult to test .however , we know that it holds in a newtonian model due to boundary conditions , , i.e. this conspiracy is suggesting that the present day universe can be effectively described in newtonian terms , i.e. 
it is equivalent to saying that the structure formation history had no impact on the evolution of the averaged curvature , and fluctuations have been decoupled from the geometry ( being a dynamical variable in general relativity ) until today .although we can not exclude this possibility , it relies on a fine tuning assumption that can be physically justified for a euclidean universe model , where the curvature distribution trivially satisfies an equipartition law ( since there is no curvature ) .as we have discussed , we can not expect such an equipartition of curvature in the general case , especially when the partitioning is biased towards a volume dominance of underdense regions ( negative curvature ) .thus , to reconcile the standard concordance model with the physical average of the present day universe is essentially equivalent with postulating an equipartition law for the scalar curvature on the homogeneity scale , hence a fine tuning assumption on the curvature distribution .here , one should be aware of the fact that such an equipartition law must be the result of a conservation law for the scalar curvature in a dynamical situation , similar as an equipartition of the density distribution on being the result of restmass conservation .we know that such a conservation law does not exist in general .in the case of a dust matter model it is replaced by the condition ( see for a detailed discussion ) : i.e. a particular dynamical combination of kinematical fluctuations and averaged scalar curvature is conserved , not the averaged scalar curvature itself , that here would be represented by the conservation of the yamabe functional .this does not mean that such a conservation law is excluded , but it would be equivalent to an uncoupled evolution of fluctuations in a constant curvature space section , since implies .it is interesting that the mechanism of a backreaction driven model relies on a coupled evolution between and ( thoroughly discussed in ) , hence , it genuinely violates an equipartition of curvature . we conclude that we need a fine tuning assumption on the scalar curvature distribution ( an equipartition law on the scale of homogeneity ) in order to reconcile the concordance model as a viable model for the physical properties of the present day universe .dynamically , this hypothesis implies an uncoupled evolution of kinematical fluctuations and intrinsic curvature . if such a hypothesis is not adopted , our investigations point to an overall negative averaged scalar curvature .measuring fluctuations in the volume hubble rate on the two regional scales together with the volume fraction of occupied regions would allow us to support or to rule out a large negative averaged scalar curvature , required for an extreme backreaction driven model , only if .the crux in this consideration is the degeneracy by assuming a non zero cosmological constant .however , if the large value for the fluctuations needed for a backreaction driven model is achievable ( for calculations and discussions of estimates see ; , sect.3.3 ; and ) , there would be good reasons to shift our _ interpretation _ from the assumption of a `` curvature compensating '' to the acceptance of a substantial negative physical curvature .2008 p. astier _ et al ._ : the supernova legacy survey : measurement of , and from the first year data set .astron . & astrophys . * 447 * , 31 ( 2006 ) .j. behrend , i.a .brown , g. robbers : cosmological backreaction from perturbations .jcap * 0801 * , 013 ( 2008 ) .r. 
blumenhagen , b. kors , d. lst and s. stieberger : four dimensional string compactifications with d branes , orientifolds and fluxes .rep . * 445 * , 1 ( 2007 ) .t. buchert : on average properties of inhomogeneous fluids in general relativity : 1 .dust cosmologies .. grav . * 32 * , 105 ( 2000 ) .t. buchert : on average properties of inhomogeneous fluids in general relativity : 2 . perfect fluid cosmologies .. grav . * 33 * , 1381 ( 2001 ) .t. buchert : a cosmic equation of state for the inhomogeneous universe : can a global far from equilibrium state explain dark energy ?quant . grav . * 22 * , l113 ( 2005 ) .t. buchert : on globally static and stationary cosmologies with or without a cosmological constant and the dark energy problem . class .* 23 * , 817 ( 2006 ) .t. buchert : dark energy from structure a status report .. grav . * 40 * , 467 ( 2008 ) .t. buchert and m. carfora : regional averaging and scaling in relativistic cosmology . class .* 19 * , 6109 ( 2002 ) .t. buchert and m. carfora : cosmological parameters are ` dressed ' .lett . * 90 * , 31101 - 1 - 4 ( 2003 ) . t. buchert and m. carfora : the cosmic quartet : cosmological parameters of a smoothed inhomogeneous spacetime . in : 12th jgrg meeting , tokyo 2002 , m. shibata et al .( eds . ) , pp .57 - 161 ( 2003 ) ; astro ph/0312621 .t. buchert and a. domnguez : adhesive gravitational clustering .astron . astrophys . *438 * , 443 ( 2005 ) .t. buchert and j. ehlers : averaging inhomogeneous newtonian cosmologies .. astrophys . * 320 * , 1 ( 1997 ) .t. buchert , m. kerscher and c. sicka : backreaction of inhomogeneities on the expansion : the evolution of cosmological parameters .d * 62 * , 043525121 ( 2000 ) .t. buchert , j. larena and j.m .alimi : correspondence between kinematical backreaction and scalar field cosmologies the morphon field. class .* 23 * , 6379 ( 2006 ) .capozziello , s. , cardone , v.f . ,troisi , a. : reconciling dark energy models with f(r ) theories .d. * 71 * , 043503 ( 2005 ) .m. carfora and t. buchert : ricci flow deformation of cosmological initial data sets . in : _14th international conference on waves and stability in continuous media _ , eds . n. mangana , r. monaco , s. rionero , world scientific , pp .118127 ( 2008 ) ; arxiv:0801.0553 .m. carfora and a. marzuoli : model geometries in the space of riemannian structures and hamilton s flow . class. quant . grav .* 5 * , 659 ( 1988 ) .m. carfora and k. piotrkowska : renormalization group approach to relativistic cosmology .d * 52 * , 4393 ( 1995 ) .carter , b.m .leith , s.c.c .nielsen , and d.l .wiltshire : type ia supernovae tests of fractal bubble universe with no cosmic acceleration .arxiv : astro ph/0504192 ( 2005 ) .clrier : do we really see a cosmological constant in the supernovae data ?. astrophys .* 353 * , 63 ( 2000 ) .c clarkson , m. cortes , b.a .bassett : dynamical dark energy or simply cosmic curvature ?jcap * 0708 * , 011 ( 2007 ) .colberg , r.k .sheth , a. diaferio , l. gao and n. yoshida , mon . not .. soc . * 360 * , 216 ( 2005 ) .copeland , m. sami and s. tsujikawa : dynamics of dark energy .d * 15 * , 1753 ( 2006 ) .s. das , n. banarjee and n. dadhich : curvature driven acceleration : a utopia or a reality ?* 23 * , 4159 ( 2006 ) .ellis : relativistic cosmology its nature , aims and problems . in _ general relativity and gravitation _( d. reidel publishing co. , dordrecht ) , pp .215288 ( 1984 ) .ellis and t. buchert : the universe seen at different scales .a. ( einstein special issue ) * 347 * , 38 ( 2005 ) .ellis and w. 
stoeger : the ` fitting problem ' in cosmology . class .. grav . * 4 * , 1697 ( 1987 ) .enqvist , k. , mattsson , t. : the effect of inhomogeneous expansion on the supernova observations .jcap * 0702 * , 019 ( 2007 ) s.r .furlanetto and t. piran : the evidence of absence : galaxy voids in the excursion set formalism .mon . not .366 * , 467 ( 2006 ) .hamilton : the formation of singularities in the ricci flow . in_ surveys in differential geometry vol 2 _ , eds .c.c . hsiung and s.t .yau , international press , cambridge ma , pp .7136 ( 1995 ) .c. hellaby : volume matching in tolman models .* 20 * , 1203 ( 1988 ) . c. hellaby : the mass of the cosmos .mon . not .* 370 * , 239 ( 2006 ) .f. hoyle and m.s .vogeley : voids in the 2df galaxy redshift survey .j. * 607 * , 751 ( 2004 ) .j. jost , _ partial differential equations _ , springer ( 2002 ) ; chap . 8 .a. ishibashi and r.m .wald : can the acceleration of our universe be explained by the effects of inhomogeneities ?* 23 * , 235 ( 2006 ) .kolb , s. matarrese and a. riotto : on cosmic acceleration without dark energy .new j. phys .* 8 * , 322 ( 2006 ) .e. komatsu et al . : five year wilkinson microwave anisotropy probe ( wmap ) observations : cosmological interpretation .arxiv:0803.0547 ( 2008 ) .o. lahav : observational tests of flrw world models . class .* 19 * , 3517 ( 2002 ) .leith , c.c.s .wiltshire : gravitational energy as dark energy : concordance of cosmological tests .j. * 672 * , l91 ( 2008 ) .n. li and d.j .schwarz : on the onset of cosmological backreaction .d * 76 * , 083011 ( 2007 ) . n. li and d.j .schwarz : signatures of cosmological backreaction .arxiv:0710.5073 ( 2007 ) .n. li , m. seikel and d.j .schwarz : is dark energy an effect of averaging ? arxiv:0801.3420 ( 2008 ) .a. lichnerowicz , pub .de li.e.h.s .* 10 * , 5 ( 1961 ) ( in french ) .lu , c. hellaby : obtaining the spacetime metric from cosmological observations . class .quant . grav .* 24 * , 4107 ( 2007 ) .v. marra , e.w .kolb , s. matarrese , a. riotto : on cosmological observables in a swiss cheese universe . phys .d * 76 * , 123004 ( 2007 ) .v. marra , e.w .kolb , s. matarrese : light cone averages in a swiss - cheese universe .d * 77 * , 023003 ( 2008 ) .t. mattsson : dark energy as a mirage .arxiv:0711.4264 ( 2007 ) .t. mattsson , m. ronkainen : exploiting scale dependence in cosmological averaging .jcap * 0602 * , 004 ( 2008 ). l. mersini houghton , e. kafexhiu : nontrivial geometries : bounds on the curvature of the universe .astroparticle physics * 29 * , 167 ( 2008 ) .a. paranjape , t.p .singh : the possibility of cosmic acceleration via spatial averaging in lematre tolman bondi models .* 23 * , 6955 ( 2006 ) .a. paranjape and t.p .singh : explicit cosmological coarse graining via spatial averaging .. grav . * 40 * , 139 ( 2008 ) .a. paranjape and t.p .singh : structure formation , backreaction and weak gravitational fields .jcap * 0803 * , 023 ( 2008 ). s. rsnen : dark energy from backreaction .jcap * 0402 * , 003 ( 2004 ). s. rsnen : constraints on backreaction in dust universes* 23 * , 1823 ( 2006 ) .s. rsnen : accelerated expansion from structure formation .jcap * 0611 * , 003 ( 2006 ). s. rsnen : comment on ` nontrivial geometries : bounds on the curvature of the universe ' .arxiv:0705.2992 ( 2007 ) .s. rsnen : evaluating backreaction with the peak model of structure formation .jcap * 0804 * , 026 ( 2008 ). m. 
reiris : large scale ( cmc ) evolution of cosmological solutions of the einstein equations with a priori bounded space time curvature .arxiv : 0705.3070 ( 2007 ) .m. reiris : large scale properties of perturbed robertson walker cosmologies . class .* 25 * , 085001 ( 2008 ) .spergel et al . :wilkinson microwave anisotropy probe ( wmap ) three year results : implications for cosmology .. j. suppl . *170 * , 377 ( 2007 ) d.l .wiltshire : viable inhomogeneous model universe without dark energy from primordial inflation .arxiv : gr - qc/0503099 ( 2005 ) .wiltshire : cosmic clocks , cosmic variance and cosmic averages .new j. phys * 9 * , 377 ( 2007 ) .wiltshire : exact solution to the averaging problem in cosmology .lett . * 99 * , 251101 ( 2007 ) .wiltshire : dark energy without dark energy .arxiv:0712.3984 ( 2007 ) .
|
we discuss the effect of curvature and matter inhomogeneities on the averaged scalar curvature of the present-day universe. motivated by studies of averaged inhomogeneous cosmologies, we consider the question of whether it is sensible to assume that curvature averages out on some scale of homogeneity, as implied by the standard concordance model of cosmology, or whether the averaged scalar curvature can be largely negative today, as required for an explanation of dark energy from inhomogeneities. we confront both conjectures with a detailed analysis of the kinematical backreaction term and estimate its strength for a multi-scale inhomogeneous matter and curvature distribution. our main result is a formula for the spatially averaged scalar curvature involving quantities that are all measurable on regional (i.e. up to 100 mpc) scales. we propose strategies to evaluate the formula quantitatively, and pinpoint the assumptions implied by the conjecture of a small or zero averaged curvature. we conclude that the standard concordance model needs fine-tuning, in the sense of an assumed equipartition law for curvature, to be reconciled with the estimated properties of the averaged physical space, whereas a negative averaged curvature is favoured, independent of the prior on the value of the cosmological constant.
|
when a telescope projects stars of the night sky onto points of a photograph , stars at large and small distances pile up on a single point of the photograph .indeed such a `` pile - up , '' which makes the distance to stars ambiguous , is a mathematical property of any mapping of a space of larger dimension to a space of lesser dimension . herewe report on a `` piling - up '' that occurs when quantum theory serves as mathematical language in which to describe experiments .how does one employ quantum theory to describe experiments with devices lasers and lenses , detectors , _ etc_. on a laboratory bench ?one assumes that the devices generate , transform , and measure particles and/or fields , expressed one way or another as linear operators , such as density operators and detection operators . in case of a finite - dimensional quantum description , these operators are matrices . herewe omit discussing how one arrives at the particles , in order to focus directly on the operators that end up expressing the devices .these operators are functions of the parameters by which one describes control over the devices .it is by making explicit the experimental parameters which we picture as _ knobs_that the ambiguity of a pile - up will become evident .it is important to recognize that quantum theoretic descriptions of experiments come in two parts : ( 1 ) statements of results of an experiment , expressed by probabilities of detections as functions of knob settings , and ( 2 ) explanations of how one thinks these results come about , expressed by linear operators , also as functions of knob settings .the two parts are connected by a mapping , namely the _trace_. as one learns in courses on quantum mechanics , given an explanation as a density operator and a positive operator valued measure ( povm ) , taking the trace of the product of the operators gives the probabilities that constitute a statement of results .of special interest here is the `` inverse problem '' that stems from the assumption in quantum mechanics that experimental evidence for quantum states is , at best , limited to probabilities of detections .the inverse problem amounts to finding the inverse of the mapping defined by the trace : given a statement of results , the problem is to determine all the explanations that generate it .it is here that the pile - up of the trace as a mapping impacts quantum physics .note that while our discussion gives knobs a prominent expression absent in text books on quantum mechanics , we employ the standard quantum mechanics of dirac and von neumann , augmented only by positive - operator - valued measures , now in widespread use .we speak of the parameters by which a description expresses control over an experiment as _ knobs _ , with the image in mind of the physical knobs by which an experimenter moves a translation stage or rotates a polarization filter .we think figuratively of hand motions by which we configure an experiment also as knob settings . in the mathematical languagein which we describe experimental trials , actual or anticipated , we express any one knob by a set of _ settings _ of the knob .we start with the simplest case in which each knob has a finite number of settings .let , , _etc_. denote knobs , each of which can be set in any of several positions . denotes the number of knob settings in , _etc_. when several knobs are involved , we call all of them together a _ knob domain_. 
for example if knobs and are involved , then we have a knob domain and an element of has the form with and . for the number of possible settings we then have the product : . if knob domain includes all the knobs that contribute to knob domain , then we write ; in other words knob domains form a distributive lattice under inclusion , illustrated in fig .[ fig:1 ] .similarly we consider detectors that display one of a finite number of outcomes .such a detector is a set , and is a particular outcome . as with knobs, we deal with sets of detectors which we call _ detector domains _ , written boldface _e.g. _ as .detector domains also form a distributive lattice .experiments come in families and so do descriptions , and so do the knob domains and detector domains that enter descriptions .the lattice of knob domains and the lattice of detector domains underpin expressing relations in these families .for example , a description involving a knob domain might be simplified by fixing one knob``taping it down '' so to speak . or a description involving a detector domain might be simplified by ignoring detector , leading to marginal probabilities . a statement of ( experimental ) results , as expressed in quantum theory consists of the probability of outcome for each setting of the knobs of , as illustrated in fig . [ fig:2 ] .we write for this probability , and the probability function ] is a probability measure on the set of detector outcomes .the quantum - mechanical form of experimental reports is that of a parametrized probability measure ..,width=288 ] for a given knob domain and detector domain , let ppm be the space of all parametrized probability measures . when the number of knob settings and detections is finite , so is the dimension of this space . as illustrated in sect .[ sect:3 ] for toy descriptions with finite numbers of knob settings and possible outcomes , parametrized probability measures constitute points of a function space that will play the part of a photograph onto which a larger space is mapped .any corresponds to a point on a photographic plate . a statement of results ] defined by = \{\bm{k}'|\mu(\bm{k}',-)=\mu(\bm{k},-)\}.\ ] ] an example is a violation of bell inequalities by which entanglement is demonstrated .in that example the probability of coincidence detection by two rotatable detectors , one turned through an angle , the other through an angle is .this and the other relevant probabilities depend on and only as their difference , so that a change defined by adding the same amount of rotation to each of these knobs leaves invariant . 
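the invariance just described, and the role of the trace as the map from explanations to statements of results, can be made concrete in a few lines of linear algebra. the following is a minimal numpy sketch and not part of the original text: the singlet state, the rotatable-polarizer projectors and the particular angles are illustrative choices. it evaluates mu(k, b) = tr[rho M_b(k)] for the coincidence outcome and confirms that the parametrized probability measure is unchanged when the same rotation is added to both knobs, i.e. that all such knob settings lie in one equivalence class.

import numpy as np

def polarizer(theta):
    # rank-one projector onto the linear-polarization direction theta (one knob setting)
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# one possible "explanation": the two-photon singlet state as a density operator
psi = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)
rho = np.outer(psi, psi)

def mu(theta1, theta2):
    # statement of results: coincidence probability as a function of the two knobs
    M_coinc = np.kron(polarizer(theta1), polarizer(theta2))
    return np.trace(rho @ M_coinc).real

a, b, shift = 0.3, 1.1, 0.7
print(mu(a, b), mu(a + shift, b + shift))   # equal: mu depends on the knobs only through a - b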
but here is a conceptual muddle .if changing the knob settings makes no difference to the results , on what basis can we judge that any change in knob settings has taken place ?a related question was put by one of our mathematician colleagues : why not just `` mod out '' the equivalence classes ?but it wo nt do for physics to `` mod out '' such an equivalence class ; the physicist wants not to make it disappear but to appreciate it .one way to appreciate changes of knobs that make no difference to the results is to recognize , side by side with the statement of results , a second statement of results at a finer level of detail , in particular a detector domain augmented by extra detectors to register changes in and separately .then is seen as obtained from by ignoring the `` extra '' detectors : here is the `` anything - goes '' or `` do nt care '' outcome of the extra detectors that respond to and separately , so that is seen as a _ marginal _ probability measure derived from ignoring `` knob - motion detectors '' in a more detailed statement of results that breaks invariance to show that and moved even if their difference was held fixed . a second way to make sense of invariance of results is to understand the invariant parametrized probability measure over as derived from a second parametrized probability measure over a larger knob domain that contains an extra knob . is then obtained from by fixing the extra knob at a special value .for example , to demonstrate rotational invariance we might place a disk on a table and rotate it to show that `` nothing detectable changes under rotation . '' but to see this invariance , whether one is aware of it or not , one must manage incompatible frames of reference . looked at one way nothing happens when we rotate the disk ; but to see that `` nothing happens when we rotate the disk '' one must see in the other frame , so to speak , that in fact `` the disk rotates . ''this suggests adding a knob that can move the center of rotation away from the center of the disk .when the disk is off center , one sees its rotation . as the center of the diskis moved closer to the center of rotation , one approaches invariance .something similar can be worked out for the preceding example involving quantum states that violate bell inequalities .when this is done , the equivalence class of knob settings show up as singular values in the mapping from knobs to probability measures , leading to another avenue for exploration .designs for quantum key distribution assert security against undetected eavesdropping , based on transmitting quantum states that overlap , with the result that deciding between them with neither error nor an inconclusive result is impossible . the most popular design , bb84 , invokes four states ( which we write as density operators ) .the claim of security invokes propositions such as this : if then , by a well known result of quantum decision theory the least possible probability of error to decide between them is : but how is one to rely on an implemented key - distribution system built from lasers and optical fibers and so forth to act in accordance with this explanation ? 
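the "well known result of quantum decision theory" invoked here is usually the helstrom bound for equal priors, p_err = (1/2)(1 - (1/2)||rho_0 - rho_1||_1); taking that to be the intended relation, and taking the |0>, |+> pair of bb84 states with equal priors as an illustrative example (both are assumptions on our part, not statements of the original text), a short numpy sketch gives the familiar numbers and also previews the point made next, namely that one and the same statement of results is generated by metrically inequivalent explanations.

import numpy as np

def dm(v):
    # density operator of a (normalized) pure state
    v = np.array(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

rho0, rho1 = dm([1, 0]), dm([1, 1])          # |0> and |+>: overlapping signal states

def trace_distance(a, b):
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

D = trace_distance(rho0, rho1)
print("trace distance:", D)                  # 1/sqrt(2) ~ 0.707
print("helstrom error:", 0.5 * (1 - D))      # ~ 0.146: nonzero because the states overlap

# the same detection statistics can arise from explanations with different trace distances:
M = [dm([1, 0]), dm([0, 1])]                 # a fixed measurement (computational basis)
probs = lambda r: [np.trace(r @ m).real for m in M]
print(probs(dm([1, 1])), probs(0.5 * np.eye(2)))   # identical statistics, different operators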
if a system of lasers and optical fibers and so forth `` possessed '' a single explanation in terms of quantum states , one could hope to test experimentally the trace distance between the pair of states .but no such luck .the trouble is that trace distance is a property not of probabilities _ per se _ , which are testable , but of some one among the many _ explanations _ of those probabilities .while the testable probabilities constrain the possible explanations , and hence constrain trace distances , this constraint on trace distance is `` the wrong way around''a lower bound instead of a sub - unity upper bound on which security claims depend . given any parametrized probability measure , proposition 2 in ref . assures the existence of an explanation in terms of a parametrized density operator metrically inequivalent to , such that , in conflict with eq .( [ eq : pe ] ) , the trace distance becomes , making the quantum states in this explanation distinguishable without error , so that the keys that they carry are totally insecure .the big question in key distribution is this : how will the lasers and fibers and detectors that convey the key respond to attacks in which an unknown eavesdropper brings extra devices with their own knobs and detectors into contact with the key - distributing system ?attacks entail knob and/or detector domains extended beyond those tested , with the possibility that extended explanations metrically inequivalent to that used in the design , but consistent with available probabilities , both imply a lack of security theoretically and accord with actual eavesdropping .physically , one way for insecurity to arise is by an information leak through frequency side - band undescribed in the explanation on which system designers relied .a more likely security hole appears when lasers that are intended to radiate at the same light frequency actually radiate at slightly different frequencies , as described in refs .as discussed in sect . [ subsect:4.2 ] and illustrated by fig . [ fig:3 ] , the roominess of the inverse trace forces an open cycle of expanding descriptions , encompassing both expansions of explanations and expansions of statements of results , along with expansions of their knob domains and their outcome domains .the discussion of invariance in sect .4.3 shows how understanding each description as an element of a family of competing descriptions resolves what otherwise is a conceptual obstacle . in the example of quantum key distribution of sect .[ subsect:4.4 ] , we see how isolating a single description as if competing descriptions were irrelevant confuses the role of quantum theory in cryptography , with negative implications for the validity of claims of security .the world of multiple , competing descriptions in which quantum engineering navigates is cartooned in fig .[ fig:4 ] .contains many explanations.,width=473 ]we are grateful for helpful discussions with howard brandt , louis kauffman , samuel lomonaco , and ravi rau .j. von neumann , _ mathematische grundlagen der quantenmechanik _ , springer , berlin , 1932 ; translated with revisions by the author as _ mathematical foundations of quantum mechanics _ , princeton university press , princeton , nj , 1955 .j. m. myers and f. hadi madjid , `` ambiguity in quantum - theoretical descriptions of experiments , '' submitted to k. mahdavi and d. koslover , eds . 
, ams , contemporary mathematics series , proceedings for the conference on representation theory , quantum field theory , category theory , mathematical physics and quantum information theory , university of texas at tyler , 2023 september 2007 . j. m. myers and f. h. madjid , `` what probabilities tell about quantum systems , with application to entropy and entanglement , '' in _ philosophy of quantum information and entanglement _ , a. bokulich and g. jaeger , eds . , cambridge university press , _ in press_. c. h. bennett and g. brassard , `` quantum cryptography : public key - distribution and coin tossing , '' _ proc .ieee int .conf.on computers , systems and signal processing , bangalore , india _ , pp . 175179 , ieee , new york , 1984 .j. m. myers , `` polarization - entangled light for quantum key distribution : how frequency spectrum and energy affect statistics , '' _ proceedings of spie _ , vol . 5815 , quantum information and computation iii , e. j. donkor , a. r. pirich , h. e. brandt , eds ., pp . 1326 , spie , bellingham , wa , 2005 .
|
we note the separation of a quantum description of an experiment into a statement of results (as probabilities) and an explanation of these results (in terms of linear operators). the inverse problem of choosing an explanation to fit given results is analyzed, leading to the conclusion that any quantum description comes as an element of a family of related descriptions, entailing multiple statements of results and multiple explanations. facing this multiplicity opens avenues and consequences that are only beginning to be explored. among the consequences are these: (1) statements of results impose topologies on control parameters, without resort to any quantum explanation; (2) an endless source of distinct explanations forces an open cycle of exploration and description, bringing more and more control parameters into play; and (3) ambiguity of description is essential to the concept of invariance in physics.
|
the simplest class of fourth order metric gravity theories is based on an action where ; the speed of light , , is set to one ; is the determinant of a lorentzian 4-dimensional metric , and the associated curvature scalar , while is the matter lagrangian density .general relativity with a cosmological constant is obtained by setting .the vacuum part of these models , i.e. , , has recently achieved some popularity where certain forms of the function have resulted in geometric models of inflation or , more recently , dark energy , see e.g. and also for a historical background . although an assessment of cosmological viability requires a study of spatially homogeneous and isotropic robertson - walker ( rw ) models and perturbations thereof , we will restrict the analysis in this paper to flat rw cosmology .the vacuum equations of these models can be written as ( see e.g. ) : [ jordanfulleq ] , \label{ddotrf}\end{aligned}\ ] ] in the above equations an overdot represents the jordan proper time derivative , is the scale factor of the flat rw metric in the jordan frame , is the jordan hubble variable , is the jordan curvature scalar , and by regarding as an independent variable , equation forms a constraint that the evolution equations must satisfy .furthermore , equation for decouples , which leads to a reduced closed system of first order equations for , which , due to the constraint , yield a dynamical system describing a flow on a 2-dimensional state space .once the reduced system of first order equations has been solved , the decoupled equation yields ) .the above system of equations has some general properties which are worth mentioning .firstly , the system is invariant under the transformation , a property we will use below . secondly , the system is ill - defined whenever for some value(s ) of .is identically zero , a case we will not consider here . ] this is related to pathological properties as regards the characteristics of gravity , where , e.g. , the properties of gravitational waves will severely constrain the physical viability of such models .it is therefore natural to divide gravity into two main classes of models : those for which and those for which this is not the case .thirdly , as it is well known is associated with that one can introduce an einstein frame .however , is not , in general , an invariant subset in the jordan frame , since where we have used ( i.e. , is only an invariant subset if and are simultaneously zero for some value of ) .this suggest that it is also natural to divide models into two additional classes , those with everywhere , and those for which can change sign .the latter case yields solutions in the einstein frame that can be conformally extended in the jordan frame , but which ones depend on the explicit form of .we will later explicitly illustrate this result in the context of a specific model , which also exemplifies some other general features of cosmology .there are a number of dynamical systems formulations in the literature that are based on transformations from to some other variables ( see appendix [ app : systems ] for a discussion on several of these formulations ) . 
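the reduced system of first order equations for the hubble rate and the curvature scalar is easy to write down once f is chosen. the sketch below is our own reconstruction in one standard, equivalent presentation used in the f(R) literature, not a verbatim transcription of this paper's equations: the raychaudhuri-type relation Hdot = R/6 - 2 H^2 together with the modified friedmann constraint 3 F H^2 = (F R - f)/2 - 3 H F'(R) Rdot, with F = df/dR and the constraint solved for Rdot; constant-curvature vacuum fixed points then satisfy R f'(R) = 2 f(R). the parameters M2 and Lambda are illustrative.

import sympy as sp

R = sp.symbols('R', real=True)
H, M2, Lam = sp.symbols('H M2 Lambda', positive=True)

def reduced_system(f):
    # vacuum flat RW: Hdot from R = 6*(Hdot + 2*H**2), Rdot from the constraint
    # 3*F*H**2 = (F*R - f)/2 - 3*H*F'(R)*Rdot with F = df/dR.
    F = sp.diff(f, R)
    Hdot = R/6 - 2*H**2
    Rdot = sp.simplify(((F*R - f)/2 - 3*F*H**2) / (3*H*sp.diff(F, R)))
    return Hdot, Rdot        # note f''(R) in the denominator: ill defined where it vanishes

def de_sitter_points(f):
    # constant-curvature vacuum fixed points: R*f'(R) = 2*f(R)
    return sp.solve(sp.Eq(R*sp.diff(f, R), 2*f), R)

print(reduced_system(R + R**2/(6*M2)))     # the quadratic model analysed below
print(de_sitter_points(R - 2*Lam))         # [4*Lambda]: the de Sitter state of GR
print(de_sitter_points(R + R**2/(6*M2)))   # [0]: only the Minkowski state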
in this contextthere has been considerable activity concerning fixed points ( also called singular points , equilibrium points , critical points ) and their linear stability properties .it is therefore of interest to consider the fixed points of the basic reduced state space variables , which must satisfy as follows from and .then , takes the form ( assuming that is non - zero ) which is thereby automatically satisfied .moreover , is identically zero , if and only if , and only in this case there is a line of fixed points for which , while all other models have isolated fixed points .when transforming to other variables care has to be taken when it comes to the physical interpretation of fixed point results .as we will see , some fixed points in other formulations simply reflect a break down of those variables , i.e. , they correspond to a state space coordinate singularity .it is also essential to note that fixed points do not always give a complete asymptotic description .as will be emphasized in this paper , it is necessary to consider the dynamics on the entire state space of a given model to make an assessment of its physical content .what is then required in order to obtain a complete description of the solution space and the properties of those solutions for a given rw model ?we will illustrate some of the ingredients that are required to answer this question with a specific example , but for all rw cosmologies one needs to do the following : * state space analysis . * a complete state space adapted coordinatecover , including those state space boundaries for which the equations can be extended ( this e.g. excludes boundaries for which becomes zero ) . * local and global dynamical systems analysis . * physical solution space interpretation .let us now comment on the above in a little more detail .( i ) a state space analysis entails dimensional and scale considerations , and a study of the algebraic structure of the constraint equation , which includes global aspects such as state space topology .( ii ) this means that one needs to find state space coordinates that _ globally _ cover the state space of a given model , including the boundaries for which the equations can be differentiable extended .this may include limits where , and become unbounded , which motivates the introduction of new _ bounded _ variables .note that some models will even in principle require several coordinate patches , but there are classes of models for which one can find common _ local _ useful variables . even in cases where it is possible to find a bounded global state space coordinate system, it might still be useful to consider other variables since it is unlikely that a global system , except under very special circumstances , is the optimal one for all local structures , i.e. , there might exist complementary sets of variables .furthermore , different models have different state space structures , and in general this requires different choices of variables the common element is instead a state space analysis and an adaption to the structures that the analysis reveals .( iii ) to understand the solution space structure of a given model and the asymptotical behaviour of the solutions , which is essential for assessing its physical viability ( it is not enough to consider special solutions , e.g. 
, fixed points ) , one must , in general , apply linear _ and _ non - linear fixed point techniques , as well as _global _ dynamical systems analysis .furthermore , note that fixed points will not in general give a complete asymptotic description , e.g. , a problem might naturally give rise to limit cycles .that a global understanding of the solution space is required is illustrated by the fact that if one has found a solution with a desirable evolution , then the models will still only be of interest if this solution is in some sense an ` attractor solution . 'even so , this does not exclude that there exists an additional set of solutions that have a different evolution , which leads to issues concerning measures describing how ` typical ' a solution is .( iv ) solutions , e.g. fixed points , have to be physically interpreted since a solution might be an artifact of the variables one has used .for example , variables that do not cover the entire jordan state space result in coordinate singularities , which results in fixed points .thus fixed points may not correspond to physical phenomena , but may instead show that a formulation breaks down . to illustrate the above issues ( excluding the situation where passes through zero , which we will comment on in the final discussion ) , as well as allowing us to introduce some dynamical systems methods of quite wide applicability, we will consider a specific example , the vacuum equations for the flat rw metric with this model has attracted considerable attention in the past , see e.g. , , and it still remains as one of the more successful models of inflation .although some interesting results have been obtained , previous analyses have been severely hampered by formulations that do not give a complete , or sometimes correct , description of the global solution space and its properties .in contrast , we will here give a complete description of the entire solution space of these models , and we will also describe the solutions asymptotic behaviour .more importantly though is that this model allows us to explicitly address some aspects about how to obtain useful dynamical systems treatments of rw models , and to illustrate various dynamical systems methods .for example , we will situate the entire solution space of the einstein frame in the state space of the jordan frame , which allows us to explicitly show how some solutions in the einstein frame for these models correspond to entire solutions in the jordan frame , while other solutions can be conformally extended in the jordan frame . in other words ,a local dictionary between the two frames does not always entail global equivalence ( the curious reader can skip ahead and take a look at figures [ fig : vacuumjf ] , [ fig : einsteinstatespace ] and [ fig : jordan_einsteinbound ] below ) .the outline of the paper is as follows . in the next sectionwe make a state space analysis for the models , which is used to produce a new regular unconstrained dynamical system formulation on a compact state space for the jordan frame .we then use this system to perform a local analysis of the fixed points , focusing on non - linear aspects such as blow ups of nilpotent fixed points .this is followed by a global analysis that gives a complete description of the entire solution space of the models , which is depicted and summarized in figure [ fig : vacuumjf ] .we emphasize the importance of the global topological structure of the state space for a full understanding of the solution space . 
in section [ sec : ein ] we present a new regular unconstrained dynamical system formulation on a compact state space for the einstein frame .we then perform a local analysis of fixed points , again focusing on non - linear aspects such as center manifold analysis .it is also shown that the breakdown of the einstein frame variables at leads to fixed points in the einstein frame state space that correspond to coordinate singularities in the jordan frame , thereby emphasizing the importance of physical interpretation of fixed points .this is followed by global considerations , which yield a complete description of the solution space in the einstein frame .the section ends with situating the global einstein frame state space in the global jordan frame state space by means of the variable transformations that link the two approaches , given in appendix [ app : relations ] .this allows us to identify ( a ) the solutions in the einstein frame that can be conformally extended in the jordan frame , and ( b ) the solutions in the einstein frame whose evolution completely describes that in the jordan frame . in section [ sec : disc ] we comment on the relationship between our global jordan state space approach and other jordan state space formulations , which are briefly reviewed in appendix [ app : systems ] , where their advantages and disadvantages are discussed .we also give a fairly general discussion of cosmology , which situates the present models in this more general context .in this section we first perform a state space analysis of the , vacuum models with flat rw geometry in the jordan frame . the resultis then used to derive a new regular dynamical systems formulation on a global compactified state space , which , in contrast to other formulations , completely covers the entire physical state space of these models , and its asymptotic boundaries .we then use this state space picture to perform a local fixed points analysis , which includes using blow up techniques and center manifold analysis , followed by global considerations .this yields a complete description of the entire solution space , depicted in figure [ fig : vacuumjf ] . specializing to , ,the evolution equations and can be written as : [ jordanorigeq ] while the constraint takes the form for future reference , note that restricting the general discussion leading to eq . , which shows that the einstein frame boundary is not in general an invariant subset , to the present case , yields that as also follows from . as a consequence there are solutions with that come from the region with ( vice - versa if ) and pass through the surface in the jordan state space , i.e. , some solutions in the einstein frame can be conformally extended in the jordan frame ( we will show this explicitly below ) . our first step in the state space analysis is to consider dimensions .the dimensions of , , , , and are given by , , , , and , respectively , where stands for length ( recall that the speed of light has been set to one ) .in contrast to general relativity , the present models , which reflect a general feature of gravity , break scale invariance . 
as a consequence it will not be possible to use scale invariance to decouple an equation , as is often done in dynamical systems treatments of general relativistic problems .however , we can choose dimensionless variables that eliminate the explicit appearance of ( in general there can of course exist several dimensional parameters for which one can form dimensionless ratios leaving a single dimensional parameter , where only the explicit appearance of the latter can be eliminated by an appropriate choice of variables ) .our next step in our state space analysis is to study and simplify the constraint as much as possible . for the present case it is possible to globally bring the constraint to a quadratic canonical form where all variables have the same dimension .first note that if one chooses as a new variable , then this variable as well as are seen to be ` state space null variables . ' by appropriate scaling them with so that they obtain the same dimension as , and then making a linear transformation so that the constraint takes a canonical quadratic form , results in with [ tsxseq ] where and all have dimension .it is important to note that the variable transformation is globally valid since the jacobian determinant is given by .one can make a similar globally valid transformation which brings the constraint to the form , where is determined by .] thus the constraint equation makes it explicitly clear that the reduced vacuum state space is a 2-dimensional double cone with a joint apex , see figure [ fig : lightcone_ss ] .the flow on this state space is determined by the following evolution equations : it follows from that the two state space cones , defined by and , are disconnected invariant subsets with a fixed point , as their common apex .this fixed point , , represents the minkowski solution , since .note that is the _ only _ fixed point on the physical state space and that it is non - hyperbolic ( note that this is consistent with when specialized to the present case ) . since the original system is invariant under the transformation , the system is invariant under the transformation .it therefore suffices to investigate the dynamics on the invariant future state space light cone with in order to obtain a complete picture of the dynamics .furthermore , the definition in combination with the constraint implies that on the future state space light cone , i.e. , it is arguably the future state space light cone that is of cosmological interest . for these reasons we will only explicitly describe the dynamics on this part of the global state space .it is clear from the above system that the minimum on the future state space light cone only holds on the line , , but is not an invariant subset ( except at ) . indeed , since when and it follows that when then is passing through zero from negative to positive values . to understand the present models it is essential to investigate if there are solutions that come from the future state space null infinity . as a next step we therefore aim at producing a regular system of equations on a compact state space .furthermore , the variables need to be dimensionless so that we eliminate the specific appearance of , which thereby automatically shows that this parameter is not essential for the solution structure of the present models ( this should not come as a surprise since it is possible to use units to set e.g. ) . 
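before compactifying it is instructive to integrate the un-compactified jordan frame equations for this model directly, in units where M = 1. the sketch below is our own rewriting for illustration and is not the compact system introduced next: it uses Hdot = R/6 - 2 H^2 together with the trace equation for this model, Rddot + 3 H Rdot + M^2 R = 0, with the initial Rdot fixed by the friedmann constraint 3 M^2 H^2 + H^2 R + H Rdot = R^2/12 (our reconstruction of the constraint quoted above); the initial data are illustrative. it exhibits the slow-roll phase followed by damped oscillations, i.e. the approach towards the minkowski apex of the state space cone.

import numpy as np
from scipy.integrate import solve_ivp

# f(R) = R + R^2/(6 M^2), vacuum flat RW, units with M = 1
def rhs(t, y):
    H, R, S = y                       # S = dR/dt
    return [R/6 - 2*H**2,             # from R = 6*(Hdot + 2*H**2)
            S,
            -3*H*S - R]               # trace equation Rddot + 3*H*Rdot + M^2*R = 0

def constraint(y):
    H, R, S = y
    return 3*H**2 + H**2*R + H*S - R**2/12

H0 = 2.0                              # an inflationary-scale initial Hubble rate (illustrative)
R0 = 12*H0**2                         # start near the quasi-de Sitter regime
S0 = (R0**2/12 - H0**2*R0 - 3*H0**2) / H0      # initial Rdot from the constraint
sol = solve_ivp(rhs, (0.0, 120.0), [H0, R0, S0],
                rtol=1e-9, atol=1e-12, dense_output=True)

for t in (0, 20, 40, 60, 80, 100, 120):
    H, R, S = sol.sol(t)
    print(f"t={t:3.0f}  H={H: .4f}  R={R: .4f}  constraint={constraint([H, R, S]): .1e}")
# H first decreases slowly (inflation); then H and R oscillate with decaying amplitude,
# spiralling in towards the Minkowski fixed point, while the constraint stays satisfied.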
removing the minkowski fixed point from the analysis , we first introduce two new dimensionless variables which are bounded thanks to the constraint .we then use that is dimensionless and positive on the future state space null cone and introduce the bounded variable next , to obtain a regular dimensionless system of evolution equations we introduce a new dimensionless time variable , defined by which leads to [ dynsystxs ] ,\\ x^{\prime } & = s\left[t(3+x ) + ( 1-t)(1-x)s\right ] , \\ s^{\prime } & = -x\left[t(3+x)+ ( 1-t)(1-x)s\right],\end{aligned}\ ] ] where denotes the derivative with respect to , subjected to the constraint note that the above variable change from to amounts to a projection where all circles on the light cone with constant now become the unit circle given by , where the different circles are parameterized by the value of , i.e , the above variables cover all of the future state space light cone . has been chosen in order to simplify the comparison with the einstein frame state space .the reason for defining as a monotonically decreasing function of instead of a monotonically increasing one , e.g. , by setting , is that the minkowski state , as we will prove , is the future asymptotic state of all solutions , and that this definition also makes the transition to the variables we use to describe the einstein frame state space more convenient . ]the present state space , which is just the future state space light cone , is given by a finite cylinder determined by because the state space is relatively compact ( i.e. its closure is compact ) and the equations are completely regular , we can extend the state space to include the invariant boundaries , and to obtain an extended compact state space .this turns out to be essential since , as we will see , the asymptotic states for all solutions within the physical state space reside on these invariant boundary subsets , see figure [ fig : vacuumjf ] below . indeed , we will prove that there are no fixed points or periodic orbits in the physical interior state space .thus , all solutions in originate from fixed points on ( the future null infinity of the future state space light cone ) , and end at a limit cycle on ( which describes how all solutions asymptotically approach the minkowski space - time ) .thus the present variables represent a compactification of the future state space light cone , where they blow up the neighborhood of the non - hyperbolic minkowski fixed point in the dynamical system for , thereby yielding a correct description of how all solutions approach the future asymptotic minkowski state .it is of interest to express in terms of : these equations reveal that is not only zero on the invariant boundary , but also when , , which corresponds to the line , on the future state space light cone . since this is not an invariant subset of the dynamical system , the solution trajectories pass through , going from positive to negative .indeed , we will show later on that all solutions pass through , infinitely many times . 
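a quick symbolic check, using only the x and s equations that are legible above, confirms that the flow is tangent to the circle x^2 + s^2 = 1 (which we take to be the constraint referred to, as suggested by the angular substitution made next): x x' + s s' vanishes identically for every value of t. this is what allows the constraint to be solved globally by a single angle in the following paragraph. a small sympy sketch:

import sympy as sp

t, x, s, theta = sp.symbols('t x s theta', real=True)

common = t*(3 + x) + (1 - t)*(1 - x)*s      # the bracket shared by the x and s equations
xprime = s*common
sprime = -x*common

print(sp.simplify(x*xprime + s*sprime))     # -> 0, so x**2 + s**2 is preserved by the flow

# with x = cos(theta), s = sin(theta) the chain rule x' = -sin(theta)*theta' gives
thetaprime = sp.simplify((-xprime/s).subs({x: sp.cos(theta), s: sp.sin(theta)}))
print(thetaprime)    # minus the same bracket, i.e. the theta equation quoted below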
finally ,although is a constrained system , the constraint is easily globally solved by introducing which results in the following unconstrained regular system of equations : [ jordan2ddynsys ] , \\\theta^{\prime } & = -t(3 + \cos{\theta } ) - ( 1-t)(1 - \cos{\theta})\sin{\theta}.\end{aligned}\ ] ] the above regular global dynamical system form our ` master equations ' for dealing with the present models in the jordan frame .however , since the present formulation differs substantially from the ones in the literature it is of interest to take a look at some other formulations for the jordan frame and make comparisons , which we do in section [ sec : disc ] and in appendix [ app : systems ] .finally , we stress that the above system was possible because we adapted the variables to the particular state space properties of the present models ; other models need different variables .however , to find one ( or more ) set(s ) of ( differentiably overlapping ) useful variables covering the entire state space and its possible infinite limits , one needs to go through the same steps of ( i ) state space analysis and ( ii ) state space adapted coordinates as for the present illustrative example .next we turn to illustrating ( iii ) : local and global dynamical systems analysis . in this subsectionwe perform a local analysis of the fixed points of our new regular dynamical system on the compactified global state space , with a focus on necessary non - linear aspects .as we will prove below , all fixed points are located on the boundary subset , associated with .considering this subset , we find that there are two fixed points : [ fpvacumjordan ] with an integer .the motivation for the nomenclature for these fixed points will be made clear below .the fixed point is a hyperbolic source , while has two zero eigenvalues .more precisely , it is nilpotent of first degree .such fixed points are dealt with by means of so - called blow up techniques , described in detail in and . in order to bring the problem to standard form for nilpotent fixed pointswe first scale the variables and and introduce the following notation ( without loss of generality , we choose the representation for the fixed point ) : this leads to a dynamical system on the form 2 x^ & = y + p(x , y ) ; & p(x , y ) & = a(x ) + b(x)y , + y^ & = q(x , y ) ; & q(x , y ) & = c(x)y + d(x)y^2 + e(x)y^3 , where 2 a(x ) & = -(1- ) , & b(x ) & = -(1-)(1- ) , + c(x ) & = ( 1-)^2 , & d(x ) & = -((x)+2(1-)^2 ) , + e(x)&= ( + ( 1-)^2 ) . & & next we introduce a new variable instead of : which leads to in the neighborhood of the origin this means that the dynamical system takes the form where we now proceed by making the following so - called blow - up transformation and change time variable by dividing the right hand sides by .this results in where the now refers to the new time variable and where which obeys and .it follows that on the subset there are two fixed points the fixed point is a hyperbolic saddle while is a non - hyperbolic fixed point with eigenvalues zero and . to deal with apply center manifold theory ( for examples of center manifold analysis in cosmology , see e.g. ) .the center manifold can be obtained as the graph near ( i.e. 
, use as an independent variable ) , where ( fixed point condition ) and ( tangency condition ) .this leads to + \frac{u^2}{16}(1 + h(u ) ) - u\varphi(u)\frac{d\varphi}{du } = 0.\ ] ] this differential equation can be solved approximately by representing as the formal power series solving algebraically for the coefficients we find the present case corresponds to figure 3.16 ( a ) on p. 112 in .the saddle is associated with orbits ( i.e. , solution trajectories ) that approach from the region while the center manifold of with corresponds to the only solution from that enters the physical state space . inserting the above expression for into leads to the expression ( note that ) .\ ] ] this is a series expansion that approximates the ` inflationary attractor solution ' that enters the physical state space from .the accuracy of this approximation compared to the numerical solution can be found in figure [ fig : jordancm ] .if one is so inclined , one can obtain further approximation improvements by means of so - called pad approximants , as described in e.g. , and references therein . with the numerically computed solution given by the solid line .the leading - order term in the center manifold expansion is given by the dotted line ; the leading - order correction to this by the dashed line , and the next order correction by the long - dashed line.,scaledwidth=50.0% ] finally , note that the deceleration parameter in the jordan frame , defined by , is given by it follows that , except at , where and is ill - defined , the deceleration parameter takes the value on the invariant boundary , including the fixed point .the reason for choosing this notation for the fixed point is due to the fact that this value of corresponds to a universe filled with radiation in general relativity . on the other hand, the fixed point does not describe the asymptotic features of the solution that originates from it into the physical state space , since the right hand side of diverges for .however , inserting the asymptotic expression for the center manifold of into leads to the following expansion in : which reveals that the center manifold solution has asymptotically , i.e. , ` the inflationary attractor solution ' originates from a ( quasi ) de - sitter state .furthermore , just as for the solutions originating from , this state is associated with , as follows from and . in the next sectionwe will prove that the one - parameter set of solutions that enter the physical state space from , and the single solution that comes from , all of them originating from a singular state at ( i.e. , at future null infinity with respect to the light cone state space ) , constitute all solutions in the physical state space .consider the function which obeys the equation it follows that is monotonically decreasing when and . furthermore , since it follows that only represents an inflection point in the evolution of . 
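the formal power series step used above, and again for the einstein frame fixed point later on, is mechanical and easy to automate. since the coefficient functions of the blown-up system are not legible here, the sketch below applies the same procedure to a simple stand-in system with a one dimensional center manifold (our own toy example, not the paper's system): the graph v = h(u) must satisfy the invariance condition g(u, h(u)) = h'(u) f(u, h(u)) with h(0) = h'(0) = 0, and the coefficients are solved for order by order.

import sympy as sp

u, v = sp.symbols('u v')

# stand-in system with eigenvalues 0 and -1 at the origin:  u' = u*v ,  v' = -v + u**2
f = u*v
g = -v + u**2

order = 8
coeffs = sp.symbols(f'a2:{order + 1}')                    # h(u) = a2*u**2 + a3*u**3 + ...
h = sum(c*u**k for c, k in zip(coeffs, range(2, order + 1)))

# invariance condition for the graph v = h(u)
residual = sp.expand(g.subs(v, h) - sp.diff(h, u)*f.subs(v, h))
eqs = [residual.coeff(u, k) for k in range(2, order + 1)]
sol = sp.solve(eqs, coeffs, dict=True)[0]

print(h.subs(sol))                          # u**2 - 2*u**4 + 12*u**6 + ... (center manifold)
print(sp.expand(f.subs(v, h.subs(sol))))    # reduced flow on the manifold: u**3 - 2*u**5 + ...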
as a consequence , which implies that all orbits in the physical state space with originate from the subset , while when , which implies that all orbits in the physical state space end at the subset .there are thereby no fixed points or periodic orbits in the physical state space .the analysis of the subset is trivial and our previous investigation of the fixed points on shows that there is a single orbit that enters the physical state space from while there is a 1-parameter set that originates from .the above global considerations based on proves that these local fixed point results describe the origins of _ all _ solutions in the physical state space . the invariant subset yields the equation as follows from . since it follows that represents a periodic orbit where is monotonically decreasing . from our considerations of the function , this proves that this periodic orbit is a limit cycle that describes the future asymptotic behaviour of _ all _ solutions in the physical state space , i.e. , it constitutes the -limit set of all solutions with . as an aside, this provides a simple cosmological example that it is often not sufficient to just do fixed point analysis .we end this section by depicting representative solutions describing the entire solution space in the jordan frame in figure [ fig : vacuumjf ] .note that there is an open set of solutions that are not attracted to the inflationary attractor solution until the oscillatory regime at late times , where all solutions approach the future attractor , i.e. , the limit cycle at .thus , to argue that the inflationary attractor solution is in some sense an attractor requires the introduction of some measure . in this contextwe refer to the recent interesting discussion about scales and measures given in , and references therein . as a final remark, we note that heuristic approximations in the jordan frame for the inflationary attractor solution and for the oscillatory ` reheating ' regime at late times have been given in , and later reproduced in . nextwe will deal with einstein frame dynamics , and then we will present rigorous approximations schemes for the oscillatory regime at late times .such methods can also be applied to the jordan frame , or one can translate the approximations in the einstein frame to the jordan frame by means of the relations given in appendix [ app : relations ] , but for brevity we will refrain from doing this .the analysis in the einstein frame will serve as an illustrative example of ( iv ) : physical solution space interpretation .it exemplifies the situation where a state space only covers part of the jordan frame state space , which , e.g. , leads to coordinate singularities in the form of fixed points .thus fixed points in a given formulation may not correspond to physical phenomena , but may instead reflect that the formulation breaks down , thus necessitating a physical interpretation .( we will see several other examples of this in appendix [ app : systems ] . )when dealing with the einstein frame we will introduce a new regular unconstrained dynamical system on a compact state space in the einstein frame , which gives a complete description of the solution space in this frame .this will enable us to situate the entire solution space in the einstein frame in the state space of the jordan frame .the einstein frame formulation of gravity is based on the following conformal transformation of the jordan metric to the einstein frame metric ( see e.g. and ) : which thereby assumes that . 
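for the model at hand the conformal factor and the einstein frame scalar field potential can be written out explicitly. the sketch below is our own reconstruction under the usual conventions (8*pi*G = 1, phi = sqrt(3/2)*ln F, V = (F*R - f)/(2*F**2)); if the paper uses a different normalization the expressions change only by constant rescalings. it reproduces the familiar plateau potential behind figure [fig:potential].

import sympy as sp

R, M, phi = sp.symbols('R M phi', positive=True)
f = R + R**2/(6*M**2)
F = sp.diff(f, R)                              # conformal factor, assumed positive here

# standard Einstein-frame dictionary: phi = sqrt(3/2)*log(F), V = (F*R - f)/(2*F**2)
R_of_phi = sp.solve(sp.Eq(sp.sqrt(sp.Rational(3, 2))*sp.log(F), phi), R)[0]
V = sp.simplify(((F*R - f) / (2*F**2)).subs(R, R_of_phi))

plateau = sp.Rational(3, 4)*M**2*(1 - sp.exp(-sp.sqrt(sp.Rational(2, 3))*phi))**2
print(sp.simplify(V - plateau))                # -> 0: V is the usual plateau potential
print(sp.limit(V, phi, sp.oo))                 # 3*M**2/4: the inflationary plateau
print(sp.series(V, phi, 0, 3))                 # M**2*phi**2/2 + ...: quadratic minimum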
thus constitutes the boundary between the einstein and the jordan frame state spaces , of which it in general will be a subset , but not an invariant subset in the jordan frame , as shown in eq . .as a consequence , as we will see , there are solutions with that come from the region with and pass through the surface in the jordan state space , i.e. , some solutions in the einstein frame can be ( conformally ) extended in the jordan frame .introducing the action in the jordan frame transforms to an action with the einstein - hilbert form for a scalar field minimally coupled to gravity , where is the curvature scalar of the einstein frame metric .specializing to the present vacuum models leads to ( see e.g. ) where the above potential is depicted in figure [ fig : potential ] . for the minimally coupled scalar field in the einstein frame corresponding to the model , .,scaledwidth=60.0% ] in a flat rw geometry , the present models in the einstein frame yield the following evolution equations and the constraint ,\ ] ] where denotes the einstein frame proper time variable , while and are the einstein frame scale factor and hubble variable , respectively . as in the jordan frame , the einstein frame scale factor decouples , leaving a reduced closed system of first order evolution equations for determined by and , which , due to the constraint , yield a dynamical system describing a flow on a 2-dimensional state spaceonce the reduced system of first order equations has been solved , equation yields .let us now follow the ideas presented in for scalar fields and introduce new variables , which give a global dynamical system formulation on the reduced einstein state space : [ depvar ] where and a new time variable which takes into account the different asymptotic scales of the model , as described in .this leads to the following evolution equations [ 3ddynsys ] , \\\frac{d\sigma_{\phi}}{d\tilde{\tau } } & = -\tilde{x}\left[3(1 - \tilde{t})\tilde{x}\sigma_\phi + \tilde{t}f\right],\end{aligned}\ ] ] subjected to the constraint where note that , since(\tilde{t}f),\ ] ] is an invariant boundary subset in the einstein frame , but not in the jordan frame .this difference is due to the fact that the relation between the time variables in the two frames is singular at .the relatively compact einstein state space is defined by the cylinder with and with the region cut out from it .the state space can then be regularly extended to include the invariant boundary subsets , , and when , yielding the extended state space ( see figure [ fig : einsteinstatespace ] ) .note that constant surfaces in the state space correspond to constant values of , while the invariant boundaries are associated with the asymptotic limits ( ) , ( ) , and ( ) .the constraint can be globally solved by introducing which leads to the unconstrained 2-dimensional dynamical system [ 2ddynsys ] finally , the above changes of independent and dependent variables yield , which leads to a quadrature for once has been found .if one wants to express the results in terms of the einstein frame proper time variable , one also needs to integrate eq . 
.the dynamical system admits fixed points on , all located on the boundaries and : where is an integer .the einstein frame deceleration parameter , defined by , is given by it follows that and have , which for the minimally coupled scalar field interpretation corresponds to a massless state ( ) , while corresponds to a ( quasi ) de sitter state in the einstein frame ( associated with ) , since for the solution that originates from this fixed point asymptotically .the notation stands for power law , since the asymptotic behaviour of the solutions that originate from this fixed point in the einstein frame are described by the self - similar power - law solution associated with an exponential potential .the fixed points are hyperbolic sources with eigenvectors tangential to the invariant subsets and .the fixed point is a hyperbolic saddle with a single solution entering the state space .finally the fixed point has one negative eigenvalue and a zero eigenvalue , where the latter is associated with the center manifold of .the center manifold in turn corresponds to the inflationary attractor solution in both the einstein and jordan state spaces , which is the single solution that enters from .we now use center manifold analysis to establish that there is only a single solution that enters the state space from , and to obtain an approximation for this solution in the vicinity of in .linearizing the equations in the neighborhood of yields the following stable , , and center , , tangential subspaces , respectively : to investigate the center manifold associated with the tangent space , we adapt the variables to the location of and the tangent space and replace and with so that is located at .the center manifold can be obtained as the graph near , where ( fixed point condition ) and ( tangency condition ) . inserting these relationships into eq . and using as the independent variable leads to = 0 , \end{split}\ ] ] where and .as before , we can solve the equation approximately by representing as a formal power series truncated at some chosen order . inserting this into eq . andalgebraically solving for the coefficients leads to it follows that the single solution that originates from into ( the ` inflationary attractor solution ' ) is described by the approximate expansion in this case is monotonically decreasing , except when , which corresponds to the following monotonicity properties of in our einstein frame state space setting . from its evolution equationwe see that is monotonically increasing in when .since by corresponds to , we have and thus only represents an inflection point in the evolution of in ( there are no invariant sets at in ) .the monotonicity of in combination with the expression for , shows that solutions in either come from or from . combining this with the previous local analysis show that there are two one - parameter sets of solutions entering from and , respectively , a single solution entering from , and one from . at the boundary subset obtain from .this shows that is a periodic orbit with monotonically decreasing .furthermore , the monotonicity of shows that this is the limit cycle of all orbits in , thus constituting their -limit set .the solution space in the einstein frame is depicted in figure [ fig : einsteinstatespace ] .we now present some approximation schemes for solutions at late times close to , which corresponds to , i.e. 
, we will give approximations for the oscillatory phase at late times .we first consider an averaging technique used in .we thereby take the average with respect to of the right hand side of ( since while slowly approaches one ) , which leads to [ dynsysaverage ] it follows that where , where is some initial point for the trajectory .this approximation is valid for all solutions when approaches one , including the center manifold attractor solution , see figure [ fig : average ] . in rendall gave rigorous results for late time behavior for scalar field models with a potential with a minimum that asymptotically can be described by a potential .since this covers the present models , we can translate the results in , which yields the following asymptotic approximation : where is a constant and is proper time in the einstein frame ( with normalized to one ) .the relations given in eq .describe a parameterized curve in the global state space , which is plotted in figure [ fig : oscback ] ; note that the oscillatory approximation becomes increasingly accurate toward the future , reflecting that it describes the asymptotic evolution at late times .the einstein frame is characterized by the conformal factor which has to satisfy , and therefore the einstein frame state space is characterized by a boundary in the jordan frame state space . in the present casethe conformal factor is given in terms of our jordan frame state space variables by the curve intersects the invariant boundary at and , i.e. , at the fixed points and .the variable has a maximum along the curve at , which happens when .moreover , at it follows that which is everywhere positive except at , i.e. , at the fixed point on the boundary of the jordan frame state space where it vanishes ( see figure [ fig : jordan_einsteinbound ] ) .this implies that there are no orbits that pass through from the region into the state space where .the relation between the solutions originating from the fixed points in the jordan and einstein frame state spaces can be obtained by noting that by inserting the fixed point into the above expression , it follows that the asymptotic ( quasi ) de sitter state in the einstein frame state space is also the asymptotic ( quasi ) de sitter state in the jordan frame state space . 
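returning to the oscillatory regime just treated , the averaged system and rendall s asymptotics can be checked against a brute - force integration in the einstein frame . again this is a sketch under assumptions : the elided potential is taken to be the standard einstein - frame potential of the quadratic model , V(phi) = (3 M^2/4)(1 - exp(-sqrt(2/3) phi))^2 with 8 pi G = 1 , which is quadratic , V ~ M^2 phi^2/2 , near its minimum ; M , the initial data and the time window are arbitrary illustrative choices .

import numpy as np
from scipy.integrate import solve_ivp

M = 1.0
k = np.sqrt(2.0 / 3.0)

def V(phi):
    # assumed Einstein-frame potential of the quadratic f(R) model (formula elided above)
    return 0.75 * M**2 * (1.0 - np.exp(-k * phi))**2

def dV(phi):
    return 1.5 * M**2 * k * np.exp(-k * phi) * (1.0 - np.exp(-k * phi))

def rhs(t, y):
    phi, dphi, N = y                     # N = ln(a_E), Einstein-frame e-folds
    H = np.sqrt((0.5 * dphi**2 + V(phi)) / 3.0)   # Friedmann constraint, expanding branch
    return [dphi, -3.0 * H * dphi - dV(phi), H]

sol = solve_ivp(rhs, (0.0, 600.0 / M), [5.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

# late-time window with many oscillations about the (nearly quadratic) minimum
t = np.linspace(300.0 / M, 600.0 / M, 60000)
phi, dphi, N = sol.sol(t)
p = 0.5 * dphi**2 - V(phi)
rho = 0.5 * dphi**2 + V(phi)
print("e-folds accumulated by the end of the run:", round(float(sol.sol(600.0 / M)[2]), 1))
# ratio of time-averaged pressure to energy density over the uniformly sampled window
print("time-averaged equation of state <w>      :", round(float(p.mean() / rho.mean()), 3))

the averaged equation of state should come out close to zero ( dust - like ) , which is precisely what the averaged system and the quoted asymptotic approximation express .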
thus is asymptotically finite for the inflationary attractor solution in the einstein frame , while in the jordan frame .we also note that for and , while the above expression for is ill - defined for .however , since is hyperbolic and the dynamical system is analytic we can insert the solution of the linearized equations in the neighbourhood of to find a suitable approximation for .it turns out that near becomes a constant with the value determined by the ratio of the two arbitrary constants associated with the two eigenvectors .this describes a 1-parameter set of solutions passing through from negative to positive in the jordan frame state space .these results establish that and yield the solutions that originate from the fixed point in the jordan state space ( where the solution from corresponds to the solution that initially is tangential to in the jordan frame state space ) while the solutions that originate from correspond to a coordinate singularity associated with the breakdown of the einstein frame at .the solution that originates from divides the orbits that originate from in the jordan frame into two classes : ( a ) orbits , like itself and the single orbit from ( which is the same orbit as that coming from in the einstein frame ) , that throughout their evolution have ( i.e. their evolution in the jordan frame is entirely covered by that in the einstein frame ) , and ( b ) orbits that begin with in the jordan frame and then pass through and subsequently have throughout their remaining evolution .the last class , therefore , consists of solutions in the einstein frame that are past conformally extendible in the jordan frame ; see figure [ fig : jordan_einsteinbound ] where the shaded region corresponds to the region in the jordan state space that is conformal to the einstein frame ( cf .figure [ fig : lightcone_ss ] ; note that is easy to translate the present results to the original state space picture by means of the figures ) . if one is so inclined , one can obtain further details of the above nature by inserting approximate asymptotic solutions into eq . in appendix[ app : relations ] , which describes the transition between the jordan and einstein frame state space variables ., the shaded region in the figure above depicts the domain of the jordan state space that is conformal to the einstein frame , given by .note that the special space - dashed solution , originating from in the direction transverse to the invariant boundary , divides the solutions into those that enter from the region ( exemplified by the dotted and dash - dotted lines ) and those that are always in the region ( exemplified by the dashed and long - dashed lines and the inflationary attractor solution originating from ) ., scaledwidth=50.0% ]the global regular system we have presented for the jordan frame naturally conveys the global properties of the models at hand , as illustrated by figure [ fig : vacuumjf ]. nevertheless , it is by no means an optimal dynamical system for all aspects one might want to investigate : there are other dynamical systems that have complementary properties .this is already exemplified by the fact that our dynamical system for the einstein frame , among other things , simplifies the asymptotic analysis of the inflationary attractor solution originating from the fixed point and offers various approximation schemes for the oscillatory regime at late times , thereby complementing other heuristic jordan frame methods . 
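a complementary , purely numerical way to see the partial overlap of the two frames described above is to follow the sign of the conformal factor f'(R) along jordan - frame solutions . the sketch below reuses the assumed quadratic model ( so f'(R) = 1 + R/(3 M^2) ) and the same conventions and constraint as in the earlier jordan - frame sketch ; the two sets of initial data are arbitrary examples , the second chosen to start in the f'(R) < 0 region , and solutions that run into a singularity are simply followed as far as the integrator gets .

import numpy as np
from scipy.integrate import solve_ivp

M = 1.0

def rhs(t, y):
    H, R, S = y
    return [R / 6.0 - 2.0 * H**2, S, -3.0 * H * S - M**2 * R]

def Rdot0(H0, R0):
    return (R0**2 / 12.0 - 3.0 * M**2 * H0**2 - H0**2 * R0) / H0

def track_F(H0, R0, t_end=80.0 / M):
    sol = solve_ivp(rhs, (0.0, t_end), [H0, R0, Rdot0(H0, R0)],
                    rtol=1e-9, atol=1e-12, max_step=0.05 / M)
    F = 1.0 + sol.y[1] / (3.0 * M**2)    # conformal factor f'(R) of the assumed model
    return F.min(), F.max(), int(np.sum(np.diff(np.sign(F)) != 0))

# attractor-like data versus data that starts at f'(R) < 0
for H0, R0 in [(5.0 * M, 300.0 * M**2), (0.2 * M, -6.0 * M**2)]:
    print("(H0, R0) =", (H0, R0), "-> min F, max F, sign changes:", track_F(H0, R0))

counting the sign changes of f'(R) along each orbit is the numerical counterpart of the statement that the boundary of the einstein frame region is not an invariant subset in the jordan state space .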
another useful system is discussed in appendix [ app : systems ] ( where its close relationship to the works in is also commented on ) .it is based on a variable transformation from to the variables , defined by and the time variable , which leads to the following simple regular system of unconstrained equations : [ zqsys ] as discussed in appendix [ app : systems ] , the variable transformation breaks down at , which for and are located at infinity , i.e. , and are unbounded .straightforward compactifications such as a poincar compactification of and are inappropriate , since such compactifications result in an erroneous state space topology , which may result in wrong conclusions about the properties of the solutions .for example , such a compactification ruins , or at least complicates , a treatment of the oscillatory regime at late times ._ this illustrates that it is necessary to take into account the global topological properties of the physical state space in order to obtain a correct description of the solution space and its properties _ , which illustrates a non - local aspect ( apart from fixed points reflecting coordinate singularities ) concerning the relationship between dynamical systems formulations and ( iv ) : physical solution space interpretation . nevertheless , the above system has local advantages .the system admits two fixed points , both located on the invariant boundary ( ) : [ fp_qz ] the fixed point is a hyperbolic source and corresponds to in our global system , while is non - hyperbolic with one negative eigenvalue and one zero eigenvalue .this fixed point corresponds to the fixed point in the global system , obtained after a blow up .a center manifold analysis associated with the zero eigenvalue of yields the following approximation for the inflationary attractor solution ( see appendix [ app : systems ] ) : .\ ] ] note that obtaining an approximation for the inflationary attractor solution that comes from when is considerably easier in these variables than for the , variables , and the expansion is given as since , which might be regarded as preferable .the above brings the inflationary attractor solution into focus .usually the inflationary regime is understood in terms of slow - roll approximations . in the jordan framethis approximation can be found in and reads . since , this leads to , which gives , i.e. , it just gives the leading order term in the center manifold expansion for the inflationary attractor solution in the variables .a comparison with the global variables shows that to leading order .the above illustrates that not only are the variables a useful complement to the global variables , since they more straightforwardly give approximations for the inflationary regime , but they are also intimately linked to the hubble slow - roll approach .next we consider the einstein frame and the usual slow - roll approximation . in this setting the slow - roll approximation is obtained by inserting into which for the present scalar field potential gives expressed in terms of the variables and , this results in . \ ] ] in the neighborhood of , represented by , this yields which is the tangency condition for the center submanifold of , given by the leading order expression in .the slow - roll approximation is therefore just an approximation for the center manifold in the vicinity of in our einstein frame state space formulation . in this contextit should be pointed out that we can of course use variable relationships , given in eq . 
in appendix[ app : relations ] , to translate the various approximations from the einstein to the jordan frames and vice versa , and their series expansions can be improved by taking pad approximants , as discussed in e.g. .we end this discussion by emphasizing once more that the main purpose of the presently studied models was to specifically illustrate some general aspects of cosmology with a simple example , namely ( a ) the ingredients ( i ) ( iv ) in the introduction , and ( b ) some dynamical systems methods with a broad range of applicability , even though the particulars have been tailored to the specific properties of the rw models .although quite special , these models also capture very clearly some central issues in -gravity beyond the above methodological aspects .for example , as stated in the introduction , one way of classifying -gravity models is according to if for all or not . in the latter case the correspondence between the original model and its einstein frame formulation , or its brans - dicke ( ) version ( see e.g. for a description of this correspondence ) , only holds locally for the range of where . for such models the evolution in the jordan frame of some solutionsare incompletely described in these formulations , i.e. , a local formulation correspondence does not entail a global correspondence , as is clearly illustrated in figure [ fig : jordan_einsteinbound ] . in this context , note that if was an invariant subset in the jordan frame , then the part of the jordan state space would constitute an invariant subset . in this case, one could perhaps argue that the solutions associated with this part of the state space could be discarded on some claimed physical grounds , thus leading to a global physical correspondence between the solution spaces of the different frame formulations .however , as we have shown , is not in general an invariant subset , nor is therefore .thus if one wants to argue that a global physical correspondence exists for the different formulations , one is forced to come up with some arguments for why part of some solutions in the jordan frame should be discarded ( note that the existence of such solutions is ensured by that is not an invariant subset ; again , see figure [ fig : jordan_einsteinbound ] as an illustrative example ) .there are , of course , some things that the present models can not address . in particularthis holds for models where the condition is broken ( leading to e.g. tachyonic instabilities , see e.g. and references therein ) .the change of sign of is particularly problematic from a mathematical point of view since the constraint becomes degenerate when , and , moreover , the causal properties of the field equations change when changes sign ( hence the tachyonic instability ) .there has been some work in cosmology to extend solutions when the equations are ill - defined , notably . however , we here point out that the existence of state space boundaries mathematically resemble sonic shock waves for fluids .it is therefore worthwhile to note that such problems have been dealt with in e.g. 
the context of spherically symmetric self - similar perfect fluid models , where it was shown how to extend solutions through sonic shock wave surfaces .incidentally , these models also provide examples where it is useful to cover the state space with several coordinate patches in order to exploit special structures in different parts of the state space , a problem one will inevitably will have to deal with when it comes to most cosmological models .a. a. is funded by the fct grant sfrh / bpd/85194/2012 , and supported by the project ptdc / mat - ana/1275/2014 , and camgsd , instituto superior tcnico by fct / portugal through uid / mat/04459/2013 .s. c. was supported by the fundao para a cincia e tecnologia through project if/00250/2013 .s. c. acknowledges financial support provided under the european union s h2020 erc consolidator grant `` matter and strong - field gravity : new frontiers in einstein s theory '' grant agreement no .magrath646597 , and under the h2020-msca - rise-2015 grant no .strongrhep-690904 . c. u. thank the center for mathematical analysis , geometry and dynamical systems , instituto superior tcnico , lisbon , for kind hospitality .to translate between our global dynamical systems formulations for the jordan and einstein frame state spaces , we need the explicit relationship between our global dynamical systems variables , which are as follows : [ joreinvar ] 2 h & = m()(1 - _ ) f^-1/2 , & 2&= ( ) f^-3/2g_+ , + & = ( ) _ f^-3/2 , & x & = g_-g_+^-1 , + r & = ( ) f^-1 , & s & = -2f^1/2g_+^-1 , and 2 & = ( ) h_+ f^-3/2 , & & = 4()f^3/2h_+^-1 , + & = ( ) h_-f^-3/2 , & _ & = h_-h_+^-1 , + & = , & & = -2sf^1/2h_+^-1 , where we have defined and where we recall that as expected it follows that the variable transformation from the future light cone jordan state space to the einstein state space breaks down at the boundary , since the jacobian determinant for the variable transformation is given by .consider the dynamical systems formulation in the jordan frame based on the variable transformation from to the variables , defined in and which obey the evolution equation given in , while the constraint is used to solve for .this system is manifestly invariant under the discrete symmetry , and it is remarkably simple .so why not use this system of equations instead of the previous ones ? and are closely related to those in and .indeed , they are affinely related to and in , while is proportional to and is affinely related to in . ]firstly , note that the jacobian determinant of the variable transformation to is given by , i.e. , the variable transformation breaks down at , which for is located at infinity .further insight is obtained by using our definitions to express and in terms of and : ^ 2 , \quad q=1 + 2\left(\frac{t}{1-t}\right)\!\left(\frac{1+\tan^2(\theta/2)}{\tan^3(\theta/2)}\right).\ ] ] as can be seen , both and diverges when , i.e. , when , which is where the variables break down .furthermore , all solutions pass through ( where blows up ) infinitely many times during the oscillating era at late times .our jordan state space formulation in the main text has also the advantage of clearly showing that is not associated with any spacetime singularity , but instead reflects the fact that becomes zero during the cosmic evolution .secondly , all the present variables are unbounded , although is positive . 
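the center - manifold expansions used at several points above , and again for the non - hyperbolic fixed point treated later in this appendix , are all obtained by the same formal power - series procedure : write the manifold as a graph , impose the invariance ( tangency ) condition , and solve order by order . since the paper s own equations are elided in this excerpt , the sketch below runs the identical procedure on a small toy planar system ( the system and its coefficients are purely illustrative , not the paper s equations ) so that the mechanics are explicit .

import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a2:7')                       # series coefficients a2 .. a6
h = sum(a[i] * x**(i + 2) for i in range(5))

# toy system with a one-dimensional center direction at the origin:
#   x' = x*y ,   y' = -y - x**2
# invariance condition for the center manifold y = h(x):  h'(x) * (x*h) = -h - x**2
residual = sp.expand(sp.diff(h, x) * (x * h) + h + x**2)
eqs = [residual.coeff(x, n) for n in range(2, 7)]
sol = sp.solve(eqs, list(a), dict=True)[0]
print("center manifold  y =", sp.expand(h.subs(sol)))         # -x**2 - 2*x**4 - 12*x**6
print("reduced flow    x' =", sp.expand((x * h).subs(sol)))   # -x**3 - 2*x**5 - 12*x**7

in this toy case the reduced flow starts at cubic order with a negative coefficient , so the non - hyperbolic point is nevertheless asymptotically stable along the center direction ; the same kind of reduced equation is what fixes the asymptotics of the inflationary attractor solution in the text .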
in addition, it is possible to use the natural extension of the state space that includes the invariant subset boundary , since all interior orbits originate from fixed points on this boundary , which corresponds to .one can then use that is an invariant boundary _ and _ that is non - negative to produce a new bounded variable , defined by , and change the time variable appropriately so that the right hand sides of the equations become polynomial in the dependent variables .a similar procedure is not possible with .of course one can replace with a bounded variable in a number of ways , but there does not seem to exist a physical structure which one can tie to such a compactification , except by essentially going back to our original bounded system .for example , a poincar compactification of would enforce an erroneous topology on the state space , which would have resulted in wrong conclusions about the properties of the solutions , since such a compactification would compromise a treatment of the oscillatory regime at late times .although the dynamical system is inappropriate for global considerations , we have seen that it still has advantages , as illustrated by its simplicity and desirable local fixed point properties .the system has two fixed points located on the boundary , given in .the fixed point is a hyperbolic , while is non - hyperbolic with one negative eigenvalue and one zero eigenvalues . to deal with the zero eigenvalue of , we apply center manifold theory .the negative eigenvalue corresponds to a stable subspace given by the invariant subset , which thereby coincides with the tangential stable subset , while the center manifold has a tangential subspace , described by : which implies that in a neighborhood of the center manifold is described by the graph . from itfollows that obeys the first order differential equation which is solved approximately for by a formal series expansion , which results in ,\ ] ] as .let us now consider an example of another dynamical systems treatment of flat rw cosmology which can be found in , where the following variables ( when restricted to the present vacuum case ) are defined : viewing this as a variable transformation from to leads to the jacobian determinant the variables thereby break down at , , and , i.e. , at the boundary of the state space of the einstein frame .it brings parts of the future null infinity of the state space to finite values of the variables , but other regions , such as the generic one close to the minkowski fixed point , are now shifted to infinity in these unbounded variables . in the present caseit follows that the auxiliary quantity in is given by , which when using a time variable leads to the evolution equations subjected to the constraint which can be solved globally for one of the variables .it is clear that not only has the system an unbounded incomplete state space , but the equations are also irregular as the right hand side blows up on the line , .to conclude , the system is inappropriate for a global analysis of the problem , but it is still possible to do some local analysis . to accomplish this ,let us solve for one of the variables , e.g. 
, ( which variable we solve for does not change any conclusions ) , and look for fixed points .since appears in the denominator ( writing the right hand sides of and with as a common denominator shows that it is impossible to get rid of this denominator ) , the equations are not defined for and hence fixed points must have .there are two fixed points : : and : .the vacuum fixed point in is not defined for the present models , since it is associated with numerators and denominators in the equations that simultaneously are zero .furthermore , in only exists when it coincides with , for which .note that the definitions implies that corresponds to the asymptotic limit and .in fact which implies and .the fixed point on the other hand implies that and .this is because which implies that and hence .thus both fixed points correspond to part of future null infinity of the state space light cone .note that is associated with the boundary of the einstein frame .the fixed point corresponds to and is a non - hyperbolic fixed point with one negative value and one zero eigenvalue with an associated center manifold , corresponding to the fixed point for and .it therefore gives similar results , but the more complicated dynamical system leads to unnecessary technical complications .the situation for when , is , however , worse . in this case is a hyperbolic saddle , which is associated with a coordinate singularity due to the break down of at . because of this breakdown , yields a solution that comes from a particular direction from future null infinity of the physical state , namely , thereby missing the one - parameter set of solutions that originates from there into the physical state space .thus a fixed point analysis in these variables does not show that there actually is a one - parameter set of solutions that originate from the limit and , which covers all solutions except the single solution from .next we comment on a previous attempt to provide a compact state space , given in . in this workit was assumed that and , where the latter follows from the first condition in our case .the variables the authors introduced were given by where ^{1/2}$ ] .for the vacuum case one can solve for and to obtain a system of evolution equations for and .the authors also introduce an auxiliary quantity , which for the present case can be written as explicitly inserting this into the equations in ( which we refrain from giving because of their considerable complexity ) shows that they have in the denominator , which means that the equations are non - regular and break down at and at .this result , in combination with the fact that the variables only compactify the part of the state space , and that is not an invariant subset on the physical state space ( except for at the minkowski fixed point ) , unfortunately leads to complications and some erroneous conclusions ( due to a breakdown of the time variable in ) .for example , as we have proven , all solutions pass through infinitely many times , in contrast to what is claimed in .this example illustrates that compactifications must respect the structure of the state space ; if one chooses to compactify only part of it there will be coordinate singularities associated with the boundary one chooses for the compactification , unless it is associated with an invariant subset in the original jordan state space for .s. capozziello and m. 
francaviglia , _ extended theories of gravity and their cosmological and astrophysical applications _ , _ gen ._ * 40 * ( 2008 ) 357 [ arxiv:0706.1146 ] .t. p. sotiriou and v. faraoni , _f(r ) theories of gravity _ , _ rev .* 82 * ( 2010 ) 451 [ arxiv:0805.1726 ] .s. i. nojiri and s. d. odintsov , _ unified cosmic history in modified gravity : from f(r ) theory to lorentz non - invariant models_. _ phys .* 505 * ( 2011 ) 59 [ arxiv:1011.0544 ] .s. capozziello , s. carloni and a. troisi , _ quintessence without scalar fields_. _ recent res .astrophys . _* 1 * ( 2003 ) 625 [ astro - ph/0303041 ] .s. i. nojiri and s. d. odintsov , _ introduction to modified gravity and gravitational alternative for dark energy _ ,methods mod .phys . _ * 4 * ( 2007 ) 115 [ hep - th/0601213 ] .p. avelino et al . , _ unveiling the dynamics of the universe _ , _ symmetry _ ( 2016 ) [ arxiv:1607.02979 ] .h. j. schmidt , _ fourth order gravity : equations , history , and applications to cosmology _ , _ int .methods mod .* 4 * ( 2007 ) 209 [ gr - qc/0602017 ] .j. d. barrow and a. c. ottewill , _ the stability of general relativistic cosmological theory _ , _ j. phys .a : math . gen . _ * 16 * ( 1983 ) 2757 .a. a. starobinsky , _ a new type of isotropic cosmological models without singularity _ , _ phys .b _ * 91(1 ) * ( 1980 ) 99 .m. b. mijic , m. s. morris and w. m. suen , _ the cosmology : inflation without a phase transition _ , _ phys .d _ * 34 * ( 1986 ) 2934 .k. i. maeda , _ inflation as a transient attractor in cosmology _ , _ phys .* 37 * ( 1988 ) 858 .h. motohashi , _ consistency relation for inflation _ , _ phys .* 91 * ( 2015 ) 064016 [ arxiv:1411.2972 ] . c. q. geng , c. c. lee and and s. lin , _ constraints on the combined models with inflation and viable dark energy _ , _ astrophys .space sci . _* 360 * ( 2015 ) 1 [ arxiv:1506.03253 ] .planck collaboration : p. a. r. ade et al . , _planck 2015 results .constraints on inflation _ , ( 2015 ) [ arxiv:1502.02114 ] a. alho and c. uggla , _ global dynamics and inflationary center manifold and slow - roll approximants _ , _ j. math .phys . _ * 56 * ( 2015 ) 012502 [ arxiv:1406.0438 ] .a. alho , j. hell and c. uggla , _ global dynamics and asymptotics for monomial scalar field potentials and perfect fluids _ , _ class .quantum grav . _* 32 * ( 2015 ) 145005 [ arxiv:1503.06994 ] .j. miritzis , _ dynamical system approach to frw models in higher - order gravity theories _ , _ j. math .phys . _ * 44 * ( 2003 ) 3900 [ gr - qc/0305062 ] .j. miritzis , _ the recollapse problem of closed friedmann - robertson - walker models in higher - order gravity theories_. _ j. math .phys . _ * 46 * ( 2005 ) 082502 [ gr - qc/0505139 ] .n. tamanini , _ dynamical systems in dark energy models _ , phd thesis , university college , london , ( 2014 ) .d. sloan , _ as an attractor _ , ( 2016 ) [ arxiv:1602.02113 ] , j. d. barrow and s. cotsakis , _ inflation and the conformal structure of higher - order gravity theories _ _ phys .b _ * 214 * , ( 1988 ) 515 . j. d. barrow , _ the premature recollapse problem in closed inflationary universes _ , _ nucl .b _ * 296 * , ( 1988 ) 697 . c. uggla , _ global cosmological dynamics for the scalar field representation of the modified chaplygin gas _ , _ phys . rev .* 88 * ( 2013 ) 064040 [ arxiv:1308.5838 ] .a. alho and c. uggla , _ scalar field deformations of cosmology _ , _ phys .d _ * 92 * ( 2015 ) 103502 [ arxiv:1505.06903 ] .j. d. barrow and s. 
hervik , _ evolution of universes in quadratic theories of gravity _ , _ phys .* 74 * ( 2006 ) 124017 [ gr - qc/0610013 ] .a. d. rendall , _ late - time oscillatory behaviour for self - gravitating scalar fields _ , _ class .quantum grav . _* 24 * ( 2007 ) 667 [ gr - qc/0611088 ] .s. carloni , _ a new approach to the analysis of the phase space of -gravity _ , _ jcap _ * 1509 * ( 2015 ) 013 [ arxiv:1505.06015 ] .m. goliath , u. s.nilsson and c. uggla , _timelike selfsimilar spherically symmetric perfect fluid models _ , _ class .quantum grav . _ * 15 * ( 1998 ) 2841 [ gr - qc/9811065 ] .b. j. carr , a. a. coley , m. goliath , u. s.nilsson and c. uggla , _ critical phenomena and a new class of selfsimilar spherically symmetric perfect fluid solutions _ , _ phys .* 61 * ( 2000 ) 081502 [ gr - qc/9901031 ] .b. j. carr , a. a. coley , m. goliath , u. s.nilsson and c. uggla , _ the state space and physical interpretation of self - similar spherically symmetric perfect - fluid models _ , _ class .quantum grav . _ * 18 * ( 2001 ) 303 [ gr - qc/9902070 ] .
|
we discuss dynamical systems approaches and methods applied to flat robertson - walker models in -gravity . we argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables . this is shown explicitly by an illustrative example , , , for which we introduce new regular dynamical systems on global compactly extended state spaces for the jordan and einstein frames . this example also allows us to illustrate several local and global dynamical systems techniques involving , e.g. , blow ups of nilpotent fixed points , center manifold analysis , averaging , and use of monotone functions . as a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations , we obtain pictures of the entire solution spaces in both the jordan and the einstein frames . this shows , e.g. , that due to the domain of the conformal transformation between the jordan and einstein frames , not all the solutions in the jordan frame are completely contained in the einstein frame . we also make comparisons with previous dynamical systems approaches to cosmology and discuss their advantages and disadvantages .
|
the realization of long - distance quantum communication schemes should resort to the distribution of entangled states between distant locations . although photons are the optimal quantum information carriers in long - distance quantum communication as the interaction between them and environment is weaker than others , the polarization degree of freedom of photons is incident to the noise in a quantum channel .noise will degrade the entanglement of a photon pair or even turn it into a mixed state .if the destructive effect of the noise is low , the two parties in quantum communication , say alice and bob , can first exploit entanglement purification to improve the entanglement of the quantum systems , and then achieve the goal of quantum communication with maximally entangled states .entanglement purification becomes a very important element in quantum repeater for long - distance quantum communication . in 1996 , bennett _ et al . _ proposed an entanglement purification protocol ( epp ) based on quantum controlled - not ( cnot ) logic operations , and subsequently it was improved by deutsch _ et al ._ using similar logic operations . in 2001 proposed an epp with linear optical elements . in 2002 ,simon and pan improved their protocol .they considered a currently available source , a parametric down - conversion ( pdc ) source , to prepare entangled photon pairs , and they first used spatial entanglement to purify polarization entanglement .both of these protocols should resort to sophisticated single - photon detectors , which is not a simple task in linear optics . in 2008 ,an epp based on nondestructive quantum nondemolition detectors was proposed . by far , all existing epps can not obtain maximally entangled states .they only improve the fidelity of an ensemble in a mixed entangled state . in order to obtain some entangled states with higher fidelity, they have to consume more and more less - entanglement ones. theoretically speaking , it is impossible to get a pair of photons in a maximally entangled state with conventional epps .recently , the applications of hyperentangled states have been studied by some groups .a state of being simultaneously entangled in multiple degrees of freedom is called `` a hyperentangled state '' .the most important use of hyperentanglements is in a complete deterministic local bell - state analysis . in 2008 , with the help of the hyperentangled state in both polarization and orbit angular momentum , barreiro _ beat the channel capacity limit for linear photonic superdense coding . with a type - i and type - ii barium borate ( bbo ) crystal , photon pairs produced by spontaneous parametric down - conversion ( spdc )can be in the hyperentangled state in polarization and spatial degrees of freedom polarization , spatial , energy , and time degrees of freedom , polarization and frequency degrees of freedom ; and so on . in 2009 ,vallone _ et ._ also reported their experiment with a six - qubit hyperentangled state in three degrees of freedom .if we substitute the spdc source in ref. for the pdc source in ref. , we can produce the hyperentanglement with the following form : here , ( ) represents the horizontal ( vertical ) photon polarization , represents the frequency of the signal ( idler ) photon , and represents the spatial mode of photons . 
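the structure just described , one bell - type pair in each of the three degrees of freedom , can be made concrete with a small numerical check . the sketch below builds such a six - qubit hyperentangled vector and verifies that alice s half carries one ebit per degree of freedom ; the particular signs and pairings chosen for the elided state ( a phi+ pair in polarization and in the spatial modes , a psi+ pair in frequency ) are assumptions , and any other maximally entangled choice gives the same counting .

import numpy as np

def bell(kind):
    # two-qubit Bell states as 4-vectors in the basis |00>, |01>, |10>, |11>
    v = np.zeros(4)
    if kind == "phi+":
        v[0] = v[3] = 1.0
    else:                                   # "psi+"
        v[1] = v[2] = 1.0
    return v / np.sqrt(2.0)

pol, freq, path = bell("phi+"), bell("psi+"), bell("phi+")
psi = np.kron(np.kron(pol, freq), path)     # qubit order: pA pB fA fB sA sB

# reduced state of Alice's three qubits (pA, fA, sA)
amp = psi.reshape([2] * 6).transpose(0, 2, 4, 1, 3, 5).reshape(8, 8)  # rows Alice, cols Bob
rho_A = amp @ amp.conj().T
evals = np.linalg.eigvalsh(rho_A)
S = -sum(l * np.log2(l) for l in evals if l > 1e-12)
print("entropy of Alice's reduced state:", round(S, 6), "ebits (one per degree of freedom)")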
in this article, we will present a deterministic epp with hyperentangled states in the form of eq.([hyperentanglement ] ) .the two parties in quantum communication , say alice and bob , can get a maximally entangled photon pair from each hyperentangled state in this epp , which is , in essence , different from all the existing conventional epps .the deterministic feature of our protocol will improve the efficiency of long - distance quantum communication exponentially as the conventional epps will consume entangled quantum resources exponentially for obtaining some maximally entangled states .also , this epp can accomplish the complete nonlocal bell - state analysis .the article is organized as follows . in sec.iia , we describe the principle of the deterministic entanglement purification for bit - flip errors with spatial entanglement . in sec.ii b , the purification for phase - flip errorsis discussed . in sec.iii, we discuss the method of nonlocal bell - state analysis with hyperentanglement . in sec.iv, we analyze the essence of entanglement purification . in sec.v, we discuss the present epp in a practical transmission . a discussion and a summary are given in sec.vi .it is well known that the most important application of epps is in constructing quantum repeaters for long - distance quantum communication in a noisy channel . in order to connect the adjacent nodes, the two parties in quantum communication should first transmit their photons in a noisy channel and then connect them with quantum entanglement swapping .usually , the channel noise will degrade the entanglement of photon pairs .also , the imperfect operations will disturb the entanglement of quantum systems .now , let us start the explanation of our entanglement purification scheme by discussing an ordinary example . during a quantum - signal transmission ,polarization degree of freedom suffers from the channel noise as both the spatial degree of freedom and the frequency degree of freedom are more stable than polarization .the previous experiments showed that the polarization entanglement is quite unsuitable for transmission over distances of more than a few kilometers in an optical fiber .for example , naik _ et al ._ demonstrated the ekert protocol over only a few meters .also , they observed the quantum bit error rate ( qber ) increase to 33% in the experimental implementation of the six - state protocol . for frequency coding , for example , the besancon group performed a key distribution over a 20-km single - mode optical - fiber spool .they recorded a qber contribution of approximately 4% , and estimated that 2% could be attributed to the transmission of the central frequency by the fabry - perot cavity .the experiment by min _ for phase - noise measurements showed that in a realistic environment , the phase in long fibers ( several tens of km ) remains stable , which is an acceptable level for time on the order of 100 .the phase stabilization is relevant for the quantum repeaters in installed optical fiber networks . 
in simon s protocol , they also performed an epp using spatial entanglement to purify polarization entanglement based on the good mode overlap on the pbs and phase stability that were achieved in previous experiments .in fact , that frequency and spatial entanglement absolutely do not suffer from the noise is only a hypothesis and is unpractical .here we only use it to show the basic principle for our entanglement purification process .we also will discuss the entanglement purification under a realistic environment .moreover , other degrees of freedom , such as time - bin , which is more robust than polarization , can also be used to implement this scheme . in the discussion section of this article, we will show that the entanglement purification essentially performs entanglement transformation between different degrees of freedom , that is , transforms robust entanglement in channel transmissions ( frequency and spatial ) into easily manipulatable entanglements ( polarization ) . under the hypothesis mentioned previously ,the entanglement purification in the present scheme is divided into two steps , that is , purification for bit - flip errors and that for phase - flip errors .we discuss them in detail in this section as follows .after the transmission , the hyperentangled state of eq.([hyperentanglement ] ) will become a mixed one in polarization : here and is the mixed part of eq.([hyperentanglement ] ) in polarization . and are the four bell states for an entangled photon pair : after the transmission , the initial state becomes with one photon belonging to alice and the other belonging to bob . here and .we also let and . and , respectively , and compare the results via classical communication . if their results are different , they need to perform a bit - flipping operation to correct this error .otherwise , there are no bit - flip errors .pbs : polarizing beam splitter.,width=302 ] there are admixtures of the unwanted states and .we note that the state becoming is a bit - flip error , becoming is a phase - flip error , and both a bit - flip error and a phase - flip error take place when becomes . from eqs.([polarizationentanglement ] ) and ( [ totalentanglement ] ) , the original state can be seen as a probabilistic mixture of four pure states : with a probability of the photon pair in the state , with a probability of the pair in the state of and in .the whole task of purification is to correct the bit - flip and the phase - flip errors .so this scheme includes two steps , one for bit - flip error correction and the other for phase - flip error correction .the principle of our scheme for bit - flip error correction is shown in fig. 1 , where represents a cross - kerr nonlinear medium which will make the coherent state pick up a phase shift when it and one photon couple with the medium . we now consider the combinations and .let us first discuss the state and obtain the same phase shift of , which can be detected by alice and bob with an homodyne measurement .finally , coupled by the two polarizing beam splitters ( pbss ) , they will emit from .the whole state becomes .following by the same principle , we can also get the state from if both alice and bob get no phase shifts . in the case of , it never leads to the same phase - shift case .if alice gets the phase shift of and bob gets no phase shift , it means that the proceeding state is changed to . 
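two quick numerical asides on the step just described . first , the classification of the admixtures as bit - flip , phase - flip , or combined errors is just the statement that the three unwanted bell states are obtained from the wanted one by a sigma_x , a sigma_z , or both on one photon . second , the reason a cross - kerr phase shift theta on a coherent probe is measurable by homodyne detection is that the shifted and unshifted probes become nearly orthogonal once |alpha|^2 (1 - cos theta) is large ; the probe amplitude and phase used below are arbitrary illustrative numbers , and this is an in - principle statement ( see the caveats on kerr nonlinearities in the discussion section ) .

import numpy as np

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)      # |Phi+>, the wanted state
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)

same = lambda u, v: np.isclose(abs(np.vdot(u, v)), 1.0)   # equal up to a global phase
print("bit flip   (X on one photon):", same(np.kron(X, I2) @ phi_p, psi_p))
print("phase flip (Z on one photon):", same(np.kron(Z, I2) @ phi_p, phi_m))
print("both errors (X.Z)           :", same(np.kron(X @ Z, I2) @ phi_p, psi_m))

# homodyne distinguishability of the cross-Kerr probe:
# |<alpha | alpha e^{i theta}>| = exp(-|alpha|^2 (1 - cos theta))
alpha, theta = 300.0, 0.01
print("overlap of shifted/unshifted probe:", np.exp(-abs(alpha)**2 * (1 - np.cos(theta))))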
the photon which belongs to alice is in the mode of , and which belongs to bob is in .there is another case for alice and bob .that is , alice gets no phase shift and bob gets .the corresponding state is also , but with the spatial modes of and . by applying our purification procedure , alice and bobcan easily check the bit - flip error as they get different phase shifts with their homodyne measurements on their coherent beams .the spatial modes are also different , corresponding to the different collapsed states , but which can be completely determined .therefore , by classical communication , if a bit - flip error occurs , alice and bob will get rid of the bit - flip error by performing a bit - flip operation .next we show our protocol works for the other cases .the case of will get the same result with latexmath:[|\psi^{-}\rangle|\psi_{f}\rangle so far , we have been talking about a bit - flip error for the mixed state in polarization . by correcting this error ,the initial state in polarization becomes as we know , a phase - flip error can not be purified directly , but it can be transformed into a bit - flip error by hadamard ( h ) operations . in an optical system, it can be finished by a -wave plate ( qwp ) . by performing the h operations on the two photons with two qwps , eq.(6 )evolves : that is , the initial state becomes it is interesting to find that the entanglement in frequency was not affected during the procedure discussed previously , but the spatial entanglement is consumed for correcting bit - flip errors .now we focus on the second step of our epp : correcting the phase - flip error . in fig . 2 , two photons in the from of eq.([statestep2 ] ) belong to alice and bob , respectively .two polarization - independent wavelength division multiplexers ( wdms ) are used to guide photons to different paths , according to their frequencies .for example , in alice s laboratory , it leads the photon in to the mode and the photon in to the mode .however , in bob s laboratory , it leads the photon in to the mode and the photon in to the mode . from eq.([polarizationstatestep2 ] ) , it follows that the original state of the pairs can be seen as a probabilistic mixture of two pure states : with a probability of the photon pair is in the state and with a probability of the pair is in the state .it is obvious that leads to the same phase shift for both and 0. they will be either in the mode with the state of or in the mode with the state of .this is the maximally entangled state we need .however , for , it never leads to the same phase shift .alice will get while bob will get 0 with the state in the mode or alice gets 0 and bob gets with the state in the mode .we can perform a bit - flip operation to get rid of the errors .finally , we will get the entangled state or with a deterministic spatial mode . 
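the statement that the wave - plate ( hadamard ) operations turn phase - flip errors into bit - flip errors while leaving the wanted state alone is easy to verify directly ; the sketch below only tracks the two polarization qubits and idealizes the wave plate as an exact hadamard .

import numpy as np

Hd = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)          # wave plate acting as a Hadamard
HH = np.kron(Hd, Hd)
bell = {"Phi+": [1, 0, 0, 1], "Phi-": [1, 0, 0, -1], "Psi+": [0, 1, 1, 0], "Psi-": [0, 1, -1, 0]}
bell = {k: np.array(v) / np.sqrt(2) for k, v in bell.items()}

def name_of(state):
    # label a state by the Bell vector it overlaps with (up to a global phase)
    return max(bell, key=lambda k: abs(np.vdot(bell[k], state)))

for k in bell:
    print(k, "-> H(x)H ->", name_of(HH @ bell[k]))
# Phi+ -> Phi+ (no error stays no error), Phi- -> Psi+ (a phase flip becomes a bit flip),
# Psi+ -> Phi-, and Psi- -> Psi- up to a sign.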
with quantum frequency upconversion, we can erase distinguishability for frequency .now let us discuss the relationship between this entanglement purification protocol and a nonlocal bell - state analysis ( nbsa ) .a universal conclusion is that a completely local bell - state analysis with linear optics is not possible and one can get an optimal success probability of only .several works have shown that with additional degrees of freedom such as timing and momentum it is possible to distinguish the four bell states locally .however , compared with local bsa , nbsa can not be completed with the collective operations and they can only resort to local operation and classical communication ( locc ) . herewe will show that with hyperentangled states and quantum nondemolition measurement ( qnd ) , we can also perform a complete nbsa .the difference is that we have to need another two degrees of freedom in nbsa , but only one is needed in local bsa .the initial state is a hyperentangled state with the form or .the whole protocol is the same as our epp discussed previously .the first step for nbsa is shown in fig .if alice and bob get the same phase shift , that is , if both get 0 or , they can decide that the photon pair should be in one of the two states .subsequently , they add a h operation on each photon and then make the second check of the phase shifts ( shown in fig .2 ) . if the phase shifts are still the same , the state should be ; otherwise , it is . can be distinguished in the same way . in the first step ,if their phase shifts are different , it must be one of the two states .alice and bob perform a bit - flip operation on one of the photons and a h operation on each photon , which will complete the transformation ; otherwise , it is .in the previous works on entanglement purification , another entanglement of degree of freedom , such as spatial entanglement , has been used to purify the polarization entanglement of photon pairs . in simon s protocol , the spatial entanglement can be used to purify a bit - flip error . after consuming the resource of spatial entanglement , the phase - flip error has to be purified with the conventional method to repeat the same procedure . with another degree of freedom of photons, we can accomplish a deterministic entanglement purification .let us now discuss why our protocol can purify the mixed state completely . from eq.([polarizationentanglement ] ) , we know that there are two kinds of errors in the mixed state , that is , one is a bit - flip error and the other is a phase - flip error .the conventional epps are used to purify a bit - flip error .the phase - flip error can not be purified directly , but can be transformed into a bit - flip error . for the bit - flip purification , alice and bob can check whether one of the pairs has a bit - flip error . for instance , in ref . , the error corresponds to the cross - combinations of . in this case, alice and bob can not pick up these corrupt states , and always keep them in the quantum systems that remain for a next purification .that is , a bit - flip error can not be purified completely .neither can a phase - flip error .they can not make the remaining ensemble reach an indeed pure state . in simon s protocol , they revealed that another kind of entanglement can also be used to purify the polarization entanglement state . 
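the two - round logic just described can be abstracted into a tiny simulation : treat each qnd round as an ideal parity measurement on the polarization pair , apply the bit - flip correction and the wave - plate ( hadamard ) operations in between , and read off which bell state was sent . this ignores the spatial and frequency degrees of freedom that actually carry the parity information in the physical scheme , so it is a logical abstraction of the decision rule rather than a model of the optics .

import numpy as np

Hd = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
X, I2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)
bell = {
    "Phi+": np.array([1, 0, 0, 1]) / np.sqrt(2),
    "Phi-": np.array([1, 0, 0, -1]) / np.sqrt(2),
    "Psi+": np.array([0, 1, 1, 0]) / np.sqrt(2),
    "Psi-": np.array([0, 1, -1, 0]) / np.sqrt(2),
}

def parity(state):
    # 'even' if the support is |HH>,|VV>; 'odd' if it is |HV>,|VH>
    return "even" if abs(state[0])**2 + abs(state[3])**2 > 0.5 else "odd"

def identify(state):
    first = parity(state)                 # first QND round
    if first == "odd":                    # bit flip present: flip one photon back
        state = np.kron(X, I2) @ state
    state = np.kron(Hd, Hd) @ state       # wave plate (Hadamard) on each photon
    second = parity(state)                # second QND round
    if first == "even":
        return "Phi+" if second == "even" else "Phi-"
    return "Psi+" if second == "even" else "Psi-"

for name, s in bell.items():
    print(name, "->", identify(s))

running this should label all four bell states correctly , which is the content of the complete nonlocal bell - state analysis claimed above .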
in their protocol , they can correct the bit - flip error completely as the spatial entanglement state is a maximally entangled perfect pure state and is not effected by the channel noise .their protocol does not lead to the case that each two - photon pair has bit - flip errors after their purification .following the same principle , we use the frequency degree of freedom for a phase - flip error correction .also , we can use another kind of degree of freedom to complete this task if it does not suffer from the channel noise .locc can not increase the entanglement of quantum systems .therefore , the process of entanglement purification is essentially the transformation of entanglement . in the previous works , the transformation is between the same kind of entanglement , that is , polarization entanglement .so they need to consume the less - entangled pairs largely. the previous work of simon and this protocol show that entanglement can be transformed between some different degrees of freedom .we let the initial state be a hyperentangled state , and it owns three kinds of degrees of freedom . during the purification step ,we consume the entanglement in frequency and spatial degrees of freedom to get a pure polarization entangled state .thus , the whole entanglement purification process does not need to consume the photon pairs but to consume other degrees of freedom of entanglement .we have discussed our deterministic entanglement purification scheme in the case where there are two degrees of freedom that are insensitive to channel noise .we use the spatial freedom and the frequency freedom of photons as an example to describe the principle of our scheme .of course , the main experimental requirement of this scheme is the phase stability if we use the spatial entanglement and the frequency entanglement to purify the polarization entanglement .this requirement may limit the distance of the quantum communication .however , this scheme can be adapted to the case of energy - time entanglement , which would allow the two parties inquantum communication to be a father apart .now , let us discuss the present entanglement purification scheme with a practical transmission based on the spatial entanglement and the frequency entanglement . in a practical transmission for long - distance quantum communication ,the relative phase between two different spatial modes is sensitive to path - length instabilities , which may be caused by the fiber length dispersion , or atmospheric fluctuation in a free - space transmission . in this way ,not only might part of the polarization of the hyperentangled state become a mixed state shown in eq.([polarizationentanglement ] ) , the entanglement in spatial mode may become after transmission . the relative phase between the different spatial modes is denoted by . here is the wave vector of the photons and is the path - length dispersion between the two spatial modes with .that is to say , the spatial entanglement will pick up a phase shift .approximatively , the frequency entanglement has similar features to the spatial entanglement .that is , it may become after transmission . here is the phase dispersion coming from the different frequencies . 
after a practical transmission ,the initial state may become where and with the first step for the purification of bit errors ( the same as the case where the entanglements in the spatial and the frequency degrees of freedom do not suffer from the phase fluctuation , as shown in fig .1 ) , if both alice and bob get the phase shift on their coherent beams , the photon pair is in the state with the probability of and in the state with the probability of , and they will emit from the spatial modes . if both alice and bob get the phase shift , the photon pair will emit from the spatial modes and will be in the state with the probability of and in the state with the probability of . when alice gets the phase shift and bob gets , the photon pair will emit from and will be in the state with the probability of and in the state with the probability of .when alice gets the phase shift and bob gets , the photon pair will emit from and will be in the state with the probability of and in the state with the probability of . with some unitary operations , alice and bob can make the state of their photon pair in the polarization degree of freedom be here also , the state can be rewritten under the basis as |\phi^{+}\rangle_{ab}\langle\phi^{+}|\nonumber\\ & + & \frac{1}{2}[1-(a+c - b - d)\text{cos}\delta\phi_s]|\phi^{-}\rangle_{ab}\langle\phi^{-}|.\nonumber\\\end{aligned}\ ] ] that is , all the bit - flip errors in the photon pair are corrected completely and there are only phase - flip errors in the quantum system . after the purification for bit - flip errors , alice and bob can transfer phase - flip errors into bit - flip errors with unitary operations again .that is , the state of the photon pair becomes |\phi^{+}\rangle_{ab}\langle\phi^{+}|\nonumber\\ & + & \frac{1}{2}[1-(a+c - b - d)\text{cos}\delta\phi_s]|\psi^{+}\rangle_{ab}\langle\psi^{+}|.\nonumber\\\end{aligned}\ ] ] with the setup shown in fig .2 , if both alice and bob get the phase shift , the photon pair will emit from and will be in the state . if both alice and bob get the phase shift , the photon pair will emit from and will be in the state latexmath:[\frac{1}{\sqrt{2}}(|h\omega_{1}\rangle_{a}|v\omega_{2}\rangle_{b}+e ^{i\delta \phi_{f } } phase shift and bob gets , the photon pair will emit from and will be in the state with some unitary operations . as the case discussed in sec.ii , alice and bob can correct the bit - flip errors in their photon pair completely . in the step for purification of phase - flip errors, the two different frequency modes will introduce a relative phase shift in the bell state . in theory , does not change if the channel lengths do not fluctuate with time .that is , alice can get approximatively the maximally entangled state . with a phase compensation , alice and bob will get the standard bell state discussion and summary ---------------------- there are two approximative methods used in our scheme in a practical transmission .one is the assumption that the phase dispersion in frequency degree of freedom is independent of that in spatial modes .the other is the invariability of with time .certainly , the case in a practical experiment is more complicated than that with these two assumptions . for the state each term will pick up a relative phase in a practical transmission .that is , the state will become .\end{aligned}\ ] ] here is the velocity of photons in quantum channel . 
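the displayed density matrix after the bit - flip ( spatial ) step involves the single combination (a + c - b - d) cos(delta phi_s) , with the elided first coefficient fixed by normalization . a short numerical reading of that formula ( the mixture weights below are arbitrary illustrative numbers ) shows how the bit - flip components are absorbed into the wanted state while the residual phase - flip weight is controlled by the spatial phase dispersion .

import numpy as np

def after_spatial_step(a, b, c, d, dphi_s):
    # weights of |Phi+> and |Phi-> after the bit-flip (spatial) step, following the
    # displayed equation: all bit-flip components are removed, and the residual
    # phase-flip weight is set by (a + c - b - d) * cos(dphi_s); the first coefficient
    # is reconstructed from normalization.
    k = (a + c - b - d) * np.cos(dphi_s)
    return 0.5 * (1 + k), 0.5 * (1 - k)

# illustrative mixture weights (must sum to 1)
a, b, c, d = 0.7, 0.15, 0.1, 0.05
for dphi_s in [0.0, 0.3, 1.0, np.pi / 2]:
    w_plus, w_minus = after_spatial_step(a, b, c, d, dphi_s)
    print(f"dphi_s = {dphi_s:4.2f}:  weight(Phi+) = {w_plus:.3f},  phase-flip weight = {w_minus:.3f}")

for delta phi_s = 0 the phi+ weight becomes a + c , i.e. the bit - flip - only component is recycled into the wanted state , while for delta phi_s = pi/2 the phase information is washed out and the two weights degenerate to 1/2 .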
, , , and are the channel lengths in the spatial modes , , , and , respectively .when can be rewritten as }|\omega_{2}\omega_{1}\rangle\}\nonumber\\ & & \times \{|a_{1}b_{1}\rangle + e^{\frac{i}{v}[\omega_{1}(l_{a_2}-l_{a_1 } ) + \omega_{2}(l_{b_2}-l_{b_1 } ) ] } |a_{2}b_{2}\rangle \}).\nonumber\\\end{aligned}\ ] ] that is , when , the phase dispersion in frequency degree of freedom can be regarded as being independent of that in spatial modes .if fluctuates with time in a small region , alice and bob will get an ensemble in the state after a phase compensation , and they can , in this case , purify this ensemble for getting some high - fidelity entangled states with conventional epps . here , , and .different from the case with a fixed phase dispersion , alice and bob can only correct completely the bit - flip errors in the photon pair in this case .on the one hand , they will remove the phase - flip errors in the polarization degree of freedom . on the other hand, they will introduce the phase - flip errors in the frequency degree of freedom .the latter comes from the phase dispersion between the two frequencies of photons .if the latter is smaller than the former , this scheme can be used to depress the ratio of phase - flip errors .if fluctuates acutely with time , and the ensemble maybe become a completely mixed state and the two parties can not distill maximally entangled states , the same as the conventional epps in the case that the initial fidelity is smaller than .in fact , a quantum channel fluctuating with time acutely is unsuitable for entanglement purification as the bell state required is mixed with other bell states uniformly . as the spatial and frequency degrees of freedom of photons are more stable than polarization , the conventional epps will not work if the present scheme does not work . in the process of describing the principle of our entanglement purification scheme, we exploit the cross - kerr nonlinearity to construct the qnds .in fact , we should acknowledge that , on the one hand , a clean cross - kerr nonlinearity is quite a controversial assumption with current technology .as pointed out in refs. , the single - photon kerr nonlinearity may do not help quantum computation , and a large phase shift via a `` giant kerr effect '' with single - photon wave packets is impossible . on the other hand , here a cross - kerr nonlinearityis only used to make a parity check for two photons and a strong kerr nonlinearity is not required . meanwhile, other elements can also be used to construct qnds for this scheme . in summary , we have presented a deterministic entanglement purification scheme for purifying an arbitrary mixed state in polarization with present technology .the biggest advantage of this scheme is that it works in a deterministic way in principle .that is , two parties can obtain a maximally entangled polarization state from each hyperentangled state , which will improve the efficiency of long - distance quantum communication exponentially .this protocol can also be used to do the complete nonlocal bell - state analysis , which reveals that this deterministic entanglement purification scheme is equal to a complete nonlocal bell - state analysis . 
in a practical transmission , this scheme can be used to correct all bit - flip errors approximately and depress the phase - flip errors in the polarization degree of freedom , which will make it more efficient than conventional epps . we believe that this scheme might be very useful in the realization of long - distance quantum communication in the future as entanglement purification is a very important element in a quantum repeater and a quantum network . this work is supported by the national natural science foundation of china under grant no . 10974020 , the foundation for the author of national excellent doctoral dissertation of p. r. china under grant no . 200723 , and the beijing natural science foundation under grant no .
|
entanglement purification is a very important element for long - distance quantum communication . different from all the existing entanglement purification protocols ( epps ) in which two parties can only obtain some quantum systems in a mixed entangled state with a higher fidelity probabilistically by consuming quantum resources exponentially , here we present a deterministic epp with hyperentanglement . using this protocol , the two parties can , in principle , obtain deterministically maximally entangled pure states in polarization without destroying any less - entangled photon pair , which will improve the efficiency of long - distance quantum communication exponentially . meanwhile , it will be shown that this epp can be used to complete nonlocal bell - state analysis perfectly . we also discuss this epp in a practical transmission .
|
projected emittances are quantities which are used to characterize transverse and longitudinal beam dimensions in the laboratory coordinate system and are invariants under linear uncoupled ( with respect to the laboratory coordinate system ) symplectic transport .eigenemittances are quantities which give beam dimensions in the coordinate frame in which the beam matrix is uncoupled between degrees of freedom and are invariants under arbitrary ( possibly coupled ) linear symplectic transformations .if the beam matrix is uncoupled already in the laboratory frame , then the set of projected emittances coincides with the set of eigenemittances , and if the beam matrix has correlations between different degrees of freedom , then these two sets are different . this fact , though looking simple , has interesting applications in accelerator physics and gives the theoretical basis for the round - to - flat transformation of angular momentum dominated beams invented by derbenev . in his schemethe beam with equal transverse projected emittances ( round beam ) but with nonequal eigenemittances is first produced in an axial magnetic field .then the correlations in the beam matrix are removed by a downstream set of skew quadrupoles and projected emittances become equal to the eigenemittances , which means that the beam transverse dimensions become different from each other .this work and further development of the advanced emittance manipulation techniques ( see , for example and references therein ) naturally raise the following question : what are the relations between projected emittances and eigenemittances ?as concerning already known results , in general situation they are limited to the so - called classical uncertainty principle , which states that none of projected emittances can be smaller than the minimal eigenemittance ( see , for example , ) . besides that , in the specific two degrees of freedom case , a number of useful results can be found in . the purpose of this article is to give the necessary and sufficient conditions which two sets of positive real numbers must satisfy in two and three degrees of freedom cases in order to be realizable as eigenemittances and projected emittances of a beam matrix .let us consider a collection of points in -dimensional phase space ( a particle beam ) and let , for each particle , be a vector of canonical coordinates and momenta . then , as usual , the beam ( covariance ) matrix is defined as where the brackets denote an average over a distribution of the particles in the beam . by definition ,the beam matrix is symmetric positive semidefinite and in the following we will restrict our considerations to the situation when this matrix is nondegenerated and therefore positive definite . for simplification of notations and without loss of generality, we will also assume that the beam has vanishing first - order moments , i.e. .let be the independent variable and let be the nondegenerated matrix which propagates particle coordinates from the state to the state , i.e let then from ( [ intr_2 ] ) and ( [ intr_4 ] ) it follows that the matrix evolves between these two states according to the congruence let us write the matrix in block - matrix form where the entries are matrices . 
because is symmetric , the blocks satisfy the relations for all .one says that the beam matrix is uncoupled if all its blocks with are equal to zero , and one says that the -th degree of freedom in the beam matrix is decoupled from the others if for all .if , similar to the matrix , we will partition the matrix into submatrices , then one can rewrite the transport equation ( [ intr_5 ] ) in the form of a system involving only submatrices of the matrices and in analogy with the matrix , one says that the transport matrix is uncoupled if all its blocks with are equal to zero , and one says that the -th degree of freedom in the transport matrix is decoupled from the others if for all . in the followingwe will assume that the beam transport matrix is symplectic , which is equivalent to say that it satisfies the relations where is the symplectic unit matrix .and of a beam matrix with fixed eigenemittances and .if , then the shaded half - strip turns into a ray ( half - line).,width=283 ] using partitioning into submatrices the two ( equivalent ) conditions for the matrix to be symplectic ( [ intr_8 ] ) can be rewritten in the form of the following set of equations : where and is kronecker s delta . because for an arbitrary matrix the equations ( [ intr_9 ] ) give us the following important identities which are valid for all .projected emittances are the rms phase space areas covered by projections of the particle beam onto each coordinate plane let us assume that in the matrix the -th degree of freedom is decoupled from the others .then from equations ( [ intr_18 ] ) one obtains that and because due to ( [ intr_11 ] ) the submatrix has unit determinant , we see that the projected emittance is conserved during the beam transport independently if the -th degree of freedom in the matrix is decoupled from the others or not .and ) shows all possible values of eigenemittances and of a beam matrix with fixed projected emittances and .if , then two shaded triangles merge into one.,width=283 ] using symplecticity of the transport matrix the congruence ( [ intr_5 ] ) can be transformed into the following equivalent form from this form of the equation ( [ intr_5 ] ) we see that the eigenvalues of the matrix are invariants , because ( [ vectz_7 ] ) is a similarity transformation .the matrix is nondegenerated and is similar to the skew symmetric matrix which means that its spectrum is of the form where all and is the imaginary unit .the quantities are called eigenemittances and generalize the property of the projected emittances to be invariants of uncoupled beam transport to the fully coupled case .the other approach to the concept of eigenemittances is the way pointed out by williamson s theorem ( see , for example , references in ) .this theorem tells us that one can diagonalize any positive definite symmetric matrix by congruence using a symplectic matrix and that the diagonal matrix has the very simple form where the diagonal elements are the moduli of the eigenvalues of the matrix .the matrix in ( [ a2_1 ] ) is not unique , but the diagonal entries of the williamson s normal form ( eigenemittances ) are unique up to a reordering .[ fig3 ] it is clear that not only eigenemittances themselves , but also an arbitrary function of them is an invariant . in particular , in the following we will make use of invariants / 2 = \epsilon_1^{2 m } + \ldots + \epsilon_n^{2m}. 
\label{dd_2}\end{aligned}\ ] ]in this section we summarize what it is possible to say about beam matrix and its emittances in the arbitrary degrees of freedom case . note that not all presented relations are new .for example , the inequality ( [ prop2_1 ] ) is the well known classical uncertainty principle .let be a beam matrix and let positive real numbers and be its eigenemittances and projected emittances , respectively .then , the following statements are equivalent : * the beam matrix is uncoupled . *the set of projected emittances coincides with the set of eigenemittances . *the product of projected emittances is equal to the product of eigenemittances + let us denote by and the minimums of the quantities and respectively .then , the following statements hold : the two degrees of freedom case the eigenemittances can be calculated according to the explicit formula and the exact relations between them and projected emittances are given by the following proposition : the positive real numbers , and , can be realized as eigenemittances and projected emittances of a beam matrix if and only if the following two inequalities hold : the geometrical interpretation of the inequalities ( [ prop_3 ] ) can be seen in fig.1 and fig.2 .in the three degrees of freedom case the eigenemittances can be found as positive roots of the bicubic equation the positive real numbers , , and can be realized as projected emittances and eigenemittances of a beam matrix if and only if the following inequalities hold :
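the inequalities referred to above, like most displayed equations, did not survive extraction in this copy and are not reproduced here. as a numerical companion to the definitions in this section, the following sketch (with illustrative numbers only) computes the projected emittances from the diagonal 2x2 blocks of a beam matrix, the eigenemittances from the moduli of the eigenvalues of the matrix sigma j, and, for the two degrees of freedom case, cross-checks them against a closed form written in terms of the invariants tr[(sigma j)^2] and det(sigma). the final prints check the classical uncertainty principle quoted above and the product inequality eps1*eps2 <= E1*E2, which follows from fischer's inequality for positive definite matrices.

```python
import numpy as np

J4 = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))  # ordering (q1, p1, q2, p2)

def projected_emittances(sigma):
    """sqrt(det) of each diagonal 2x2 block of the beam matrix."""
    return np.array([np.sqrt(np.linalg.det(sigma[:2, :2])),
                     np.sqrt(np.linalg.det(sigma[2:, 2:]))])

def eigen_emittances(sigma):
    """Moduli of the eigenvalues +/- i*eps_k of sigma @ J (each value kept once)."""
    return np.sort(np.abs(np.linalg.eigvals(sigma @ J4)))[::2]

def eigen_emittances_closed_form(sigma):
    """Two-degrees-of-freedom eigenemittances from the lowest-order invariants.

    Uses eps1^2 + eps2^2 = -tr[(sigma J)^2] / 2 and eps1^2 * eps2^2 = det(sigma),
    so the squared eigenemittances are the roots of x^2 - s x + p = 0.  This is
    one standard equivalent of the explicit two-dof result referred to in the text.
    """
    a = sigma @ J4
    s = -0.5 * np.trace(a @ a)
    p = np.linalg.det(sigma)
    disc = np.sqrt(max(s * s - 4.0 * p, 0.0))
    return np.sqrt(0.5 * (s - disc)), np.sqrt(0.5 * (s + disc))

# a coupled, positive-definite example beam matrix (numbers are illustrative)
sigma = np.array([[2.0, 0.1, 0.3, 0.0],
                  [0.1, 2.0, 0.0, 0.2],
                  [0.3, 0.0, 5.0, 0.4],
                  [0.0, 0.2, 0.4, 5.0]])

E = projected_emittances(sigma)
eps = eigen_emittances(sigma)
print(E, eps, eigen_emittances_closed_form(sigma))
# with coupling present the two sets differ, while
print(np.prod(eps) <= np.prod(E))   # product inequality (fischer)
print(E.min() >= eps.min())         # classical uncertainty principle
```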
|
we give necessary and sufficient conditions that two sets of positive real numbers must satisfy in order to be realizable as eigenemittances and projected emittances of a beam matrix . the information provided by these conditions sets limits on what one can achieve when designing a beam line to perform advanced emittance manipulations .
|
respiration is a fundamental physiological activity and is associated with several muscular , neural and chemical processes within the body of living organisms .given the fact that respiratory diseases such as chronic obstructive pulmonary disease , asthma , tuberculosis , sleep apnea and respiratory tract infections account for about 18% of human deaths worldwide , assessment of multiple respiratory parameters is of major importance for diagnosis and monitoring .accordingly , respiratory parameters such as respiration rate ( rr ) , respiration pattern ( rp ) and respiratory flow - volume are routinely measured in clinical and primary healthcare settings .rr refers to the number of inhalation - exhalation cycles ( breaths ) observed per unit time , usually quantified as breaths per minute ( bpm ) .rp refers to a temporal waveform signifying multiple phases of the respiratory function such as intervals and peaks of inhalation and exhalation , relative amplitudes of different breath cycles and cycle frequency ( instantaneous rr ) .respiratory flow - volume measures the amount of air that is inhaled / exhaled in every breath .a simple application of rr measurement is in assessing whether a human is breathing or not .apart from this , deviation from the permissible rr range ( usually 6 - 35 breaths per minute in healthy adults ) signifies pulmonary and cardiac abnormalities .for example , abnormally high rr is symptomatic of diseases like pneumonia in children .further , estimation of rp has several applications such as detection of sleep apnea , gating signal generation for medical imaging and psychological state assessment .sleep apnea is characterized by disrupted breathing patterns ( cessation , shallowing , flow - blockage ) during sleep which can be detected if a reliable estimate of rp is available .rp is used in respiration gated image acquisition , where radiographic images of human anatomical regions are captured synchronously with certain significant points of the rp ( for instance , an image is acquired at every inspiration peak ) to facilitate accurate image registration and minimize exposure to harmful x - rays .a similar technique is used in therapeutic energy delivery methods such as lithotripsy where shock waves are administered to posterior lower back region at certain temporal triggers of the rp .further , different rps indicate different sympathetic and parasympathetic responses leading to potential analysis of human emotions such as anger and stress . given their aforementioned significance , accurate estimation of rp and rr has been considered an important problem in the biomedical engineering community for decades .several accurate and robust techniques such as spirometry , impedance pneumography and plethysmography can measure rr and some can also estimate rp . however , they employ contact - based leads , straps and probes which may not be optimal for use in situations such as neonatal icu , home health monitoring and gated image acquisition .this is due to several reasons such as sensitive skin , discomfort or irritation and interference of leads with the radiographic images acquired .owing to such needs , a recent trend in non - contact respiratory monitoring has emerged . 
in the following section , a brief review of non - contact methods for respiratory monitoring is provided .existing methods for non - contact rp estimation fall into one of the three categories - ( i ) estimation through indirect nasal air flow measurement , ( ii ) estimation by imaging volumetric changes in blood using remote photoplethysmography , and ( iii ) estimation by measurement of motion induced due to respiration . in the first category, the idea is to indirectly measure the amount of air inhaled and exhaled during each cycle using different modalities .one technique is phonospirometry , where the respiratory parameters are estimated from measurements of tracheal breath sounds captured using acoustic microphones placed near the trachea . based on the observation that the air exhaled has a higher temperature than the typical background of indoor environments , there are attempts to measure breathing function using highly sensitive infrared imaging .these two methods demand sensitive microphones and thermal imaging systems as additional hardware .also , it has been noted that subtle breathing is hard to measure using phonospirometry .the second category of algorithms are based on the observation that respiration information rides over the photoplethesmogram ( ppg ) signal as an amplitude modulation component .a gamut of recent works concentrate on camera - based ppg estimation .the basic idea in all these is to capture the subtle changes in skin color occurring from pulsatile changes in arterial blood volume in human body tissues .it is well recognized that these methods ( often called remote ppg or rppg ) are highly sensitive to subject motion , skin color and ambient light .a lot of effort has been put in improving the robustness of rppg against these artifacts and significant progress has been made using several signal processing and statistical modeling techniques including blind source separation , alternative reflectance models , spatial pruning , temporal filtering and autoregressive modeling . these methods albeit mature can only provide an estimate of rr but can not estimate rp .further , in some cases , they require a careful selection of a region of interest ( often facial region ) for processing .the third category of methods rely on measuring the motion induced in different body parts due to respiration .one proposed method is to use an ultrasonic proximity sensor ( typically mounted on a stand placed in front of the subject ) to measure the chest - wall motion induced by respiration .techniques based on ( a ) laser diodes measuring the distance between the chest wall and the sensor and ( b ) doppler radar system measuring the doppler shift in the transmitted waves induced by respiratory chest wall motion are also proposed .these methods demand dedicated sensors and in some cases have been reported to depend on the texture of the cloth on the subject . 
some methods employ depth sensing cameras ( such as kinect ) to directly measure the variations in the distance between a fixed surface ( such as wall ) and the chest - wall .there have been few attempts in estimating the rp using consumer grade 2d cameras : an attempt has been made by shao _ , where the upward and downward motion in the shoulders due to the respiration is measured using differential signal processing , which is highly sensitive to the selection of region of interest ( roi ) comprising the shoulder region .very recently , use of haar - like features derived from optical flow vectors computed on the chest region is proposed to estimate rr . proposes an automatic roi selection method for rp estimation based on the observation that the respiration - induced chest - wall motion is uncorrelated from the remaining sources .the idea is to extract the dense optical flow vectors in the entire scene followed by a robust feature representation exploiting the intrinsic properties of respiration .these features are then factorized to get the respiration signal .one of our recent techniques also falls into this category .these methods are shown to be accurate and robust , however , they require computation of optical flow field for multiple frames which is known to be computationally expensive . in this paper , we propose a method to estimate the respiration pattern and rate using a consumer grade 2d camera .the method is computationally inexpensive and does not critically depend on the texture of the cloth , angle of view of the camera and selection of roi .suppose a consumer grade camera is placed in front of a steady human subject such that its field - of - view comprises the abdominal - thoracic region of the subject .assume that the relative position of the camera with respect to the subject does not change and also that the luminance of the background lighting is fairly constant .under such conditions , if a subject s abdominal - thoracic region is imaged using a video during breathing , the changes in each pixel value measured will be a function of the motion induced by respiration and the surface reflectance characteristics of the region imaged .since each pixel response is distinct , the core problem of rp estimation can be posed as the following : _ how to process individual pixel responses to obtain the respiratory pattern ? _ this problem is solved by modeling every pixel as the output of a linear time invariant ( lti ) channel of unknown system response driven by a hypothetical generating respiration signal that is to be estimated . the problem of estimation of rp is cast as the following estimation problem : estimate the input signal , given the outputs of several independent noise - corrupted lti channels with unknown system responses that are driven by the same generating input signal .this is referred to as the blind deconvolution problem of the single - input multiple - output ( simo ) systems in the signal processing community which is often solved through an assumed parametric form for the input signal and/or the system responses followed by error minimization techniques defined on different cost functions . 
however , in this paper , we propose a solution for blind deconvolution of periodic signals with a certain class of system characteristics where we neither assume any form for the transfer functions of the individual systems nor rely on error minimization .as mentioned in the previous section , each pixel in the scene is modeled as response of a bibo stable , minimum phase lti measurement channel with unknown dynamics .each lti channel is assumed to be corrupted by an uncorrelated additive noise with an unknown distribution .no additional inter - relationship assumptions are required to be made based on geographic proximity between channels although in reality stronger correlation is expected between spatially proximal pixels .note that the spectral characteristics of the noise is scene specific and hence no distributional assumption is made .+ we term the periodic physical movements of the chest region caused by flow of air into and out of the respiratory system as the generating signal , a correlate of which ( rp ) we wish to estimate using a video stream from a 2d camera consisting of pixels in each frame . let the generating signal be denoted by .let the recorded pixel intensity at pixel , at a time be and the transfer function of the lti channel associated with that pixel be . the noise process associated with that channel shall be denoted by , with zero mean .mathematically , here denotes the convolution operator .let and denote the magnitude and phase response of the lti channel .we model the ensemble of over the variable as a random process of the variable .sampling at each frequency yields an iid random variable indexed by the variable .also ) is assumed to be sampled from a uniform distribution between the entire video now becomes a single input multiple output ( simo ) system with the outputs of an ensemble of several lti channels being driven by the same signal as depicted in fig .[ fig : simo ] . . also , every pixel has its own additive noise source .[ fig : simo],width=336 ] under this model , the mathematical problem of interest is : given , and that are unknown , obtain an estimate of , denoted by which is equal to up to an amplitude scaling factor .that is , estimate where is an arbitrary constant .this is intractable in general since no information regarding the transfer functions of the lti systems is available .however , we show that a recovery of is possible if certain assumptions are made about the characteristics of . specifically ,if is periodic , we show in the subsequent sections that it is possible to recover . to start with, we develop the theory for the case of a pure tone ( being a single frequency sinusoid ) and further extend it to the case of a general periodic signal .let . from the lti system theory , the output response of each lti channel ( denoted by ) will be of the following form : where = and which are both unique and unknown for each lti channel .now , from eq .[ eq : model ] , .the following lemma demonstrates the existence of an estimator for . if is a single frequency sinusoid , the ensemble average of lti output responses taken over a membership set , defined as converges to a scaled version of the generating signal .mathematically , here for any set , operator denotes the cardinality of set . 
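before working through the proof, lemma 1 can be checked directly by simulation. in the sketch below each hypothetical channel output is generated in the form just described, a channel gain and a channel phase lag applied to a common sinusoid plus additive noise, and the membership set is simply taken to be all channels; the plain ensemble average is then already a scaled, phase-preserving copy of the generating signal. the gain range, the noise level and the phase support (uniform on [-pi/2, pi/2]) are illustrative assumptions of ours, since the actual interval is garbled in this copy.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, dur = 30.0, 20.0                     # frame rate (hz) and duration (s)
t = np.arange(0.0, dur, 1.0 / fs)
w = 2.0 * np.pi * 0.25                   # generating frequency: 0.25 hz (15 bpm)
g = np.sin(w * t)                        # generating signal

n_channels = 2000
gains = rng.uniform(0.2, 1.0, n_channels)                  # |H_i(w)|, iid across channels
phases = rng.uniform(-np.pi / 2, np.pi / 2, n_channels)    # assumed phase support
noise_sigma = 1.0

# each pixel / lti channel output: gain * sin(w t + phase) + noise
y = (gains[:, None] * np.sin(w * t[None, :] + phases[:, None])
     + noise_sigma * rng.normal(size=(n_channels, len(t))))

g_hat = y.mean(axis=0)                   # ensemble average over all channels

# the correlation with the generating signal approaches 1 as n_channels grows,
# even though the per-channel noise power here equals or exceeds the signal power
corr = np.corrcoef(g, g_hat)[0, 1]
print(f"correlation with generating signal: {corr:.3f}")
```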
* * from eq .[ eq : estimator ] , for very large , that is , the summation in eq .[ eq : lemma1sum ] may be replaced by an expectation operator ( ) at every time instant , by the law of large numbers .thus , \label{eq : lemma1exp}\ ] ] {\phi}[sin(wt+\phi_{i}+\theta)]+e_{n}[n_{i}]\label{eq : gcap}\ ] ] thus , where / { \pi} ] , then lies on the periphery of an ellipse in the solution space . here denotes a uniform distribution . 1 . * corollary 1 : * _ the major axis of the ellipse is the line corresponding to . _ * _ _ * 2 . *corollary 2 : * * _ _ * _ if with the solution space will lead to a filled ellipse with the distance of a point from center of the ellipse being proportional to the corresponding amplitude ._ it is easy to see that and similarly , for ] . by definition, ,if the phase delay offered by all the channels is assumed to be a constant at all [multiblock footnote omitted ] , will be independent of which can be represented as .as in lemma 1 , for very large , that is , the outer summation in the definition of can be replaced by an expectation operator by the law of large numbers .thus , \nonumber \\ & = & \sum_{k=1}^{n}g_{k}e_{f}[f_{i}(w_{k})]e_{\phi}[sin(w_{k}t+\phi_{i}+\theta)]+e_{n}[n_{i}]\nonumber \\ & = & \sum_{k=1}^{n}g_{k}e_{f}[f_{i}(w_{k})]\frac{sin(w_{k}t+\theta)}{\pi}\label{eq : lemma3conv}\end{aligned}\ ] ] note that in the above expressions as in the case with lemma 1 =\frac{sin(w_{k}t+\theta)}{\pi}\ ] ] over set . from eq .[ eq : lemma3conv ] , for if } = c = constant ] . in the subsequent section ,we show that projection of the signals on the aforementioned quadratic basis aids to select points satisfying both the conditions .let the output response of an random lti system described in sec .ii b , when excited by a periodic signal described by eq .[ eq : gg ] where , be represented by , given by eq .[ eq : fi ] . the ensemble of the quadratic fit coefficients for with ] .also , depending upon the values of the product terms , they converge to a different number , leading to a filled ellipse .the following are some of the major implications of lemma 4 which are to be noted . 1 .every point on the filled ellipse corresponds to an lti channel with a certain magnitude and phase response .the major axis corresponds to the that lti channel with a phase response .lti channels with all other phase shifts ( ) are symmetrically and uniformly distributed around the major axis .a set of lti channels with the same magnitude response but different phase response correspond to points lying on an elliptical ring inside the disk .this is evident from eq .[ eq : astarg ] and [ eq : bstarg ] , where , for the lti channels with same magnitude response , is independent of .further , since is fixed , for such a set defines an ellipse with a fixed length major and minor axis .4 . if the generating signal is assumed to be of a low - pass nature , that is , , the points closer to the periphery of the disk , correspond to the lti channels that emphasize fundamental frequency the most , over the harmonics .this is because , in this case , the summation terms in eq .[ eq : astarg ] and [ eq : bstarg ] are monotonically decreasing series with alternating sign .the points that are farther away from the periphery of the disk , correspond to the lti channels that attenuate the fundamental frequency while emphasizing the higher harmonics .[ fig : lemma4 ] demonstrates lemma 4 and some of its implications . 
) and the elliptical disk generated from quadratic fit coefficients corresponding to the outputs of several random lti channels .( c ) depicts the output of a random lti channel with a given magnitude response , when excited by ( d ) is the disk in ( b ) , with the elliptical ring corresponding to set of lti channels with the magnitude response used to obtain the signal ( c ) , marked by green dots .[ fig : lemma4],width=326 ] the model proposed in sec .ii a , involves an additive noise component associated with each pixel ( lti channel ) that has not been considered in all the analysis so far . in this section, the impact of additive noise on the coefficients obtained from quadratic polynomial fitting is discussed .for a periodic excitation signal , from sec .ii a , we have the response of each individual lti system , with from sec .ii.c.2 , we know that the quadratic coefficients for the signal are given by , and because we are working with zero - mean signals , . since the inner products are linear , let and represent the solution for the no - noise case .given the aforementioned definitions , the objective is to relate , to .we have , from cauchy - shwartz inequality , where and , by definition .thus from eq .[ eq : acap ] , [ eq : bcap ] , [ eq : cs1 ] and [ eq : cs2 ] , from eq .[ eq : apert ] and [ eq : bpert ] , it can be inferred that with the addition of noise , gets perturbed within a cloud bounded by .since there is no natural comparative bound of the relative magnitudes of noise and coefficients , nothing can be inferred regarding the relation between the position of a given point in the coefficient space and the quality of the signal .however , useful insights can be obtained if all the signals are normalized ( forced to be unit norm ) prior to quadratic fitting .let the signal - to - noise - ratio ( snr ) corresponding to lti channel denoted by , be defined as with these notations , the following lemma relates , and when random noise is added to to yield , the quadratic coefficients corresponding to normalized get scaled by a factor less than unity and perturbed within a cloud whose area is inversely proportional to to yield the quadratic coefficients corresponding to the noisy signal .let be forced to have unit norm before quadratic approximation to yield . by definition , because .note that let denote the quadratic coefficients for . from lemma 2 , we have from eq .[ eq : acap1 ] , [ eq : bcap1 ] , [ eq : cs1 ] and [ eq : cs2 ] , using the definition of in eq .[ eq : ahat1 ] and [ eq : bhat1 ] , from eq .[ eq : ahat2 ] and [ eq : bhat2 ] , since the factor scaling is less than unity and the area of the cloud of perturbation is .one of the primary implications of lemma 5 is that for a given amount of noise power , the signals having a higher will have a higher snr . 
from lemma 4 , it is known that , for a low - pass signal , the lti channels that emphasize the fundamental frequency over the others will have a higher and hence a higher snr .this implies that such lti channels ( mapping to points closer to the periphery of the elliptical disk defined in lemma 4 ) are likely to be perturbed the least and have a smaller cloud of perturbation.this fact is illustrated in fig .[ fig : noisecloud ] with an example .= \{0.8,0.1\} ] and phase shifts defined by ] and $ ] .these responses , , are added with a noise process to yield .a part of each of length corresponding to is projected on to the quadratic basis ) described in lemma 4 to obtain the elliptical disks of coefficients .we report three experiments as follows - ( 1 ) different amounts of noise are added to each that is estimated using the method described and the normalized cross - correlation between and the estimated signals are studied , ( 2 ) the extent of validity of as the goe measure is studied against the normalized cross - correlation measured between and the estimated signals and ( 3 ) sensitivity of the method to the choice of the basis frequency ( ) is studied by comparing the correlation and the fundamental frequency of the estimated signals ( with an improper choice of ) with .[ fig : exp1 ] depicts the cross - correlation between the estimated signals and different with snr ranging between -15 to 25 db . with snr ranging between -15 to 25 db.[fig : exp1],width=345 ]the threshold for radius of exclusion was determined by the norm goe .it is seen that for all the signals , as snr increases the cross - correlation generally increases and saturates around 1 db .however , at snrs lower than -2 db , a signal with lower number of harmonics achieves a certain cross - correlation before a signal that has higher number of harmonics .it is seen that for all signals the cross - correlation reaches 0.9 around -2 db implying that this method can recover the signal to a fairly good extent even when noise power is more than that of the signal . in the next experiment, we fix the snr at 0 db and study the properties of the estimated signal by varying the choice of basis - frequency ( ) between and where is the actual fundamental frequency of a given . 
fig .[ fig : exp2 ] ( a ) depicts the normalized - cross correlation between the estimated signal and as a function of .it is seen that good estimates are obtained only around and estimates degrade on either sides .this implies that the method is very sensitive to the choice of .however , the method can be easily tweaked to circumvent this problem as evident from the following discussion : fig .[ fig : exp2 ] ( b ) depicts the fundamental frequency of the estimated signals as a function of the same as in fig .it can be seen that the fundamental frequency of all the estimated signals ( taken to be the frequency at which the magnitude fourier spectrum peaks ) are exactly the same as that of the corresponding ( as inferred from table i ) .this implies that the peak of the magnitude spectrum of the estimated signals is totally insensitive to the choice of basis - frequency and the lower cross - correlation is due to the aggregation of wrong phase lags ) of the lti channels that are selected in .this is also supported from the theory because to get a good estimate of the magnitude response it is enough to satisfy criteria 1 and 2 listed in section ii.f despite violating criteria 3 and 4 .a wrong choice of still leads to an elliptical disk but with an improper orientation of with respect to the actual phase of , . in this casethe proposed method still picks up the points required for an accurate estimation of the magnitude response albeit distorting the shape of the estimated signal due to the selection of lti channels with improper phase lags .this suggests that a simple way to circumvent the sensitivity of the method to the choice of is to adopt a two - step procedure where the initial step is to derive the actual ( with any initial choice of ) and in the next step is to use the proper to estimate the morphology of .( b ) fundamental frequency of the estimated signal . both ( a ) and ( b ) are plotted as a function of of .[ fig : exp2],width=345,height=144 ] ( b ) fundamental frequency of the estimated signal .both ( a ) and ( b ) are plotted as a function of of .[ fig : exp2],width=345 ] in the proposed method , the optimal choice of the threshold used for selecting the radius of exclusion ( discussed in sec .ii.f ) is decided based on the goe metric .in the last experiment , we validate the proposed metric ( norm of the magnitude spectrum of the estimated signal ) by comparing it against the cross - correlation measure . fig . 
[fig : goe ] depicts the values of inverse of norm ( goe ) of the estimated signals and cross - correlation between the estimated signal and for three signals : sawtooth , square and triangle wave at snr as a function of threshold for radius of exclusion .it is to be noted that the value of threshold corresponding to unity represents the selection of all points in one half of the ellipse .norm as the goe metric for the selection of optimal choice of the threshold used for selecting the radius of exclusion .values of inverse of ( goe ) of the estimated signals and correlation between the estimated signal and for three signals - sawtooth wave ( a ) and square wave ( b ) and triangle wave ( c ) at snr as a function of threshold for radius of exclusion.[fig : goe],width=355 ] the proposed method selects that threshold corresponding to the highest goe which estimates a signal with a very high correlation with as seen from fig.[fig : goe ] .this indicates that the defined measure for goe can be used as a proxy to determine the threshold in the practical cases where the correlation measure can not be computed due to the unavailability of .the real - life dataset comprises respiration videos acquired from 31 healthy human subjects ( for which institutional approval and subject - consent were obtained ) ( 10 female and 21 male ) between ages of 21 - 37 ( mean : 28 ) .six controlled breathing experiments ( fig .[ fig : groud_truth_depic ] ) ( i ) normal breathing , ( ii ) deep breathing , ( iii ) fast breathing , ( iv ) normal - deep - normal breathing ( sudden change in breathing volume ) , ( v ) normal - fast - normal breathing ( sudden change in breathing frequency ) , ( vi ) episodes of breath hold , ranging between 13 - 150 ( mean : 45 ) seconds were performed by each subject .the subjects wore a wide variety of clothing with different textures or no upper body clothing ( two subjects ) .videos were simultaneously recorded from two cameras with resolution of 640x480 pixels at a speed of 30 frames per second , one from the ventral view ( vga ) and other from lateral view ( 2mp ) of the subject under normal indoor illumination , each placed at a distance of 3 ft from the subject .the subjects were asked to sit and breath in patterns described above resulting in a total of 2.5 hours of recordings with approximately 2000 respiratory cycles for each side . for validation , an impedance pneumograph ( ip ) device was connected through electrodes on the chest of the subject , which estimates the rp and rr by quantifying the changes in electrical conductivity of the chest due to respiratory air - flow .this device is routinely used in patient monitors and other applications in which it is considered a medical gold standard . given a subject video , the proposed algorithm is applied to estimate the rp and rr retrospectively . 
a typical frame ( as shown in fig .[ fig : depcition - of - an ] ) also consists of regions like background wall , that contain pixels that are unaffected by respiration .an image gradient operation is applied over a large rectangular window on two arbitrarily selected frames at the beginning of the video that are spaced apart by the minimum possible rr .subsequent frames are pruned to contain only those pixels with very high values of the gradient .this selects a smaller region of the frame typically comprising the chest - abdomen region of the subject .note this operation is done only once on a pair of frames at the beginning of the video .this is to reduce the unnecessary processing of static pixels even though the proposed algorithm does not demand the same .the initial estimate of basis - frequency and the residual measurement phase are obtained using the proxy signal .quadratic coefficients are obtained for every pixel time series to form the elliptical disk from which the optimum membership set and thus the rp are estimated .once is assembled by tagging pts ( which only involves computation of inner products ) , the algorithm can be executed in real - time since the estimator only computes a pixel average over .once the rp is obtained , rr is estimated from the peak in the fourier magnitude spectrum of the rp taken over a window ( typically between 10 and 15 seconds ). marked in red .[ fig : depcition - of - an],width=288 ] the goal of this study is to estimate the morphology of the respiration airflow signal ( rp ) and not the actual airflow .further , the ip device also does not directly provide the volumetric information albeit it has been shown to provide the actual airflow information with proper calibration .thus we use the pearson correlation coefficient between the normalized signal obtained from the ip representing the ground truth , gt , and the normalized estimated rp as a measure quantifying the closeness of two signals .this measure lying between -1 and 1 quantifies the closeness of two temporal signals with unity referring to the maximum agreement .rr measurements are validated through the linear regression between gt and the estimated rr values .further since rr is a frequency measurement that can be exactly obtained from both the signals , the exact agreement is quantified using the bland - altman plots .figure [ fig : correlation - and - agreement ] depicts the correlation and agreement of the estimated signals with : ( a ) and ( c ) show the degree of linear relationship between rr measurements using webcam and ip device . for the ventral video acquisition , in fig . [fig : correlation - and - agreement ] ( a ) , it is observed that the correlation coefficient ( ) is 0.94 with , which shows a strong positive correlation between the measurements . also, bland - altman plots in fig .[ fig : correlation - and - agreement](b ) shows that the rr measurement through webcam has an acceptable average agreement ( very low bias of 0.88 ) with the ground truth with of the measurements within 3bpm of the ground truth .the median of deviation between the estimated values and the gt values is zero and the measurements that are outside of the confidence interval ( ci , defined as bpm of the ground truth values ) are often higher than the actual rr .these are the cases where there are high - frequency repetitive and densely patterned textures . 
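the rate-extraction step described above (the peak of the fourier magnitude spectrum of the rp over a 10-15 second window) reduces to a band-limited peak search. a minimal sketch follows; restricting the search to the physiologically plausible band quoted in the introduction (6 to 35 breaths per minute), the window length and the use of a hann taper are choices of ours.

```python
import numpy as np

def respiration_rate_bpm(rp, fs, band=(6.0 / 60.0, 35.0 / 60.0)):
    """Respiration rate from the peak of the fourier magnitude spectrum of rp.

    rp   : 1-d respiration-pattern segment (e.g. a 10-15 s window)
    fs   : sampling rate in hz (camera frame rate)
    band : frequency band searched for the peak, in hz
    """
    rp = np.asarray(rp, dtype=float)
    rp = rp - rp.mean()                        # remove dc before the fft
    spec = np.abs(np.fft.rfft(rp * np.hanning(len(rp))))
    freqs = np.fft.rfftfreq(len(rp), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[in_band][np.argmax(spec[in_band])]
    return 60.0 * peak                         # breaths per minute

# example: a 15 s window sampled at 30 fps containing a 0.2 hz pattern
fs = 30.0
t = np.arange(0.0, 15.0, 1.0 / fs)
print(respiration_rate_bpm(np.sin(2.0 * np.pi * 0.2 * t), fs))   # -> 12.0 bpm
```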
for the case of lateral video acquisition , in fig .[ fig : correlation - and - agreement ] ( c ) and ( d ) , a correlation coefficient ( ) of 0.85 is observed with and of the measurements lie within 3 bpm of the ground truth .these numbers are lesser than those for the frontal view because the lateral view typically has much fewer members in . figure [ fig : histograms - of - signal ] depicts the histograms of the signal correlation measure between the estimated rp and gt for different cases .it is seen that the mode of the histogram for all cases is around 0.9 with a negative skew indicating that majority of the estimated signals agree well with the gt ., width=355 ] also , it is seen that the skewness of the histogram for experiments iv , v and vi is worse than that for experiments i , ii and iii as indicated in fig .[ fig : histograms - of - signal ] ( c and d ) .this is because the generating signals corresponding to experiments iv , v and vi have time - varying frequency components .however , note that for the case in which only spectral magnitudes vary with time but not frequency values , the theoretical results and performance of the proposed method remains unaltered .the aforementioned performance of the algorithm seems significant given that the experiments involve the following : ( i ) random textured clothing on subjects , ( ii ) camera of two different resolutions and positions , ( iii ) six different breathing patterns . in conclusion , it is observed that the proposed algorithm offers a good estimate of the rp ( and rr ) if the camera is placed in the ventral position with a clothing that has a texture with some region of similar patterns .[ [ section-2 ] ] 1 . 2 . 3in this paper , we proposed a generic blind deconvolution framework to extract periodic signals from videos .a video is modeled as an ensemble of lti measurement channels all driven by a single generating signal .no assumptions are made on the characteristics of the individual channels except for iid randomness .a simple ensemble averaging over a carefully selected membership set is proposed as an effective estimator which is shown to converge to the generating signal under minimally restrictive assumptions . a method for grouping the channels to obtain the optimal membership set based on the location of the coefficients of the quadratic fits of the lti channel responsesis described .this framework is applied on the problem of non - contact respiration pattern estimation using videos and it is shown to yield comparable results with a medical gold - standard device namely impedance pneumograph . our future work is aimed at extending this framework to ( i ) deal with signals having rapidly varying time - frequency components , ( ii ) estimate other relevant biomedical signals from video and ( iii ) deal with significant sources of motion other than the one caused by the desired source .we acknowledge the support provided by our colleagues dr .satish p rath , tejas bengali and himanshu j madhu pertaining to various aspects of the work . k. h. chon _et al . _ ,`` estimation of respiratory rate from photoplethysmogram data using time frequency spectral estimation , '' _ ieee transactions on biomedical engineering _ , vol . 56 , no . 8 , pp . 20542063 , 2009 .s. g. fleming and l. tarassenko , `` a comparison of signal processing techniques for the extraction of breathing rate from the photoplethysmogram , '' _ int j biol med sci _ , vol . 2 , no . 4 , pp .2326 , 2007 .m. 
mabrouk _ et al ._ , `` model of human breathing reflected signal received by pn - uwb radar , '' in _ engineering in medicine and biology society ( embc ) , 2014 36th annual international conference of the ieee_.1em plus 0.5em minus 0.4emieee , 2014 , pp . 45594562 .et al . _ ,`` noncontact respiratory measurement of volume change using depth camera , '' in _ engineering in medicine and biology society ( embc ) , 2012 annual international conference of the ieee_.1em plus 0.5em minus 0.4emieee , 2012 , pp . 23712374 .e. a. bernal _et al . _ ,`` non contact monitoring of respiratory function via depth sensing , '' in _ ieee - embs international conference on biomedical and health informatics ( bhi)_.1em plus 0.5em minus 0.4emieee , 2014 , pp .101104 . c. avishek _ et al ._ , `` real - time respiration rate measurement from thoracoabdominal movement with an inexpensive consumer grade camera , '' in _ engineering in medicine and biology society ( embc ) , 2016 38th annual international conference of the ieee _ , accepted for publication .l. zhang , a. cichocki , and s .-amari , `` multichannel blind deconvolution of nonminimum - phase systems using filter decomposition , '' _ ieee transactions on signal processing _ , vol .52 , no . 5 , pp . 14301442 , 2004 .e. moulines , p. duhamel , j .- f .cardoso , and s. mayrargue , `` subspace methods for the blind identification of multichannel fir filters , '' _ ieee transactions on signal processing _ , vol .43 , no . 2 ,pp . 516525 , 1995 .l. tong , g. xu , and t. kailath , `` blind identification and equalization based on second - order statistics : a time domain approach , '' _ ieee transactions on information theory _ , vol .40 , no . 2 ,pp . 340349 , 1994 .n. s. johnston , r. light , j. zhang , m. somekh , and m. pitter , `` 2d cmos image sensors for the rapid acquisition of modulated light and multi - parametric images , '' in _spie optics+ optoelectronics_.1em plus 0.5em minus 0.4eminternational society for optics and photonics , 2011 , pp .
|
non - contact estimation of respiratory pattern ( rp ) and respiration rate ( rr ) has multiple applications . existing methods for rp and rr measurement fall into one of the three categories - ( i ) estimation through nasal air flow measurement , ( ii ) estimation from video - based remote photoplethysmography , and ( iii ) estimation by measurement of motion induced by respiration using motion detectors . these methods , however , require specialized sensors , are computationally expensive and/or critically depend on selection of a region of interest ( roi ) for processing . in this paper a general framework is described for estimating a periodic signal driving noisy lti channels connected in parallel with unknown dynamics . the method is then applied to derive a computationally inexpensive method for estimating rp using 2d cameras that does not critically depend on roi . specifically , rp is estimated by imaging the changes in the reflected light caused by respiration - induced motion . each spatial location in the field of view of the camera is modeled as a noise - corrupted linear time - invariant ( lti ) measurement channel with unknown system dynamics , driven by a single generating respiratory signal . estimation of rp is cast as a blind deconvolution problem and is solved through a method comprising subspace projection and statistical aggregation . experiments are carried out on 31 healthy human subjects by generating multiple rps and comparing the proposed estimates with simultaneously acquired ground truth from an impedance pneumography device . the proposed estimator agrees well with the ground truth device in terms of correlation measures , despite variability in clothing pattern , angle of view and roi . non - contact bio - signal monitoring , respiration pattern estimation , blind deconvolution , respiration rate measurement , robust to roi , illumination and angle of view , ensemble aggregation .
|
the bragg reflection polarimeter ( brp ) was the student experiment on the nasa gravity and extreme magnetism small explorer ( gems ) mission .the experiment existed under the nasa education and public outreach program , and its primary purpose was to train students in the field of space exploration .the scientific goal was to examine the geometry of spacetime surrounding accreting black holes using x - ray polarization .a team of students at the university of iowa ( ui ) was tasked with the design , construction , testing , integration , and operation of a soft x - ray polarimeter under the guidance of ui faculty and nasa personnel .the brp instrument and the gems mission passed the preliminary design review at goddard space flight center ( gsfc ) .however , the mission was subsequently canceled for budgetary reasons . at the time of cancellation ,the team was developing flight hardware and preparing for the critical design review .this paper reports on the development of the multilayer reflector for this instrument , following an earlier conference proceedings publication on this topic . a conceptual diagram of the operation of the gems brp .the multilayer reflector is placed in the optical path of the main experiment .the proportional counter monitors the x - ray count rate that is modulated as a function of spacecraft roll angle , depending on the polarization of the incoming beam . ] the basic instrument concept is shown in fig .[ fig : brpconcept ] . glancing incidence wolter typei mirrors focus a cone of light toward the primary experiment ( xpi : x - ray polarimeter instrument ) .the multilayer reflector is placed in this optical path at a nominal incidence angle , reflecting a narrow energy band centered at about 0.5 kev to the multiwire proportional counter ( mwpc ) . due to the polarization dependence of reflection , the reflected intensity will vary based on the polarization of the incoming beam and the roll angle of the spacecraft . as the spacecraft rotates about the optical axis , the mwpc monitors the count rate to obtain a polarization measurement of an astrophysical source .three main components make up the brp : the multilayer reflector , the mwpc , and the electronics box .the reflector consists of a polyimide membrane substrate mounted on a metallic frame and coated with an ultra - short - period multilayer coating .the mwpc is loosely based on the design of the rosat pspc , and possesses discrete regions for x - ray detection and charged particle background rejection .the electronics box supplies high voltage to the mwpc , shapes and digitizes mwpc pulses , and communicates with the spacecraft .a common figure of merit for a polarimeter is the minimum detectable polarization ( mdp ) , defined as the 99% confidence limit that a polarization amplitude is not measured by chance . for a source rate , background rate , and observation time , the fractional mdp is given by where the modulation factor is the polarimeter response to a 100% polarized source . 
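the mdp expression itself did not survive extraction in this copy. the conventional 99%-confidence form used in the x-ray polarimetry literature (e.g. weisskopf et al.), written in terms of the modulation factor, the source and background count rates and the observation time, is sketched below with purely illustrative numbers.

```python
import numpy as np

def mdp99(mu, r_src, r_bkg, t_obs):
    """Fractional minimum detectable polarization at 99% confidence.

    mu    : modulation factor (response to a 100% polarized source)
    r_src : source count rate in the polarimeter (counts / s)
    r_bkg : background count rate (counts / s)
    t_obs : observation time (s)
    """
    return 4.29 / (mu * r_src) * np.sqrt((r_src + r_bkg) / t_obs)

# illustrative numbers only: a modulation factor near unity (bragg reflection
# close to 45 degrees), a faint reflected source rate, the rough background
# estimate quoted in the text, and an assumed exposure of 5e5 s
print(mdp99(mu=0.95, r_src=0.05, r_bkg=0.006, t_obs=5e5))
```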
in the x - ray band ,brewster s angle for 0% reflection of p - polarized light is , and so for a bragg polarimeter operating at incidence .brp mdp estimates were made using the energy spectrum of the crab nebula , the only x - ray source to have its polarization reliably measured ( 15.7% and 18.3% at 2.6 kev and 5.2 kev , respectively ) in the past .theoretical reflectance curves ( see [ sec : modeling ] ) for various multilayer designs were convolved with the crab energy spectrum to predict the source rate , and a rough estimate for the detector background rate ( 0.006 cts / sec ) was obtained using the background rates , scaled for the detector size , on the oso-8 wisconsin experiment . the crab observation time was fixed to that in the gems observing plan ( sec ) .the multilayer integrated reflectance at a fixed incidence of was used to quantify the reflector sensitivity requirement , which was set at 0.08 ev based on the reflectance curve of the multilayer design with the highest permissible predicted mdp ( 10% ) .the rms slope of the reflector surface , calculated at spatial wavelengths between 1 mm and the full aperture , was required to be arcmin to ensure the reflected photons pass through the 11.2 mm diameter detector window .a larger window with sufficient mechanical strength and soft x - ray transmission was not commercially available . in order to not reduce the xpi sensitivity, the reflector must have high transmission in the 210 kev band .transmission curves were calculated and convolved with the xpi effective area to predict the reduction in sensitivity due to the brp . in this calculation , the polyimide substrate thickness was fixed at 2 m and the number of bilayers was varied for each candidate multilayer material pair .a 70% transmission requirement was set at 2.7 kev using the calculated transmission curve that produced the maximum allowable reduction in xpi sensitivity .this energy is at the low end of the xpi energy band , where the transmission is expected to be the worst .also , it is straightforward to measure transmission at this energy using an x - ray tube and a rhodium target the full transmission curve can then be extrapolated from this measurement .a clear aperture requirement was set at 32 mm so that the metallic frame would not occult the light cone seen by the xpi window .a thin film substrate for the multilayer coating was necessary to satisfy the transmission requirement .approximately m thick polyimide samples were obtained from luxel corp . for surface quality and x - ray transmission measurements .the polyimide membranes were mounted on aluminum rings with a 28.6 mm inner diameter .the ultimate reflectance achieved by a multilayer structure is reduced from its theoretical maximum due to imperfections in the individual interfaces .sources of imperfection include surface roughness , interdiffusion of adjacent layers , and even the size of the atoms themselves .typically , substrate surface roughness propagates through the layers of a multilayer .a rule of thumb is to choose a substrate with rms roughness , where is the period or bilayer thickness of the multilayer .the bragg equation ( ) leads to a bilayer thickness of nm for a peak reflectance at kev .therefore , we sought a thin film substrate with roughness rms . 
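the numerical values in the bragg condition above were likewise lost in extraction. for first-order reflection, and neglecting the refraction corrections that shift the true peak slightly, the period follows from d = m*lambda / (2 sin theta) with lambda[nm] ~ 1.2398 / e[kev]; the sketch below evaluates this for 0.5 kev photons at 45 degrees incidence, giving a period of roughly 1.75 nm.

```python
import numpy as np

def bragg_period_nm(energy_kev, theta_deg, order=1):
    """Multilayer period from the bragg condition m*lambda = 2*d*sin(theta).

    Refraction corrections, which shift the true peak slightly, are ignored here.
    """
    wavelength_nm = 1.2398 / energy_kev          # hc in kev*nm
    return order * wavelength_nm / (2.0 * np.sin(np.radians(theta_deg)))

print(bragg_period_nm(0.5, 45.0))   # ~1.75 nm bilayer thickness at 0.5 kev, 45 deg
```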
one dimensional psds of the luxel polyimide sample , calculated using 1 afm measurements and averaged over 512 scans .the blue line shows the psd of the surface of the film exposed to vacuum ( top ) during substrate fabrication , whereas the green line shows the psd of the surface making contact with the si wafer ( bottom ) . ]the spatial frequency range of surface imperfections that degrade soft x - ray reflectance is approximately 1100 .atomic force microscopy ( afm ) is the only metrology technique able to access this frequency range . using the asylum research mfp-3d located at the ui central microscopy research facility ,profiles of both surfaces of a polyimide sample were mapped out using 1 scans with 512 by 512 points ( 1 scan per surface ) .one dimensional power spectral densities ( psds ) were calculated and are shown in fig .[ fig : luxelpsds ] .the bottom surface refers to the side of the film making contact with the si wafer during substrate fabrication .the roughnesses obtained by integrating the psds in fig . [ fig : luxelpsds ] over the 1100 range are 6.76 and 1.34 rms for the top and bottom surfaces , respectively .the roughness of the bottom surface closely matches the ultra smooth si wafer onto which it was deposited , which has a typical 1 rms roughness in the same spatial frequency range . ] ] variation of surface heights in spatial frequencies has little effect on reflectance throughput , and instead determines the quality of the reflected image .a reflector with significant slope error ( i.e. waviness ) will act to smear the point spread function of the image at the detector window ( 11.2 mm diameter ) , resulting in a lower encircled energy fraction .combined with the pointing wobble of the spacecraft , this effect will increase the uncertainties in a polarization measurement . 
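both of the psd-derived figures of merit used here and in the next paragraph, the band-limited rms roughness quoted just above (the "1100" range is presumably 1-100 inverse microns, the digits having been run together in extraction) and the rms slope error discussed next, are obtained by integrating the psd. a generic sketch follows; the synthetic psd is purely illustrative and is not the measured luxel data.

```python
import numpy as np

def band_rms(freq, psd, f_lo, f_hi, slope=False):
    """Band-limited rms height (or rms slope) from a one-dimensional psd.

    freq : spatial frequencies, ascending, in consistent units (e.g. 1/um)
    psd  : 1-d power spectral density of the surface height
    With slope=True the height psd is first converted to a slope psd via the
    standard relation psd_slope(f) = (2*pi*f)**2 * psd_height(f).
    """
    sel = (freq >= f_lo) & (freq <= f_hi)
    f, p = freq[sel], psd[sel]
    if slope:
        p = (2.0 * np.pi * f) ** 2 * p
    # trapezoidal integration: sigma^2 is the area under the psd in the band
    return np.sqrt(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(f)))

# synthetic 1/f^2 psd, for illustration only (not the measured data)
f = np.linspace(0.5, 200.0, 4000)                 # 1/um
psd = 1e-3 / f ** 2                               # nm^2 um
print(band_rms(f, psd, 1.0, 100.0))               # band-limited rms height
print(band_rms(f, psd, 1.0, 100.0, slope=True))   # rms slope over the same band
# (in the instrument the slope figure was evaluated on the low-frequency
#  confocal-microscope psd, over spatial periods of 1 mm up to the aperture)
```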
to limit this impact ,slope error is required to be arcmin in the spatial period range of 1 mm to the full aperture of the reflector .the clear aperture of the brp reflector for the flight instrument is required to be mm .for testing purposes , the polyimide substrates discussed in this manuscript have a diameter of 28.6 mm .using luxel supplied confocal microscope scans ( fig .[ fig : luxelscan ] ) of the polyimide substrate , psds were calculated over the spatial frequency range of ( fig .[ fig : confocalpsd ] ) .as discussed later in [ sec : ripples ] , ripples such as the one observed in the right side of the x scan in fig .[ fig : luxelscan ] are expected to nearly disappear under the environment of a low earth orbit .they would also be less pronounced if a material with a smaller coefficient of thermal expansion was used for the membrane support ring .therefore , we restricted the slope error calculation to ripple - free slices of the membrane surface as shown in fig .[ fig : luxelscan ] .the slope error was quantified by integrating the second moment of the resultant psds , and was 3.47 arcmin and 1.34 arcmin for the x and y scans , respectively .both of these numbers are well within the 5 arcmin requirement .the x - ray transmission of the substrate alone was tested at normal incidence prior to reflector development in order to ensure that the 70% transmission requirement for the multilayer - coated substrate could be met at 2.7 kev .the experimental setup is depicted in fig .[ fig : transsetup ] .a collimated beam generated by an oxford x - ray tube enters the chamber and impacts a target consisting of a copper plate covered with aluminum foil .the tube s rhodium anode is varied between trials over 2025 kv in order to produce a strong bremsstrahlung continuum for fluorescing aluminum and copper .an amptek xr-100cr silicon detector is positioned off - axis to monitor the resulting 8 kev copper and 1.5 kev aluminum lines .the polyimide sample is moved in and out of the path between the target and the detector with a linear vacuum feedthrough .the aluminum line count rate with the sample in the fluorescence path divided by that with the sample out of the fluorescence path is taken as the transmission .the copper line , which has transmission through the polyimide , is used to normalize spectra from run to run due to the varying x - ray tube flux .m maximum variation in the polyimide thickness . ] m maximum variation in the polyimide thickness . 
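the transmission measurement just described amounts to a ratio of fluorescence count rates, with the 8 kev cu line (nearly unattenuated by the micron-scale film) serving as a flux monitor, plus a beer-lambert path-length scaling when quoting the value at oblique incidence. the sketch below uses invented count rates; extrapolating to a different photon energy additionally requires tabulated optical constants (e.g. from the cxro database) and is not reproduced here.

```python
import numpy as np

def sample_transmission(al_in, al_out, cu_in, cu_out):
    """Normal-incidence transmission from the fluorescence count rates.

    al_in / al_out : 1.5 kev al-line rates with the sample in / out of the path
    cu_in / cu_out : 8 kev cu-line rates, used only to normalize away drifts
                     in the x-ray tube flux between the two runs
    """
    return (al_in / al_out) * (cu_out / cu_in)

def oblique_transmission(t_normal, incidence_from_normal_deg):
    """Beer-lambert path-length scaling of a normal-incidence transmission.

    t(theta) = t_normal ** (1 / cos(theta)); at 45 degrees the exponent is sqrt(2).
    """
    return t_normal ** (1.0 / np.cos(np.radians(incidence_from_normal_deg)))

# illustrative count rates only
t_meas = sample_transmission(al_in=767.0, al_out=1000.0, cu_in=998.0, cu_out=1000.0)
print(t_meas)                               # ~0.77 at 1.5 kev, normal incidence
print(oblique_transmission(t_meas, 45.0))   # same energy, 45-degree path length
```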
] the results of the measurement are shown in fig .[ fig : subtrans ] .the black points indicate the 76.7% and 78.6% measured transmission values for two samples of luxel polyimide .theoretical transmission values were calculated over the gems energy range using the center for x - ray optics online database .the measurements are consistent with the theoretical transmission values within the uncertainty in the film thickness , lending confidence to the ability to predict substrate transmission using tabulated optical constants .the extrapolated transmission at 2.7 kev and a incidence angle is 94.0% , deemed sufficient for the reflector substrate .multilayer modeling using the imd software package was used to select the multilayer materials and optimal parameters .the primary considerations were the theoretically achievable reflectance and transmission , systematic errors induced by the spacecraft pointing wobble , and the potential stress put on the substrate by the multilayer coating , which should be minimized to ensure polyimide membrane survival .the candidate material pairs were wc / sic and al/v .wc / sic was specifically designed to have near - zero compressive stress , and al/v has high theoretical reflectance below the vanadium l edge at 512 ev .although the brp reflector s nominal incidence angle is , the converging x - ray cone hitting the reflector will have an inherent spread in incidence angles of approximately , dictated by the geometry of the wolter i mirrors .an approximate form of this incidence angle distribution was determined using a simplified 2d model of the mirrors .this angular distribution was used to weight reflectance curves calculated for incidence angles covering the range , which were then summed together to arrive at the true reflector response versus energy .[ fig : blendedcurve ] shows two wc / sic reflectance curves .the sharply peaked curve represents the predicted response for a nominal incidence angle .the smooth , broadly peaked curve is the result from the weighted averaging of reflectance curves over the range . while the peak reflectance of the weighted curve is significantly lower than in the ideal case , the broader energy response acts to preserve the integrated reflectance .because the optical constants of vanadium are rapidly varying in our energy range , integrated reflectivity is not preserved under this transformation for al/v multilayers .[ cols="^ " , ]we have developed , fabricated , and characterized a reflective / transmissive multilayer polarizer element for operation in the 515 ev photon energy region .it consists of an ultra - short - period wc / sic multilayer coating deposited on a 2 m polyimide substrate of 28.6 mm diameter , mounted on an aluminum frame .two prototype reflectors have been fabricated in this manner and meet nearly all of the gems brp requirements .the reflectance and transmission achieved are more than adequate and are reliably predictable .both reflectance and transmission properties were stable after vibration and thermal cycling tests .ripples were observed due to thermal expansion effects and raised temperature control concerns , but selection of an appropriate frame material ( invar , kovar , or titanium ) would alleviate this issue . 
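the angle weighting described above can be summarized in a few lines : reflectance curves computed at a grid of incidence angles across the converging cone are combined with the angular weight function and summed to give the effective reflector response . the sketch below ( python ) is only schematic ; the gaussian - like weight , the assumed 45 degree nominal angle and the toy single - angle reflectance model are stand - ins for the geometric model of the wolter i mirrors and the imd calculations used in the actual analysis .

```python
import numpy as np

def effective_reflectance(energies, angles, weights, reflectance_fn):
    """weight single-angle reflectance curves by the incidence-angle
    distribution and sum them to obtain the blended reflector response.

    energies       : photon energies (ev)
    angles         : incidence angles sampled across the converging cone (deg)
    weights        : relative weight of each angle (need not be normalized)
    reflectance_fn : reflectance_fn(energy_array, angle) -> reflectance array
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    blended = np.zeros_like(np.asarray(energies, dtype=float))
    for angle, wi in zip(angles, w):
        blended += wi * reflectance_fn(energies, angle)
    return blended

# placeholder single-angle model: a bragg-like peak whose position shifts
# with angle (stand-in for an imd-computed multilayer curve)
def toy_reflectance(energies, angle_deg):
    e_peak = 500.0 * np.sin(np.radians(45.0)) / np.sin(np.radians(angle_deg))
    return 0.1 * np.exp(-0.5 * ((energies - e_peak) / 5.0) ** 2)

energies = np.linspace(450.0, 550.0, 501)
angles = np.linspace(43.5, 46.5, 31)                    # assumed angular spread
weights = np.exp(-0.5 * ((angles - 45.0) / 0.75) ** 2)  # assumed distribution
response = effective_reflectance(energies, angles, weights, toy_reflectance)
print("peak of blended response:", energies[response.argmax()], "ev")
```

as described above , the blended curve has a lower peak but a broader energy response than the single - angle curve , which is why the integrated reflectance is approximately preserved when the optical constants vary slowly with energy .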
were it not for the gems mission cancellation, the final step would be to obtain and multilayer - coat candidate reflectors supported by frames with ctes well - matched to that of the polyimide membrane .the reflectance and transmission , and the thermal and vibration stability of these elements would be verified using the processes discussed in this manuscript .the reflector element with optimum performance would then be selected for flight .the authors would like to thank bruce lairson of luxel corp . for providing helpful advice , many polyimide samples for analysis , and confocal microscope data .we also thank steve mcbride for use of his thermal cycling chamber , and john tomsick for making the arrangements .ryan allured and philip kaaret are grateful to takashi okajima and yang soong at gsfc for guidance during the early stages of reflector development .we acknowledge that nearly all of our multilayer modeling was made possible with david windt s imd software .this work was performed under the auspices of the u.s .department of energy by lawrence livermore national laboratory under contract no .de - ac52 - 07na27344 and by the university of california lawrence berkeley national laboratory under contract no .de - ac03 - 76f00098 .the advanced light source is supported by the director , office of science , office of basic energy sciences of the u.s .department of energy under contract no .de - ac02 - 05ch11231 .ryan allured and philip kaaret acknowledge partial support from nasa grant nnx08ay58 g .allured , r. , fernndez - perea , m. , soufli , r. , alameda , j.b . ,maxwell , a. , pivovaroff , m.j . ,gullikson , e.m . , & kaaret , p. proc .spie , 8502 ( 2012 ) .http://dx.doi.org/10.1117/12.930093 black , j. k. , deines - jones , p. , hill , j. e. , et al . proc .spie , 7732 , 25 ( 2010 ) briel , u.g . , & pfeffermann , e. nucl .instr . and meth .a242 , 376 - 381 ( 1986 ) bunner , a. n. , apj , 220 , 261 ( 1978 ) fernndez - perea , m. ultra - short - period wc / sic multilayer coatings for x - ray applications .instrum . & methods in physics research a ( 2012 ) http://dx.doi.org/ 10.1016/j.nima.2012.10.066 .henke , b. l. , gullikson , e. m. , & davis , j. c. , atomic data and nuclear data tables , 54 , 181 ( 1993 ) jensen , c.p ., madsen , k.k . , & christensen , f.e . , `` small d - spacing wc / sic multilayers for future hard x - ray telescope designs , '' experimental astronomy 20(1 - 3 ) , pp .93 - 103 ( 2005 ) soufli , r. , baker , s. l. , robinson , j. c. , et al .spie , 7361 , 7361u19 ( 2009 ) spiller , e. , soft x - ray optics , spie optical engineering press , bellingham , wa ( 1994 ) standard , general environmental verification .nasa goddard space flight center .gsfc - std-7000 ( 2005 ) swank , j. et al ., x - ray polarimetry : a new window in astrophysics by ronaldo bellazzini , enrico costa , giorgio matt and gianpiero tagliaferri . cambridge university press , p. 251( 2010 ) underwood , j.h .& gullikson , e.m ., `` high - resolution , high - flux , user friendly vls beam line at the als for the 50 - 1300 ev energy region , '' j. electron .rel . phenom .265 - 272 ( 1998 ) weisskopf , m. c. , elsner , r. f. , kaspi , v. m. , et al . , astrophysics and space science library , 357 , 589 ( 2009 ) weisskopf , m. c. , cohen , g. g. , kestenbaum , h. l. , et al . ,apj , 208 , l125 ( 1976 ) windt , d.l ., waskiewicz , w.k . , & griffith j.e . 
,, 33 , 2025 - 2031 ( 1994 ) windt , d.l ., imd - software for modeling the optical properties of multilayer films , computers in physics , 12 , 360 ( 1998 ) zombeck , m. 2007 , handbook of space astronomy and astrophysics : third edition , cambridge university press , cambridge , uk ( 2007 )
|
a soft x - ray , beam - splitting , multilayer optic has been developed for the bragg reflection polarimeter ( brp ) on the nasa gravity and extreme magnetism small explorer mission ( gems ) . the optic is designed to reflect 0.5 kev x - rays through a angle to the brp detector , and transmit 2 - 10 kev x - rays to the primary polarimeter . the transmission requirement prevents the use of a thick substrate , so a 2 μm thick polyimide membrane was used . atomic force microscopy has shown the membrane to possess high spatial frequency roughness less than 0.2 nm rms , permitting adequate x - ray reflectance . a multilayer thin film was specially developed and deposited via magnetron sputtering with reflectance and transmission properties that satisfy the brp requirements and with near - zero stress . reflectance and transmission measurements of brp prototype elements closely match theoretical predictions , both before and after rigorous environmental testing .
|
adhesion in the presence of roughness is usually destroyed very easily , as it was proved by fuller and tabor ( 1975 ) , even in low modulus materials like smooth rubber lenses against roughened surfaces .therefore , it is still surprising that some insects use adhesion for their locomotion , by using a series of mechanisms , the study of which has generated a very important area of research in the last decades .the mechanisms include splitting the contact into many spots and optimizing the shape and size of each contact ( hui _ et al ._ 2004 , kamperman __ , 2010 , gao & yao 2004 ) . however , even the best of the mechanisms , is unlikely to work with all possible rough surfaces , showing wavelengths and amplitude over different length scales ( huber _ et al . _ , 2007 ,pugno & lepore , 2008 ) , showing a truly efficient system for multiscale arbitrary roughness is extremely difficult to achieve .the understanding about when adhesion can be very strong or very weak depending on features of roughness , pre - load , and system architecture is so far very remote from being complete .at the opposite end of the classical finding of fuller & tabor ( 1975 ) , johnson ( 1995 ) demonstrated a mechanism for which roughness in the form of a sinusoidal wave , has a _ minimal effect _ after a sufficiently high pressure has been applied , because the contact naturally jumps into a state of full contact . indeed, this can happen even _ spontaneously _ ( at zero external load ) for sufficiently high work of adhesion .after this state has been reached , virtually the theoretical strength of the material is found , and one has to postulate either a tiny flaw at the interface , or air entrapment to escape this limit which is far from common experience .therefore , the role of roughness can be pressure - sensitive .guduru ( 2007 ) found a mechanism of enhancement which has some connection with johnson s model , in that he imagined a sphere with roughness in the form of axisymmetric waviness , and solved the problem assuming that roughness was effectively flattened during the deformation , so the contact was a simply connected area .this results in very large oscillations in the normal load as a function of indentation , which were also observed experimentally in guduru & bull ( 2007 ) , and gave rise to both large dissipation because of multiple jumps from unstable to stable branches , and to an enhancement of pull - off .kesari & lew ( 2010 ) further discussed how these unstable jumps could in the limit of small wavelength roughness define a continuous curve removing the oscillations of the original solution : ciavarella ( 2016a ) further remarked that the kesari & lew ( 2010 ) asymptotic expansion corresponds to splitting the classical jkr theory solution for spheres given in ( johnson _ et al . 
_ , 1971 ) into two branches , loading and unloading ones , which correspond to an decreased and increased values of work of adhesion , respectively , uniquely dependent on the johnson ( 1995 ) parameter for sinusoidal waviness contact .ciavarella ( 2016b ) further used the kesari & lew ( 2010 ) asymptotic expansion for multiscale roughness in the form of a weierstrass function , showing that the enhancement could be extremely large in this case , although the assumptions of the simply connected area solution become increasingly stretched .recently , mcmeeking _( 2010 ) have proposed a very simple model where two surfaces are gently brought into contact , one of which having a single small depression .this is in a sense a simplification of the single scale of waviness of johnson s ( 1995 ) model , with the additional significant advantage that in full 3d situations , sinusoidal roughness leads in intermediate regimes to a very complex problem with non - circular contact areas , whereas the dimple model preserves axisymmetry and permits a very simple solution , particularly with the shape chosen by mcmeeking _the model however preserves all of the features of the periodic waviness problem , in that there is a possibility of jump - into contact at some level of compression ( or it can be spontaneous for sufficient level of adhesion ) , and that there is an unstable pull - off at some value of external tension .this pull - off is no longer occurring on the crests , and therefore is not the known value for spheres given by jkr theory ( johnson _ et al . _ , 1971 ) , but depends on the shape of the depression .as johnson ( 1995 ) remarked , the single scale waviness model shows an extreme behaviour , which is likely to be affected by deviation due also to the presence of finer - scale roughness .guduru s model already answers some questions about the case of two scales of waviness , for contact near the crest , but the spherical geometry does nt admit a `` full contact '' limit , and therefore it does nt address the problem towards this regime . instead , the simple dimple model of mcmeeking _ et al . _( 2010 ) is ideal to a quantitative assessment of the full problem , by adding axisymmetric roughness .in other words , despite idealized , this geometry can be a model for two scales of roughness both in what happens in a single scale of waviness when encountering a depression in a surface , or specularly , for the depression itself having roughness .the advantage of the model is that we can study it in great details , especially in the asymptotic regime in which the wavelength of the roughness is much smaller than the non - contact area .we hope to elucidate qualitatively some features of the general behaviour of adhesive surfaces .we consider as smooth geometry that of a surface with a shallow depression , in the form of a dimple of amplitude and radius ( mcmeeking _ et al ._ 2010 ) , defined as k(\frac{b}{r})\right ] , \text { \ \ \ \ \ \ \ } \frac{r}{b}>1\end{aligned}\ ] ] where are complete elliptic integrals of first and second kind . to the smooth geometry ,we add an axisymmetric sinusoidal roughness of amplitude and wavelength .the geometry is clarified in fig.1 .{cc}{\includegraphics [ height=2.565 in , width=4.0274 in ] { dimplerough0.eps } } & \end{array } ] .it can be approximated by but only up to now , for small roughness , i.e. 
large , the loading curve will be close to the loading curve of the smooth case , and therefore the equilibrium crack size will be increased only marginally , while the unloading curve will have moved the point of pull - off to smaller sizes using the power law approximations for , we obtain the coincidence and pull - off is unique for a more exact calculation shows that varies by only few percent depending on . therefore , there is an interesting range below which show pressure - sensitiveness .we shall now investigate a little more this range .for the pressure - sensitive pull - off range , , a simple but good approximation ( provided the wavelength of the roughness is short enough ) is that we load up to a certain , and since we end up in the unloading unstable branch , we will have immediately pull - off . hence , loading up to results in a pull off we need , in fact , to explore only the range as otherwise , the upper bound pull - off is obtained , which we have already estimated .a set of results is shown in fig.7 .it is shown that the pull - off pressure grows at first _linearly with preload _ , and then saturates to a value which we have already estimated in ( [ sigma - pull - off - max ] ) and fig.6 .if we continue the curves above this value there would be a second transition towards full contact , which however we did not include in the figure , for simplicity . in fig7a , b , c , the case of are represented , and an horizontal gray line indicates the pull - off value for the smooth dimple .this permits to estimate the enhancement whose maximum value corresponds to the scale in fig.6 . for larger values of ,the maximum enhancement is smaller , however the sensitivity to the pre - load is higher , so it is possible to have higher pull - off , for a given preload .[ c]lldimplerough7a.eps & ( a ) + dimplerough7b.eps & ( b ) + dimplerough7c.eps & ( c ) fig.7 .pressure - sensitive pull - off as a function of preload for three cases of ( a , b , c , respectively )regarding the validity of the solution , some of the conclusions reached in guduru ( 2007 ) , but also ciavarella ( 2016a , 2016b ) translate into this problem , and viceversa , the present solution clarifies some limitations of the guduru solution . in other words, we are assuming from the outset that a continuous single connected ( non - contact ) area is obtained .when roughness is very large , as is the case when is low , we can imagine that this is not the way the solution proceeds .indeed , contact could be established only in the crests and proceed from crest to crest , before a continuous contact is possible .this explains why , _ counterintuitively _ , we have reached the conclusion that unbounded enhancement occurs for conditions of large roughness i.e. when johnson parameter is very low .guduru s geometry ( the sphere , in fact approximated as a paraboloid ) is such that one could postulate that any enhancement should occur after sufficiently large pressure has been applied although this at one point would involve finite strains in the sphere . 
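the pressure - sensitive behaviour described above , pull - off rising with preload until it saturates at the upper bound , can be written as a one - line rule . the sketch below ( python ) is purely illustrative , with placeholder values for the proportionality constant and the saturation level , which in the actual analysis follow from the power law approximations of the loading and unloading branches .

```python
import numpy as np

def pull_off_estimate(preload, slope=1.0, sigma_max=1.0):
    """illustrative pull-off versus preload: linear growth at small preload,
    saturating at the upper-bound value (placeholder slope and sigma_max)."""
    preload = np.asarray(preload, dtype=float)
    return np.minimum(slope * preload, sigma_max)

preloads = np.linspace(0.0, 2.0, 9)
print(pull_off_estimate(preloads))   # rises linearly, then flat at sigma_max
```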
in gudurus problem , corresponds to an enhancement of the sphere pull - off of a factor 4 see ciavarella ( 2016a ) , whereas in our dimple case , from ( [ sigma - pull - off - max ] ) we have a value which is slightly smaller , for .however , guduru & bull ( 2007 ) have actually demonstrated values of enhancement higher than 15 in experiments and there is no reason not to expect similar order of magnitude also in our dimple case , justifying our figures above which contain also the case .more precise estimates about this limit are not obvious : as guduru ( 2007 ) remarked the limit is not the monotonicity of the profile .another limit to the enhancement is that our analysis is limited to the jkr regime of large soft materials with high adhesion .we have not discussed in great details the detachment from full contact .johnson ( 1995 ) also does not discuss in detail for his sinusoidal waviness case , referring to the fact that we need a tension of the order of theoretical strength , and suggests air entrapment , contaminants , or indeed finer scale roughness , may reduce this . in fact , even for a single scale of waviness , a maugis solution ( hui _ et al _ , 2001 , jin _et al _ , 2016a , jin _ et al _ , 2016b ) shows that detachment will start when the _ peak stress in the full contact state _reaches the theoretical strength , and in our case this means although the actual critical condition to open the contact will depend also on the cod ( crack opening displacement ) which has to reach the maugis range of attraction forces .however , this shows that when the sum of is comparable to the theoretical strength , we can start opening already at values much smaller than theoretical strength .this limit is not so remote , since it occurs when the height of the dimple compared to its width ( and/or the amplitude of waviness compared to the wavelength ) , become of the order of 0.1 . indeed , as theoretical strengthis of the order of , even for we have . in the presence of roughness and the shallow depression ,the `` strong adhesion '' regime due to full contact disappears in realistic cases , and it will depend also on some appropriate tabor parameter .also the enhancement of the `` weak adhesion '' regime will depend on appropriate tabor parameter , but a full solution to the problem requires a full numerical investigation , outside the scopes of the present paper .originally , mcmeeking _ et al . 
_( 2010 ) introduced the `` dimple '' model as a simple geometrical model to explain a bistable system realized with just elastic materials and van der waals adhesive forces , showing the possible switch from `` strong adhesion '' realized when pushing in full contact from the stable intermediate equilibrium , to `` weak adhesion '' , when this pressure is not impressed , and one has the pull - off from the partial contact state .the analysis we conducted shows that , with roughness , the dimple model shows a much higher resistance to jump into full contact , and therefore the `` strong adhesion '' is obviously an ideal limit on two grounds : first , it may be difficult to achieve due to geometrical imperfections , depressions , air entrapment , contaminants , and roughness indeed ; second , the separation from this state may not be as difficult as expected , for the same very reasons .however , we showed that , partially balancing this effect , we have an `` enhancement '' of the `` weak - adhesion '' regime , which may serve the purpose if one can calibrate the geometry .we have shown a reduced parametric dependence of the system in the asymptotic expansion for small wavelength roughness on only two dimensionless parameters , one being the johnson parameter for the sinusoid , and the other the corresponding johnson parameter for the dimple .we obtained that when roughness is relatively large , a pressure - sensitive region is expected , and in this region , the actual pull - off depends monotonically on the pre - load , and indeed in a significant regions of parameters , linearly on pre - load .the model adds to our understanding of the effect of multiscale roughness on adhesion , which remains a complex problem in the general case .a.p . is thankful to the dfg ( german research foundation ) for funding the project ho 3852/11 - 1 .ciavarella , m. ( 2016a ) , on roughness - induced adhesion enhancement , j. strain analysis , in press , arxiv preprint arxiv:1602.06089 .ciavarella , m. ( 2016b ) .an upper bound to multiscale roughness - induced adhesion enhancement .tribology international , 102 , 99 - 102 .fuller , k. n. g. , & tabor , d. the effect of surface roughness on the adhesion of elastic solids .proc roy soc london a : 1975 ; 345:1642 , 327 - 342 gao , h. , & yao , h. shape insensitive optimal adhesion of nanoscale fibrillar structures .proceedings of the national academy of sciences of the united states of america,2004 : 101(21 ) , 7851 - 7856 .guduru , p.r .detachment of a rigid solid from an elastic wavy surface : theory j. mech .solids , 55 , 473488 guduru , p.r . , bull , c. ( 2007 ) detachment of a rigid solid from an elastic wavy surface : experiments j. mech .solids , 2007 : 55 , 473488 huber g , gorb s , hosoda n , spolenak r , arzt e. influence of surface roughness on gecko adhesion .acta biomater 2007;3:60710 .hui , c. y. , lin , y. y. , baney , j. m. , & kramer , e. j. ( 2001 ) .the mechanics of contact and adhesion of periodically rough surfaces .journal of polymer science part b : polymer physics , 39(11 ) , 1195 - 1214 .hui , c. y. , glassmaker , n. j. , tang , t. , & jagota , a. ( 2004 ) design of biomimetic fibrillar interfaces : 2 .mechanics of enhanced adhesion .journal of the royal society interface : 1(1 ) , 35 - 48 .jin , f. , guo , x. , & wan , q. ( 2016a ) . revisiting the maugis dugdale adhesion model of elastic periodic wavy surfaces .journal of applied mechanics , 83(10 ) , 101007 .jin , f. , wan , q. , & guo , x. ( 2016b ) . 
a double - westergaard model for adhesive contact of a wavy surface .international journal of solids and structures , 102 , 66 - 76 .johnson kl , kendall k. , and roberts a. d. ( 1971 ) .surface energy and the contact of elastic solids .proc royal soc london a : 324 .johnson k.l .the adhesion of two elastic bodies with slightly wavy surfaces , int j solids and struct 32 ( 3 - 4 ) , , pp .423 - 430 kamperman , m. , kroner , e. , del campo , a. , mcmeeking , r. m. , & arzt , e. functional adhesive surfaces with gecko effect : the concept of contact splitting .advanced engineering materials , 2010 : 12(5 ) , 335 - 348 .kesari , h. , & lew , a. j. ( 2011 ) .effective macroscopic adhesive contact behavior induced by small surface roughness .journal of the mechanics and physics of solids , 59(12 ) , 2488 - 2510 .maugis , d ( 2000 ) .contact , adhesion and rupture of elastic solids ( vol .springer , new york .mcmeeking , r. m. , ma , l. , & arzt , e. ( 2010 ) .bi - stable adhesion of a surface with a dimple .advanced engineering materials , 12(5 ) , 389 - 397 .pugno nm , lepore e. ( 2008 ) observation of optimal gecko s adhesion on nanorough surfaces .in ( [ plane - strain ] ) , we used a plane strain approximation , under the assumption that except perhaps for the first 2 - 3 oscillations , the problem really is plane strain .however , here is a numerical proof .obviously we meant ( [ plane - strain ] ) for a sinusoidal wave roughness of amplitude and wavelength , if we take the standard cumulative superposition approach used by guduru ( 2007 ) and take the contact area , we get which we have expanded for small . taking only the leading term in the derivative and using we find that when we take ( fig.a1a ) we find that the error is negligible but perhaps still visible on the first oscillations , but this reduces further with ( see fig.a1b ) , and as we are considering here the full contact pressure when , we do not need to worry about this approximation .
|
recently , a simple and elegant `` dimple '' model was introduced by mcmeeking _ et al . _ ( adv eng mat 12(5 ) , 389 - 397 , 2010 ) to show a mechanism for a bistable adhesive system involving a surface with a shallow depression . the system shows , at least for intermediate levels of stickiness , that external pressure can switch the system into a `` strong adhesive '' regime of full contact , or into weak adhesion and complete pull - off , similarly to the contact of surfaces with a single scale of periodic waviness . we add to this model the effect of roughness , in the simple form of a single scale of axisymmetric waviness , permitting a very detailed study , and we show that this induces a resistance to jumping into full contact on one hand ( limiting the `` strong adhesion '' regime ) , and an enhancement of pull - off and of hysteresis starting from the partial contact state on the other ( enhancing the `` weak adhesion '' regime ) . we show that the system depends on only two dimensionless parameters , given by the ratio of the work of adhesion to the energy needed to flatten the dimple or the waviness , respectively . the system becomes pressure - sensitive also in the intermediate states , as is observed in real adhesive rough systems . the model is , in a sense , the mirror image of the guduru model of rough spheres ( guduru , jmps , 55 , 473 - 488 , 2007 ) , with which it shares the limitations of an analysis that assumes a connected contact ( crack ) area , and it also serves the purpose of showing the effect of a depression in an otherwise periodic rough contact , towards the understanding of adhesion with multiple scales of roughness . roughness , adhesion , dimple model , jkr adhesion
|
the initial engagement of the condensed matter community with ultracold atomic physics during the first half of the past decade was due to the apparent _ similarity _ between the newly - created atomic systems and familiar ( electronic ) counterparts in the solid state .the two most notable examples of this trend were the observation of the superfluid to mott insulator transition in an optical lattice , and the creation of superfluid states of paired fermions , the neutral analogs of s - wave superconductors .one feature of the latter half of the decade was a growing appreciation for the _ differences _ between these two settings for quantum many body physics. this chapter concerns one of the most prominent , intriguing , and conceptually far - reaching of these : the out of equilibrium nature of many experiments in ultracold physics . mechanisms of equilibration in solid state systems are typically fast on the picosecond timescale with the resulting equilibrium states essentially immortal .in contrast , the lower temperatures and energy scales of ultracold gases means that the corresponding processes can be observed at the millisecond level , and non equilibrium physics is a fact of life during the few second lifespan of most ultracold systems .furthermore , there is good reason to believe that the _ mechanisms _ of equilibration in the ultracold domain are distinct from those that are generically important for electrons in the solid state .there are no phonons that can transfer energy into the lattice , ( typically ) no impurities to allow momentum to dissipate , and no spin - orbit interaction to mediate spin relaxation .we find ourselves therefore in an enviable ( if unfamiliar ) position : the simple hamiltonians that we write for the degrees of freedom of interest are , to a good approximation , all there is . to take a simple example, an applied zeeman field will typically polarize the spins of an electron system , while in an atomic system it leads only to larmor precession ( the quadratic zeeman effect can elicit interesting effects , however ) . the study of equilibration in _ isolated _ systems ( though this term should be used carefully , see below ) described by strictly hamiltonian dynamics is of course as old as statistical mechanics itself , dating back at least to boltzmann .it is the possibility of studying non equilibrium _phenomena in a very simple setting that has led to a resurgence of interest in this problem of fundamental physics . additionally , the way in which a system comes to equilibrium is believed to be sensitive to a number of factors that are already the focus of many experimental investigations in ultracold systems , namely _ dimensionality _ , _ disorder _ , _ integrability _ of the underlying dynamics , and of course the _ initial conditions_. in trying to draw together some of these threads for this chapter , we are faced with the familiar problem : equilibrium systems are all alike , while every non - equilibrium system is out of equilibrium in its own way .nevertheless , most phenomena of current interest can be associated with one or more of the above four aspects .let us take as an example an experiment that has acquired an iconic status as an illustration of out of equilibrium behavior in an atomic gas . in the ` quantum newton s cradle ' of kinoshita_ et al . 
_ , the evolution of the momentum distribution of arrays of harmonically confined 1d bose gases was studied over many periods of oscillation , after initially splitting the gas into two counterpropagating clouds . even though each atom undergoes many thousands of collisions during this time , the momentum distributions do not relax to the equilibrium distribution ( determined to be gaussian for thermal clounds of the same rms momentum ) .this is to be contrasted with thermalization in a three - dimensional gas , which occurs after a very small number of collisions .a plausible argument for this behavior is easily made if one assumes the applicability of the boltzmann equation in its naive form ( which is by no means clear ) , with a collision integral describing two - particle collisions . in a gas of particles of equal mass ,such collisions do not lead to a change in the distribution function , as particles either retain or exchange their momenta . within the boltzmann picture ,a change in the distribution function requires at least three - body collisions to be accounted for , as was done in a number of recent works . according to the discussion of ref . , once the suppression of three - body scattering due to interparticle repulsion is accounted for , the resulting damping rates are consistent with the long times observed by kinoshita _et al._. the example of the quantum newton s cradle vividly illustrates the role played by reduced dimensionality . additionally , the experimental conditions closely approximate one of the best - known integrable many - body systems : the 1d bose gas with -function interaction . in this integrable system ,three - body collision are absent , or more correctly factorize into successive two - body collisions .the three - body collisions that will eventually cause the system to equilibrate arise because the 1d model is only an approximation to experimental reality .the potential confining atoms to a tube naturally has higher modes of transverse excitation , and it is virtual processes involving these modes that gives rise to the three - body interactions . in section [ sec : integrability ]we will see another example of thermalization due to the breaking of integrability .equilibration is also often deemed to be particularly precarious in _ disordered _ systems , which appear at first to be the polar opposites of integrable systems .the phenomenon of localization due to quantum mechanical interference has been studied constantly since anderson s work more than fifty years ago .recently it was suggested that an _ isolated _ disordered interacting system can undergo a finite temperature localization transition . in the low temperature ( localized )phase the system is unable to come to internal equilibrium .this fascinating idea would seem to have a natural home in ultracold systems and will be discussed in section [ sec : loc ] .we begin our survey with the last of the themes identified above : the choice of initial conditions .a natural way to initiate out of equilibrium dynamics is to abruptly change some system parameter . in the classical description of phase ordering one speaks of a _quench _ if the variation of this parameter would cause the equilibrium system to pass through a phase transition . 
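the two - body argument above is easy to check numerically : for equal masses , conservation of momentum and energy in one dimension forces the colliding pair to exchange momenta , so the multiset of momenta in the gas , and hence the momentum distribution , is unchanged by any number of binary collisions . the sketch below ( python ) makes this explicit ; the gas size , number of collisions and initial distribution are arbitrary .

```python
import numpy as np

def elastic_collision_1d(p1, p2, m1=1.0, m2=1.0):
    """post-collision momenta for an elastic two-body collision in 1d."""
    v1, v2 = p1 / m1, p2 / m2
    v1p = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return m1 * v1p, m2 * v2p

rng = np.random.default_rng(1)
p = rng.standard_normal(1000)            # momenta of an equal-mass 1d gas
initial = np.sort(p.copy())

for _ in range(10000):                   # many random binary collisions
    i, j = rng.choice(len(p), size=2, replace=False)
    p[i], p[j] = elastic_collision_1d(p[i], p[j])

# for equal masses every collision simply exchanges the two momenta, so the
# sorted list of momenta (and hence the momentum distribution) never changes
assert np.allclose(np.sort(p), initial)
print("momentum distribution invariant under two-body collisions")
```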
by analogy , abruptly crossing a quantum phase transition has become known as a _ quantum quench _ , an alliterative coinage apparently due to calabrese and cardy , though recent papers use the term to refer to any abrupt change in the system .section [ sec : quench ] gives an introduction to some of the general theoretical issues with reference to a simple model , the ` ' field theory .as indicated above , this survey is necessarily incomplete and many interesting aspects of non - equilibrium behavior are missing .for example , we will have nothing to say about non - equilibrium steady states , which seem less relevant to ultracold physics than to ( say ) mesoscopic physics , where conditions of constant drive can be sustained for long periods .we will only have a few brief comments about connections between quantum non - equilibrium physics and quantum information concepts such as entanglement .it is hoped that readers will find the chapter a useful effort to motivate and explain some key current questions in non - equilibrium physics that are ripe for investigation .the study of _ phase ordering kinetics _ has an extremely long history in classical statistical physics .the main theoretical problem is to understand how a new phase appears after an abrupt change in system parameters . for a symmetry - breaking transition ,this entails an understanding of the growth of the order parameter , and the dynamics of topological defects formed in the process .it is natural to ask what changes when we cross a quantum rather than classical phase transition , and more generally how quantum mechanics changes things .we would like to argue that for a symmetry - breaking transition , it is only in the early stages of phase ordering that the difference will be acute .the reason is that the initial dynamics establishes a local fluctuating order parameter distribution that soon becomes macroscopic in character and leaves the quantum domain .the subsequent evolution of this distribution ( often called _ coarsening _ in the literature ) will then be described by classical equations of motion .thus the following discussion will focus on the quantum dynamics of the early stages of phase ordering and the emergence of the classical description within a simple model .we should point out that this issue was discussed many years ago in a cosmological context , predating the more recent work motivated by the experiment of ref . .let try to describe the effect of a quench in a system described by the ` ' hamiltonian ,\ ] ] where and are complex canonically conjugate fields =i\delta({\mathbf{r}}-{\mathbf{r}}') ] , but this lack of commutativity is increasingly irrelevant as time progresses , so that in the long time limit one can take , with having a gaussian distribution with the above variance .it s also useful to consider the schrdinger picture counterpart of the evolution in eq . .the initial gaussian is annihilated by , so the state at time is likewise annihilated by , corresponding to the wavefunction { } a(t)\exp\left(-\frac{ix^{2}|\omega_{f}|}{2}\left[1 - 2e^{i\theta}e^{-2|\omega_{f}|t}\right ] \right)\ ] ] where , and is a normalizing factor .the wigner distribution of the associated density matrix is ^{2}}{\alpha(t)}\right),\text { as } t\to\infty \end{split}\end{aligned}\ ] ] with , .the form of this distribution is illustrated in fig .[ fig : wigner ] .because of the decay of , the width of the distribution is increasing exponentially , while at long times the distribution is effectively . 
in other words , it is nonzero only on the phase space trajectory of a classical particle that starts at ., .the dashed lines indicate the contours of the hamiltonian .the initial distribution is axially symmetric but is ` squeezed ' along the classical trajectory with the passage of time . ]the exponential growth in the width of the distribution also reflects the classical trajectory of the particle , so that the probabilistic character may be attributed to the initial state by writing where is a classical gaussian random variable with variance ] and ] .the eigenstates of this model are equivalent through the jordan wigner transformation to slater determinants of free fermions with nearest - neighbor hopping and random on - site potentials .since every single - fermion eigenstate is localized by those potentials as long as , we expect that the dynamics of this spin hamiltonian are localized as well .the response of the system to a local perturbation is confined for all times with exponential accuracy to a radius set by the single - particle localization length .now if instead we had started with heisenberg rather than couplings between the spins , i.e. , added a term , then the model , at infinite temperature , is thought to have a dynamical transition as a function of the dimensionless disorder strength .this transition should be manifest in several physical quantities , including level statistics and dynamical correlation functions .level statistics should transition from wigner - dyson statistics and level repulsion for small to the poissonian distribution characteristic of a disordered system ( since spatial regions separated by the localization length are effectively independent ) for large .correlation functions of the conserved quantity ( equivalent to particle number ) should show , as originally emphasized by anderson , a transition in the long - time behavior between particles escaping to infinity or remaining in a bounded region .a system in the localized phase may fail to come into mutual thermal equilibrium among its constituent pieces . to understand why this is so , consider dividing the above spin chain into two equal halves , a and b. though the disorder is not the same on each side , we expect the two halves to be macroscopically identical .conventional thermodynamic wisdom would have that in the long time limit , the expectation value of total spin is the same on both sides .let us take the initial state of the system to be where are states of the a and b subsystems with .evidently there are many such states . in terms of the eigenstates of the whole system ,the long time average of is then in this sum the first factor reflects the distribution of in the eigenstate , while the second factor reflects the overlap of this state with the initial state .let us consider the uncoupled system , where the appropriate hopping term in eq .that connects the two regions is set to zero .the eigenstates are then eigenstates of , and no relaxation occurs .the vast majority of states have eigenvalue , and the distribution of generally has the form $ ] : deviations from the equilibrium value are exponentially rare .the subscript is to remind us that the function is just the entropy ( per site ) .when a small coupling is switched on , states with different are mixed together , but we expect them to retain their character , meaning that the histogram of probabilities associated with a particular state will be strongly peaked around the value that it had with certainty for . 
similarly , the probabilities will be largest for those states peaked around , the initial value .let us group the eigenstates in sets labelled by the value of that they had when . then defining , , eq. has the form since states in only overlap with at the order of perturbation theory in , it is reasonable that is exponentially small is this deviation .thus eq . is seen to involve a competition between two exponential factors that in the thermodynamic limit can pick out a value of different from zero , contradicting the assumption that the two parts of the system come to mutual equilibrium .the crucial part of the picture is that the states ` retain their character ' at , and this is just what is implied by the recent work discussed above . when there is no longitudinal part to the interaction , this is just anderson localization of noninteracting jordan wigner fermions .what is new is the claim that the same remains true in the interacting problem over some range of .however , the location of the transition and its critical properties have been difficult to obtain even numerically .a search based on level statistics was inconclusive , while studying correlation functions suggests a phase transition in both cases exact diagonalization of the hamiltonian was used , which strictly limits accessible system sizes .the phase transition in the correlation functions was interpreted in terms of an `` infinite - disorder '' critical point , analogous to the critical points in ground - state quantum phase transitions in disordered systems accessible via the real - space renormalization group ( rsrg ) .a approach to many - body localization based on numerical solution of rsrg equations predicts a sharp transition and a specific location , but it is currently difficult to compare these predictions to numerics on the microscopic model .specific proposals for realizing a many - body localization transition with ultracold atoms are discussed in ref .another way of searching for the putative many - body localization transition , at least numerically , involves the entanglement between different spatial regions . in the disordered phase, one would naively expect the entanglement entropy , defined in a moment , to saturate , as indeed happens in the non - interacting case , while in the extended phase entanglement would spread throughout the system .we discuss an interesting behavior in the dynamics of entanglement in the disordered phase below , after explaining some general features of entanglement in many - body systems .one of the most significant potential advantages of interacting many - atom systems is the ability to maintain quantum coherence for relatively long times than in the electron subsystem of solids .considerable effort has been devoted in recent years to understanding how entanglement , one of the basic notions of quantum information , behaves in many - particle systems , both for intrinsic interest and for potential application to quantum computing . since atomic systems at the moment may offer the best hope for observing many - particle entanglement in a quantitative way , and since entanglement underlies many of the questions touched on in preceding sections , we now review the basic notions of entanglement and its dynamics in model atomic hamiltonians . in our discussion of entanglement , for simplicity we will limit ourselves to the case of a pure state of a bipartite system . 
while the hilbert space of the full system is spanned by product states of basis states of and ,there are superpositions of the basis states that can not be factored into _ any _ pure states of and .a familiar example is the singlet state of two spins : an entangled state is simply one that is not a product state .the entanglement entropy , which has been the quantity most studied in the context of many - particle systems , is defined as the von neumann entropy of the reduced density matrix for either subsystem , where ( ) is the reduced density matrix of subsystem ( ) obtained by tracing over degrees of freedom in the other subsystem .the basic behavior of entanglement entropy in ground states of standard atomic or condensed matter is generally well understood , especially in one dimension .we will not discuss ground - state entanglement entropy in detail here but simply state a few results before referring the reader to recent in - depth reviews . in gapped systems ,the general expectation , which can be proved in some cases , is that entanglement entropy satisfies an `` area law '' , i.e. , scales as the volume of the boundary . for regular geometries ,the scaling is thus as for a subregion of linear size cut out of an infinite -dimensional system .topological phases can have subleading terms in the entanglement entropy that are of order unity ( i.e. , scaling as ) , in addition to the area law , and probe aspects of the topological order . for critical systems , the behavior of ground - state entanglement entropy is more complicated . in one dimension , for quantum critical pointsdescribed by 2d conformal field theories , the entanglement entropy diverges logarithmically , with a universal coefficient determined by the central charge : for a block of length cut from an infinite chain , where is the central charge and a short distance cutoff .there are similar logarithmic divergences at infinite - randomness critical points in one dimension , although differences between the pure and random cases become manifest in the entanglement spectrum . in higher dimensions ,critical points with a fermi surface can violate the area law , while others can obey the area law but also have subleading corrections of interest .to date it has been difficult to observe entanglement directly except in small systems . since many - particle systems with local hamiltonians are expected to have entanglement described by the area law , or possibly larger by a logarithmic factor , thermal entropy ( which scales as volume under the circumstances above ) will rapidly dominate except at the lowest temperatures .however , there are recent proposals for how relatively standard measurement processes could probe many - particle entanglement .entanglement also determines the ability of certain numerical methods based on matrix product states and their generalizations to capture accurately the ground state of a given hamiltonian . the growth of entanglement after a quench or sweep of parameters in the hamiltonian has been studied for one - dimensional systems using similar methods to those studied above in sections [ sec : quench ] . for a quench in a conformally invariant system , one can distinguish two simple scenarios : a `` global quench '' in which the parameters are changed instantly everywhere in space , versus a `` local quench '' in which the parameters are only changed at one point . 
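the definitions above translate directly into a short numerical routine : a pure state of a bipartite system is reshaped into a matrix indexed by the basis states of the two subsystems , the reduced density matrix is formed by a partial trace , and the von neumann entropy follows from its eigenvalues . the sketch below ( python ) checks that the singlet gives ln 2 and that a product state gives zero ; it is a generic illustration , not tied to any particular hamiltonian , but the same routine applied to a time - evolved state of a small chain is the standard way the post - quench entanglement growth discussed next is obtained numerically .

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """von neumann entropy of subsystem a for a pure state psi of a x b."""
    m = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    rho_a = m @ m.conj().T                    # reduced density matrix of a
    evals = np.linalg.eigvalsh(rho_a).real
    evals = evals[evals > 1e-12]              # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

# singlet state (|up,down> - |down,up>)/sqrt(2): maximally entangled qubit pair
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(entanglement_entropy(singlet, 2, 2))    # ~ ln 2 = 0.693...

# a product state has zero entanglement entropy
product = np.kron([1.0, 0.0], [np.sqrt(0.3), np.sqrt(0.7)])
print(entanglement_entropy(product, 2, 2))    # ~ 0
```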
in both cases , entanglement propagates along `` light cones '' ;this can be understood intuitively by noting that conformal invariance implies that all excitations propagate with a single velocity . for a global quench, this leads to entanglement growing linearly in time after the initial the same argument can be applied to understand the effects of sweeping across a conformally invariant critical point .we focus on the case of a translation - invariant problem , which is similar to the global quench as far as how entanglement propagates : after the sweep , if the hamiltonian is held constant then entanglement entropy increases linearly in time .the rate of this linear increase , however , is now determined in simple models by the number of excitations that were created above the ground state when the energy gap was small ( we make the same assumptions here as in section [ sub : kibble_zurek_scaling ] ) . this number is determined as a function of the sweep rate by the same scaling arguments as presented above . right after the sweep , the entanglement scales as a logarithm of the sweep rate with a universal coefficient combining central charge and correlation length exponent ; for the half - chain entropy , in more complicated models with interactions between the `` excitations '' in the final gapped region of the phase diagram , the behavior of entanglement during a sweep is more complex . without conformal invariance, there is no general rule known for the growth of entanglement with time , and indeed entanglement seems to grow quite rapidly once excitations begin to interact .( numerical studies in this regime become technically challenging as the entanglement quickly saturates the amount that can be captured with matrix product states of a given matrix size . )the growth of entanglement can be understood as a consequence of the `` entanglement thermalization hypothesis '' : if the interactions lead to apparent equilibration for local measurements with an effective temperature determined by the ( constant ) energy density , then the entanglement entropy of a subregion must reproduce the thermal entropy at that temperature , since this entropy can in principle be measured .but the thermal entropy will satisfy a volume law rather than an area law , and hence will be parametrically larger for a large subregion than the initial entanglement entropy , as the system began close to a ground state satisfying the area law ( possibly with logarithmic corrections ) . in closing, we note that the putative many - body localized phase discussed above also has a signature in the dynamics of entanglement that is not yet well understood . in a localized phase of a non - interacting system , an initial product statehas entanglement that grows in time but saturates once the individual particles have diffused over a length scale set by their localization length .in an extended phase , one expects entanglement to increase more rapidly , presumably as some power law of time depending on whether the transport is diffusive or ballistic . in interacting disordered systems, numerics suggest a regime where the entanglement entropy increases _ logarithmically _ in time without limit . whether this is generic behavior and indicates that the phase is not truly localized , or instead is a qualitative difference between localized phases with and without interactions , are questions for future theoretical work .a.l . 
acknowledges the support of the nsf through grant dmr-0846788 and research corporation through a cottrell scholar award , and useful discussions with peter arnold , alex kamenev , cass sackett , and dan stamper - kurn . j.e.m . was supported by the aro ole program .
|
this chapter seeks to outline a few basic problems in quantum statistical physics where recent experimental advances from the atomic physics community offer the hope of dramatic progress . the focus is on nonequilibrium situations where the powerful concepts and methods of equilibrium statistical physics and `` linear response '' theory ( for small deviations from equilibrium ) are not applicable . the problems discussed here are chosen in part because they have a high degree of `` universality '' or generality across different microscopic situations , as the major challenge in nonequilibrium statistical physics , both quantum and classical , has been to find principles as general as the basic principles of equilibrium statistical physics or linear response .
|
in this paper we propose a new approximate map decoding technique on tail - biting trellises that exploits the subtrellis structure of the tail - biting trellis to compute approximate aposteriori probabilities ( apps ) of codeword symbols . our algorithm is best described as a best - first search algorithm , meaning that the algorithm tries to track those subtrellises which are most likely to contain the transmitted codeword and computes marginals over these only . our approximate decoder works rather efficiently at higher values of signal to noise ratio . we compare our results with those obtained by the _ wrap _ version of the anderson - hladik map decoder , which we refer to as the ah - decoder . the rest of the paper is organized as follows . section [ sec : background ] gives some background . section [ sec : decoding ] describes the decoding algorithm . section [ sec : modification ] describes a simple modification to the scheme . section [ sec : simulations ] presents the results of simulations on an awgn channel on a state tail - biting trellis for the extended golay code and a state tail - biting trellis for a rate , memory , convolutional code . section [ sec : conclusion ] concludes the paper . tail - biting trellises for convolutional codes were introduced in and those for block codes in . for ease of notation we view the tail - biting trellis , of depth , to be defined on a sequential time - axis with and all valid paths restricted to those that begin and end at the same state . we call and respectively the start and final states of . in it was shown that a tail - biting trellis could be viewed as the superposition of subtrellises obtained from a coset decomposition of the code represented by it with respect to a subgroup . corresponding to each start state of the tail - biting trellis we define a subcode consisting of all codewords that begin and end at the same state . these subcodes all have identically structured subtrellises ( though with different labels ) , and share states at various time indices . this subtrellis structure is exploited to give an approximate map decoding algorithm . in order to explain the approximate map algorithm we define the weight of a subtrellis conditioned on the received vector as where is the subcode represented by .
the weight of a subtrellis is the aposteriori probability that the transmitted codeword lies in the particular subtrellis .the weight of a subtrellis is also the maximum contribution a subtrellis can make to the decision sums for computing the app of any digit of the codeword .the basic idea in map approximate algorithm ( maa ) is to start off by obtaining initial estimates to the weights of the subtrellises .the initial estimates are all overestimates .we begin with the best subtrellis with respect to the current estimates and begin a forward pass on that subtrellis .we define our updating function for estimates as we progress along sections of the current trellis such that the overestimates become more and more accurate as we go along , always converging to the exact value at the end .each time we move from one section to the next , we check the updated estimate against those of other subtrellises and switch to another subtrellis if it appears more likely to be the correct one .thus the algorithm may switch from one subtrellis to another during the course of execution .however , it is always guaranteed to finish on the most likely subtrellis .the same is done for the backward pass .now after the two passes are over , there will be subtrellises for which certain sections have been opened only in the forward or backward pass but not both . in the final marginalization phase we ignore these sections of subtrellises which have not been opened by both the passes . given a one - to - one tail - biting trellis of depth , we use the following notation .we denote the label of an edge by . for an edge define predecessor and successor operators as and . by nodes of we refer to the vertices of .the start node of a subtrellis is denoted by and the final node by .the set of all paths in the tail - biting trellis from node to node is denoted by .further let denote the set of paths from nodes in to nodes in . given a received vector we annotate the edges of the tail - biting trellis suitably using the channel information so that the codeword app gets decomposed along the edges of the codeword path .the weight of an edge is denoted by .we then define the weight of a path as the product of the weights of the edges constituting the path .the weight of a subtrellis , as defined previously , is then the sum of the weights of all paths in .the approximate algorithm computes a set of node - objective functions on the nodes of the tail - biting trellis .these function definitions are identical to the forward - backward passes defined in . and are functions whose domain is the nodes of the tail - biting trellis .they are defined recursively as with .+ this is just a forward - backward pass on , initialized according to the boundary conditions . also corresponding to each subtrellis we define two functions and domain is the set of nodes belonging to . and at a node capture the computational effects of exclusive forward and backward passes respectively on at . 
at a node is defined inductively as with .+ similarly at a node is defined inductively as with .+ it can be seen that at a node is the sum of the weights of all paths in , while at is the sum of the weights of all paths in .similarly at a node gives the sum of the weights of all paths in whereas at gives the sum of the weights of all paths in .we now give an informal description of the map approximate algorithm ( maa ) .+ map - aa + input : a one - to - one tail - biting trellis of depth , with edge weights suitably defined using the received vector and channel information + output : a vector of approximations to for and this phase computes the node - objective functions and with respect to the tail - biting trellis by executing a forward - backward pass on with boundary conditions as suggested by recursion [ eqn:1 ] .this phase computes a set of node - objective functions and with respect to the subtrellises that share the node .it consists of a forward and a backward pass .a node can be shared among many subtrellises and will belong to the domain of the functions defined with respect to these subtrellises .maa computes a subset of these functions either partially or completely .[ [ forward - pass ] ] forward pass + + + + + + + + + + + + at each step in the forward pass the approximate algorithm first chooses a winning subtrellis .it then computes for the next section of using recursion [ eqn:2 ] .the winning subtrellis is one at which a suitably defined heuristic function , , is maximized .we associate a working index with each subtrellis .the working index of a subtrellis gives the last section of the tail - biting trellis at which the node - objective function has been computed .the forward pass works along the following lines .the working indices are initialized to the start section and the boundary conditions of recursion [ eqn:2 ] are enforced .we start by choosing a winning subtrellis from the set of subtrellises defined on such that is maximized at .the heuristic function at a subtrellis with a working index is a function of and at section in .we describe the heuristic function in detail later .if the working index of is the final section , we successfully exit from the forward pass .otherwise we increment the working index to the next section . using the values of at the nodes of previous working index of ,we then compute at the nodes of the current working index according to recursion [ eqn:2 ] . after computing for the current working index , we re- evaluate the heuristic function at .we then go back to the process of choosing the winning subtrellis and computing the corresponding node - objective function at the next working index of the winning subtrellis .the heuristic function at with working index is defined as we now motivate this definition of the heuristic function .+ let be the set of paths in whose first edges lie in .it can be seen that is the sum of the weights of paths in .note that is the sum of weights of paths in . +now observe that .since the edge - weights are probabilities and therefore non - negative , this implies that also by definition and as a consequence .+ it follows that the heuristic for keeps falling after each revision and finally converges to the weight of .thus at any instant the heuristic for is an over - estimate to the weight of . 
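to make the best - first bookkeeping of the forward pass concrete , a possible realization in python is sketched below . this is an illustration only , not the authors ' implementation : the trellis and subtrellis access methods ( trellis.depth , trellis.edges(i) , subtrellises[s].start , subtrellises[s].nodes(i) , subtrellises[s].has_edge(u, v) ) are hypothetical names , and the heuristic is written as the sum , over the nodes at the working index , of the subtrellis - restricted forward value times the global backward value from phase 1 , which is one natural choice consistent with the over - estimate and convergence properties described above .

```python
def forward_pass(trellis, subtrellises, beta):
    """best-first forward pass of the maa (illustrative sketch only).

    assumed (hypothetical) api: trellis.depth is the index of the final
    section (sections run 0..depth); trellis.edges(i) yields weighted edges
    (u, v, w) from section i to section i + 1; subtrellises[s].start is the
    start node of subtrellis s; subtrellises[s].nodes(i) gives its nodes at
    section i; subtrellises[s].has_edge(u, v) tells whether (u, v) is an
    edge of s; beta[v] is the global backward value at node v from phase 1.
    """
    alpha = {s: {subtrellises[s].start: 1.0} for s in subtrellises}  # alpha_s(v)
    index = {s: 0 for s in subtrellises}                             # working indices

    def heuristic(s):
        # overestimate of the weight of subtrellis s; falls as s is expanded
        return sum(alpha[s].get(v, 0.0) * beta[v]
                   for v in subtrellises[s].nodes(index[s]))

    while True:
        s = max(subtrellises, key=heuristic)      # winning subtrellis
        if index[s] == trellis.depth:             # its working index is final
            return s, alpha
        i = index[s] = index[s] + 1
        for (u, v, w) in trellis.edges(i - 1):    # edges entering section i
            if u in alpha[s] and subtrellises[s].has_edge(u, v):
                alpha[s][v] = alpha[s].get(v, 0.0) + alpha[s][u] * w
```

in this sketch the heuristic is re - evaluated for every subtrellis at each step ; a priority queue keyed on the current estimates would avoid the repeated scans .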
if a forward pass has been completed on a subtrellis , the approximate algorithm guarantees that a forward pass will be completed on all subtrellises with .[ [ backward - pass ] ] backward pass + + + + + + + + + + + + + in the backward pass we compute a set of node - objective functions with respect to subtrellises as dictated by recursion [ eqn:3 ] .the backward pass is similar in spirit to the forward pass except for the definition of the heuristic function and the backward direction of computational flow on the trellis .the computation starts by initializing the working indices of subtrellises to the final section and ends when the working index of the winning subtrellis is the start section .the heuristic function at with working index for the backward pass is defined as it can be seen that is the sum of weights of all paths in whose last edges lie completely in .it follows that all the properties noted for the heuristic function along the forward pass carry over to the heuristic function for the backward pass .this phase computes the approximate marginals corresponding to each symbol and each position .we compute the approximate aposteriori probabilities as the product in the inner - sum is taken over only those node - objective functions which have been computed .by restricting the phase 2 of maa to work with a fixed number of subtrellises say , we can reduce the storage requirements of the algorithm at the cost of incurring a further penalty in the accuracy of the apps computed . in order to decide the subtrellises to work with , we evaluate at each subtrellis and choose the first subtrellises at which this quantity is the largest .+ the rationale behind this choice is that both and are overestimates to the and the minimum of the two is nearer to the true weight of .we call this modified scheme as the ( ) .surprisingly this scheme gives pretty good results for the codes on which we have run experiments .we have coded the maa , 4-maa and ah - decoder and shown the results of simulations on the minimal 16 state tail - biting trellis for the extended ( 24,12,8 ) golay code and a 64 state tail - biting trellis for a rate 1/2 convolutional codes with memory 6 ( equivalent to the ( 554,744 ) convolutional code of ) , with circle size 48 .this is the same code experimented on in .the channel model used is an awgn channel with antipodal signaling .the source bits were assumed to be equally - likely . for the convolutional codewe show the variation of the average number of forward - backward updates with the signal to noise ratio for the maa and compare it with the number of forward - backward updates required by the ah - decoder .+ we also show the variation of average number of subtrellises explored by the phase 2 of maa with signal to noise ratio .the tail - biting trellis representing the code has states and subtrellises .each subtrellis has states .the result is displayed in table [ tab : expansions2 ] .it can be seen that at moderate to high snr , the phase 2 of maa seems to work on only a single subtrellis .we also display the performance of the maa , 4-maa and ah - decoder in figures [ fig : golay - ber ] and [ fig : conv1-ber ] and find that there is virtually no difference in the bit error rates for the three algorithms for the convolutional code . 
for the golay code the maa and the 4-maa seem to do slightly better than the ah - decoder . the update counts and the average number of subtrellises examined by phase 2 of maa are summarized in table [ tab : expansions2 ] :

table [ tab : expansions2 ] . average number of forward - backward updates and average number of subtrellises examined by phase 2 of maa , per snr value , on the 64 state tail - biting trellis of the convolutional code ( the ah - decoder performs a fixed number of updates ) .

  snr ( db )   updates by maa   updates by ah - decoder   subtrellises examined by maa
  0.0          91867            22528                     7.60
  0.5          53737            22528                     4.04
  1.0          34113            22528                     2.16
  1.5          25984            22528                     1.38
  2.0          23087            22528                     1.11
  2.5          22230            22528                     1.02
  3.0          22049            22528                     1.00
  3.5          22014            22528                     1.00
  4.0          22008            22528                     1.00
  4.5          22008            22528                     1.00
  5.0          22008            22528                     1.00

we have shown that at the expense of some extra space we can obtain approximate algorithms with good performance for map decoding on tail - biting trellises . simulations on tail - biting trellises for the ( 24,12 ) extended golay code and for a rate 1/2 , memory 6 convolutional code have been carried out , and the results on an awgn channel are reported .
|
we propose two approximate algorithms for map decoding on tail - biting trellises . the algorithms work on a subset of nodes of the tail - biting trellis , judiciously selected . we report the results of simulations on an awgn channel using the approximate algorithms on tail - biting trellises for the extended golay code and a rate convolutional code with memory . [ section ]
|
when a weak structural element in a material with stochastically distributed strengths fails , the increased load on the remaining elements may cause further ruptures , and thus induce a burst avalanche of a certain size , i.e. one in which elements fail simultaneously .when the load is further increased , new avalanches occur . the distribution of avalanche sizes , either at a fixed load , or the cumulative distribution from zero load until complete break - down of the material , depends on several factors , in particular the threshold strength distribution and the mechanism for load sharing between the elements . due to the complex interplay of failures and redistributions of local stresses ,few analytical results are available in this field ; computer simulations are commonly applied see herrmann and roux for a review . however , firm analytical results , albeit on simplified models , are important in order to develop a deeper understanding for universal properties and general trends . in the present articlewe therefore review and study burst events in models of fibrous materials that are sufficient simple to allow theoretical treatment .the models we consider are bundles of parallel fibers , clamped at both ends , and stretched by a force ( fig . [ fig1 ] ) .the individual fibers in the bundle are assumed to have strength thresholds , , which are independent random variables with the same cumulative distribution function and corresponding density function : whenever a fiber experiences a force equal to or greater than its strength threshold , it breaks immediately and does not contribute to the strength of the bundle thereafter .the models differ , apart from differences in the threshold distribution , in how stress is redistributed on the surviving fibers when a fiber fails . a central quantity to be studied in the followingis the expected number of bursts of size when the fiber bundle is stretched until complete breakdown .the model of this kind with the longest history is one in which it is assumed that the fibers obey hookean elasticity right up to the breaking point , and that the load distributes itself equally among the surviving fibers .the model with this democratic load redistribution is similar to mean - field models in statistical physics , and is called here the _global model_. for large daniels was able to determine the asymptotic distribution for the bundle strength , a result that been refined later .the distribution of burst avalanches was first studied by hemmer and hansen .their main result was that for a large class of threshold distributions the bursts were distributed according to an asymptotic power law , with a universal exponent in sec .[ sec2 ] we show that for special threshold distributions the power law ( [ 2 ] ) is not obeyed .the assumption of global load - sharing among surviving fibers is often unrealistic , and it is natural to consider models in which the extra stresses by a fiber rupture are taken up by the fibers in the immediate vicinity . the extreme version is to assume that only the _ nearest - neighbor _ surviving fibers take part in the load - sharing . 
in a one - dimensional geometry , as in fig .[ fig1 ] , presicely two fibers , one on each side , share the extra stress .when the strength thresholds take only two values , the bundle strength distribution has been found analytically .one interesting result is that the average bundle strength has a logarithmic size effect .the distribution of burst avalanches for such models with local load - sharing has not yet been determined , but simulations show that this model is _ not _ in the same universality class as the global model . the challenge to determine the burst distribution by other means than simulations remains , andthat this is possible , at least in a special case ( sec .[ sec3 ] ) , is one of the main results of the present article .in the global model the total force on a fiber bundle is distributed evenly on the surviving fibers . with a given load per fiber , all fibers with threshold less than have failed , while fibers with thresholds above survives . for large the average number of fibers with thresholds exceeding the value will be ] . andthe thresholds in the interval , of which there are , will be poisson distributed .when is arbitrary large , the burst sizes can be arbitrary large in any finite interval of .assume that an infinitesimal increase in the external force results in a break of a fiber with threshold .then the load that this fiber suffered , will be redistributed on the ] .the region to the right of fiber number has broken fibers , and the probability of this is .this gives the recursion relation (l;x ) .\label{pf}\ ] ] the last factor is the probability that fiber number , which has failed neighbors , holds .insertion of the product form into ( [ pf ] ) yields the following equations for the : (l;x)t(x)^{-l-1}\;p_f(i;x ) .\label{pf}\ ] ] it is consistent to let .since may take the values , ( [ pf ] ) is a set of homogeneous equations for the quantities .the system determinant of the equation set must vanish , and this determines for a given force parameter . with quantities can then determined .the practical solution procedure is by iteration . from the definitions of and it follows that the ratio is the probabilility , at force parameter , that among the first fibers there is no fatal burst , _ given _ that there are failed fibers on the right - hand side .then is the probability that an increase of the force parameter from to starts an avalanche of size and length , so that afterwards there is no fatal burst among the fibers on the left - hand side .finally we want to determine the probability for a burst of size in a system of fibers in a ring configuration ( fig .[ fig1 ] ) . on the left of a selected fiber we consider a region of fibers , and on the right a region of fibers .the probability that the force parameter increase induces a burst of size and length to the left of that holds is given by ( [ pro ] ) .here , of course .on the right - hand side of a number fibers adjacent to may have failed .( here is less than the remaining number of fibers . )the probability of such a configuration ( with no fatal burst ) is .we must also take into account that the fiber itself , with failed neighboring fibers , must hold , the probability of which is $ ] .when we take this together , sum over the possible values of , and , and integrate over , we obtain \;dx.\end{aligned}\ ] ] using the product property the sum over simply yields a factor , and we find . 
\label{864}\end{aligned}\ ] ] this may now be evaluated .the results for a bundle of are shown in table [ tab2 ] , together with simulation results for 4 000 000 bundles , each having 20 000 fibers .the agreement between the simulation data and the theoretical data is , as we see , extremely satisfactory .an analysis of the burst distribution obtained for this local model shows that the distribution does not follow a power law except for small values of ( fig .[ fig3 ] ) .if one nevertheless does a linear regression analysis on this part of the data set , the effective power would be of the order , considerable larger than the `` mean - field '' value for the global model .let us now attempt to find an simple estimate for the maximal force per fiber that the fiber bundle can tolerate . in order to dothat we assume that the fatal burst occurs in a region where no fibers have previously failed so that the burst has the same magnitude and length . we know that a single burst of length is fatal , eq .( [ max ] ) , so our criterion is simply if we take into account that the two fibers adjacent to the burst should hold , and ignore the rest of the bundle , the gap distribution would be ^ 2p(\delta , \delta;x)\;dx = \frac{2p(\delta,\delta)}{\delta(\delta+1)(\delta+2)^{\delta+1}}.\ ] ] with the abbreviation we have using stirling s formula .taking logarithms we have \simeq - ( \delta + 2),\ ] ] using the result ( [ conv ] ) of appendix [ appb ] for when is large . the failure criterion ( [ fatal ] ) then takes the form since we have the following estimate for the maximum force that the fiber bundle can tolerate before complete failure : thus the maximum load that the fiber bundle can carry does not increase proportional to the number of fibers , but slower .this is to be expected since the probability of finding somewhere a stretch of weak fibers that start a fatal avalanche increases when the number of fibers increases .the dependence agrees with a previous estimate by zhang and ding and is seen also in the model with thresholds zero or unity .we have in this article discussed burst distributions in fiber bundles with two different mechanisms for load distribution when fibers rupture , viz . global or extremely local load redistributions .the main results are the following : + ( i ) for the global model the burst distribution follows a universal power law .+ ( ii ) deviations from this power - law dependence may , however , occur for exceptional distributions of fiber strengths .+ ( iii ) for the local model and for a uniform distribution of fiber thresholds we show that it is possible , although complicated , to carry through an theoretical analysis of the burst distribution .+ ( iv ) a simulation study for a bundle of 20 000 fibers confirms convincingly the theoretical results .+ ( v ) for the local model the burst distribution falls off with increasing burst size much faster than for the global model , and does not follow a power law .+ ( vi ) the expected maximum load that a bundle with global redistribution mechanism can tolerate increases proportional to the number of fibers , and proportional to for the local redistribution mechanism . 
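as a numerical companion to the conclusions above , the burst statistics of the two load - sharing rules can also be sampled directly by monte carlo . the python sketch below is an illustration only , and is not the code behind the simulations reported in this article : it assumes thresholds drawn uniformly on [ 0 , 1 ] , reads off the bursts of the global model as the gaps between successive records of the fiber failure loads , and , for the local model , lets each surviving fiber on a ring carry the extra load of the adjacent failed stretches in equal halves , as in the model analysed above .

```python
import numpy as np
from collections import Counter

def global_bursts(n_fibers, rng):
    """burst sizes for one bundle under equal (global) load sharing.
    f[k] = t_(k) * (n - k) is the total force at which the (k+1)-th weakest
    fiber fails once the k weakest are gone; burst sizes are the gaps
    between successive records of f. the last entry is the fatal burst."""
    t = np.sort(rng.random(n_fibers))
    f = t * (n_fibers - np.arange(n_fibers))
    bursts, record, start = [], -1.0, 0
    for k in range(n_fibers):
        if f[k] > record:
            if k > start:
                bursts.append(k - start)
            record, start = f[k], k
    bursts.append(n_fibers - start)
    return bursts

def local_bursts(n_fibers, rng):
    """burst sizes for one ring of fibers under nearest-neighbour (local)
    load sharing: a surviving fiber bordered by failed stretches of lengths
    g and h carries x * (1 + (g + h) / 2) at external force x per fiber."""
    t = rng.random(n_fibers)
    alive = np.ones(n_fibers, dtype=bool)

    def load_factor(i):
        g, j = 0, (i - 1) % n_fibers
        while not alive[j]:
            g, j = g + 1, (j - 1) % n_fibers
        h, j = 0, (i + 1) % n_fibers
        while not alive[j]:
            h, j = h + 1, (j + 1) % n_fibers
        return 1.0 + 0.5 * (g + h)

    bursts = []
    while alive.any():
        living = np.flatnonzero(alive)
        x = min(t[i] / load_factor(i) for i in living)   # next critical force
        size = 0
        while True:                                      # avalanche at fixed x
            failing = [i for i in np.flatnonzero(alive)
                       if t[i] <= x * load_factor(i) + 1e-12]
            if not failing:
                break
            alive[failing] = False
            size += len(failing)
        bursts.append(size)
    return bursts

# example: burst-size histograms over many bundles
# (modest sizes keep the pure-python local sweep quick)
rng = np.random.default_rng(0)
hist_global = Counter(d for _ in range(100) for d in global_bursts(20000, rng))
hist_local = Counter(d for _ in range(50) for d in local_bursts(500, rng))
```

aggregated over many bundles ( and excluding the final catastrophic burst of each bundle ) , the global - rule histogram can be compared with the asymptotic power law with the universal exponent 5/2 , while the local - rule histogram , which can be compared with exact results of the kind shown in table [ tab2 ] , falls off much faster than such a power law .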
the combinatorial problem in sec .[ sec2a ] can be formulated more generally as follows : let be the probability that by distributing nonidentical particles among numbered boxes , box number will contain no particles , box number will contain at most particle , and in general box number will contain at most particles .we now prove by induction that assume that this holds for , all .insertion into the right - hand side of ( [ app.2 ] ) gives in accordance with ( [ app.3 ] ) .since ( [ app.3 ] ) is valid for , the induction is complete . in sec .[ sec3c ] an estimate for was needed .we base it on the recursion relations ( [ n3 ] ) , ( [ n5 ] ) and ( [ n7 ] ) for , : we have used that og vanish for . introducing the generating function the recursion ( [ r ] )may be transformed to the differential equation = g(z),\ ] ] with solution thus the radius of convergence of the power series ( [ g ] ) is unity , and therefore h. j. hermann and s. roux , eds . _ statistical models for the fracture of disordered media _ ( north - holland , amsterdam , 1990 ) .f. t. peirce , j. text .* 17 * , 355 ( 1926 ) .h. e. daniels , proc .london * a183 * , 405 ( 1945 ) . h. e. daniels , adv .* 21 * , 315 ( 1989 ) .p. c. hemmer and a. hansen , asme j. appl.mech .* 59 * , 909 ( 1992 ) .p. m. duxbury and p. m. leath , phys .b * 49 * , 12676 ( 1994 ) .a. hansen and p. c. hemmer , phys .a * 184 * , 394 ( 1994 ) .s. d. zhang and e. j. ding , phys .a * 193 * 425 ( 1994 ) .a. hansen and p. c. hemmer , trends in statistical physics * 1 * , 213 ( 1994 ) .w. lee , phys .b * 50 * , 3797 ( 1994 ) .d. g. harlow and s. l. phoenix , j. mech .phys.solids * 39 * , 173 ( 1991 ) , and references therein .d. g. harlow and s. l. phoenix , int .j. fracture * 17 * , 601 ( 1981 ) . c. c. kuo and s. l. phoenix , j. appl .* 24 * , 137 ( 1987 ) .s. d. zhang and e. j. ding , phys .b * 53 * , 646 ( 1996 ) .s. d. zhang and e. j. ding , j. phys .a * 28 * , 4323 ( 1995 ) .k. olaussen , private communication .
|
we review limiting models for fracture in bundles of fibers , with statistically distributed thresholds for breakdown of individual fibers . during the breakdown process , avalanches consisting of simultaneous rupture of several fibers occur , and the distribution of the magnitude of such avalanches is the central characteristics in our analysis . for a bundle of parallel fibers two limiting models of load sharing are studied and contrasted : the _ global _ model in which the load carried by a bursting fiber is equally distributed among the surviving members , and the _ local _ model in which the nearest surviving neighbors take up the load . for the global model we investigate in particular the conditions on the threshold distribution which would lead to anomalous behavior , i.e. deviations from the asymptotics , known to be the generic behavior . for the local model no universal power - law asymptotics exists , but we show for a particular threshold distribution how the avalanche distribution can nevertheless be explicitly calculated in the large - bundle limit .
|
in recent work we have studied recursive markov decision processes ( rmdps ) and turn - based recursive simple stochastic games ( rssgs ) ( ) , providing a number of strong upper and lower bounds for their analysis .these define infinite - state ( perfect information ) stochastic games that extend recursive markov chains ( rmcs ) ( ) with non - probabilistic actions controlled by players . herewe extend our study to recursive concurrent stochastic games ( rcsgs ) , where the two players choose moves simultaneously and independently at each state , unlike rssgs where only one player can move at each state .rcsgs define a class of infinite - state zero - sum ( imperfect information ) stochastic games that can naturally model probabilistic procedural programs and other systems involving both recursive and probabilistic behavior , as well as concurrent interactions between the system and the environment .informally , all such recursive models consist of a finite collection of finite state component models ( of the same type ) that can call each other in a potentially recursive manner . for rmdps and rssgs with multiple exits( terminating states ) , our earlier work already showed that basic questions such as almost sure termination ( i.e. does player 1 have a strategy that ensures termination with probability 1 ) are already undecidable ; on the other hand , we gave strong upper bounds for the important special case of _ single - exit _ rmdps and rssgs ( called 1-rmdps and 1-rssgs ) .our focus in this paper is thus on single - exit recursive concurrent stochastic games ( 1-rcsgs for short ) .these models correspond to a concurrent game version of multi - type _ branching processes _ and _ stochastic context - free grammars _ , both of which are important and extensively studied stochastic processes with many applications including in population genetics , nuclear chain reactions , computational biology , and natural language processing ( see , e.g. , and other references in ) .it is very natural to consider game extensions to these stochastic models .branching processes model the growth of a population of entities of distinct types . in each generationeach entity of a given type gives rise , according to a probability distribution , to a multi - set of entities of distinct types .a branching process can be mapped to a 1-exit recursive markov chain ( 1-rmc ) such that the probability of eventual extinction of a species is equal to the probability of termination in the 1-rmc . modelingthe process in a context where external agents can influence the evolution to bias it towards extinction or towards survival leads naturally to a game .a 1-rcsg models the process where the evolution of some types is affected by the concurrent actions of external favorable and unfavorable agents ( forces ) . 
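the correspondence with 1-rmcs in the purely probabilistic case ( no players ) can be made concrete : the extinction probabilities of a multi - type branching process form the least fixed point of a monotone system of polynomial equations , and iterating that system starting from zero converges to them from below . the short python sketch below illustrates only this classical fact ( the offspring data and the function name are invented for the example ) ; the games studied in this paper replace such right - hand sides by minimax expressions over one - step matrix games .

```python
import numpy as np

def extinction_probabilities(offspring, n_iter=100_000, tol=1e-12):
    """least-fixed-point iteration for the extinction probabilities of a
    multi-type branching process (a sketch). offspring[i] is a list of
    (prob, counts) pairs: an entity of type i produces counts[j] children
    of type j with probability prob. the extinction vector q is the least
    solution of q_i = sum_r prob_r * prod_j q_j ** counts_r[j], which is
    also the termination probability of the corresponding 1-exit rmc."""
    q = np.zeros(len(offspring))
    for _ in range(n_iter):
        new = np.array([sum(p * np.prod(q ** np.asarray(c, dtype=float))
                            for p, c in rules)
                        for rules in offspring])
        done = np.max(np.abs(new - q)) < tol
        q = new
        if done:
            break
    return q

# one type that dies out with probability 0.3 or branches into two with 0.7;
# the least root of q = 0.3 + 0.7 * q**2 is 3/7 ~ 0.4286
print(extinction_probabilities([[(0.3, [0]), (0.7, [2])]]))
```

the value iteration used later in the paper ( section 3 ) generalizes exactly this scheme to the game setting : the iterates start at zero and converge monotonically to the least fixed point , with the right - hand side at a play vertex given by the value of a one - step zero - sum matrix game , computable by linear programming .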
in , we showed that for the turned - based 1-rssg termination game , where the goal of player 1 ( respectively , player 2 ) is to maximize ( resp .minimize ) the probability of termination starting at a given vertex ( in the empty calling context ) , we can decide in pspace whether the value of the game is for a given probability , and we can approximate this value ( which can be irrational ) to within given precision with the same complexity .we also showed that both players have optimal _ deterministic _ _ stackless and memoryless _ ( sm )strategies in the 1-rssg termination game ; these are strategies that depend neither on the history of the game nor on the call stack at the current state .thus from each vertex belonging to the player , such a strategy deterministically picks one of the outgoing transitions .already for finite - state concurrent stochastic games ( csgs ) , even under the simple termination objective , the situation is rather different .memoryless strategies do suffice for both players , but randomization of strategies is necessary , meaning we ca nt hope for deterministic -optimal strategies for either player .moreover , player 1 ( the maximizer ) can only attain -optimal strategies , for , whereas player 2 ( the minimizer ) does have optimal randomized memoryless strategies ( see , e.g. , ) .another important result for finite csgs is the classic hoffman - karp strategy improvement method , which provides , via simple local improvements , a sequence of randomized memoryless strategies which yield payoffs that converge to the value of the game .here we generalize all these results to the infinite - state setting of 1-rcsg termination games .we first characterize values of the 1-rcsg termination game as the least fixed point solution of a system of nonlinear minimax functional equations .we use this to show pspace decidability for the _ qualitative termination problem _ ( is the value of the game ? ) and the _ quantitative termination problem _ ( is the value of the game ( or , etc . ) , for given rational ) , as well as pspace algorithms for approximating the termination probabilities of 1-rcsgs to within a given number of bits of precision , via results for the existential theory of reals .( the simpler `` qualitative problem '' of deciding whether the game value is only depends on the transition structure of the 1-rcsg and not on the specific probabilities . for this problemwe give a polynomial time algorithm . )we then proceed to our technically most involved result , a strategy improvement technique for 1-rcsg termination games .we use this to show that in these games player 1 ( maximizer ) has -optimal randomized - stackless & memoryless ( r - sm for short ) strategies , whereas player 2 ( minimizer ) has optimal r - sm strategies . 
thus ,such games are r - sm - determined .these results mirror and generalize in a very strong sense the randomized memoryless determinacy results known for finite stochastic games .our technique extends hoffman - karp s strategy improvement method for finite csgs to an infinite state setting .however , the proofs in our infinite - state setting are very different .we rely on subtle analytic properties of certain power series that arise from studying 1-rcsgs .note that our pspace upper bounds for the quantitative termination problem for 1-rcsgs can not be improved to np without a major breakthrough , since already for 1-rmcs we showed in that the quantitative termination problem is at least as hard as the square - root sum problem ( see ) .in fact , here we show that even the _ qualitative termination problem _ for 1-rcsgs , where the problem is to decide whether the value of the game is exactly 1 , is already as hard as the square - root sum problem , and moreover , so is the quantitative termination decision problem for _ finite _ csgs .we do this via two reductions : we give a p - time reduction from the square - root sum problem to the quantitative termination decision problem for _ finite _ csgs , and a p - time reduction from the quantitative finite csg termination problem to the qualitative 1-rcsg termination problem .it is known ( ) that for finite concurrent games , probabilistic nodes do not add any power to these games , because the stochastic nature of the games can in fact be simulated by concurrency alone .the same is true for 1-rcsgs .specifically , given a finite csg ( or 1-rcsg ) , , there is a p - time reduction to a finite concurrent game ( or 1-rcg , respectively ) , without any probabilistic vertices , such that the value of the game is exactly the same as the value of the game .we will provide a proof of this in section [ sec : basics ] for completeness. * related work .* stochastic games go back to shapley , who considered finite concurrent stochastic games with ( discounted ) rewards .see , e.g. , for a recent book on stochastic games .turn - based `` simple '' finite stochastic games were studied by condon . as mentioned , we studied rmdps and ( turn - based ) rssgs and their quantitative and qualitative termination problems in .in we showed that the qualitative termination problem for both maximizing and minimizing 1-rmdps is in p , and for 1-rssgs is in np .our earlier work developed theory and algorithms for recursive markov chains ( rmcs ) , and have studied probabilistic pushdown systems which are essentially equivalent to rmcs .finite - state concurrent stochastic games have been studied extensively in recent cs literature ( see , e.g. , ) . in particular , the papers and have studied , for finite csgs , the _ approximate _ reachability problem and _ approximate _ parity game problem , respectively . 
in those papers , it was claimed that these approximation problems are in np .actually there was a minor problem with the way the results on approximation were phrased in , as pointed out in the conference version of this paper , but this is a relatively unimportant point compared to the flaw we shall now discuss .there is in fact a serious flaw in a key proof of .the flaw relates to the use of a result from which shows that for discounted stochastic games the value function is lipschitz continuous with respect to the coefficients that define the game as well as the discount .importantly , the lipschitz constant in this result from depends on the discount ( it is inversely proportional to ) .this fact was unfortunately overlooked in and , at a crucial point in their proofs , the lipschitz constant was assumed to be a fixed constant that does not depend on .this flaw unfortunately affects several results in .it also affects the results of , since the later paper uses the reachability results of . asa consequence of this error , the best upper bound which currently follows from the results in is a pspace upper bound for the decision and approximation problems for the value of finite - state concurrent stochastic reachability games as well as for finite - state concurrent stochastic parity games .( see the erratum note for on k. chatterjee s web page , as well as his ph.d . thesis . )it is entirely plausible that these results can be repaired and that approximating the value of finite - state concurrent reachability games to within a given additive error can in the future be shown to be in np conp , but the flaw in the proof given in is fundamental and does not appear to be easy to fix . on the other hand , for the quantitative decision problem for finite csgs ( as opposed to the approximation problem ) , and even the qualitative decision problem for 1-rcsgs , the situation is different .we show here that the quantitative decision problem for finite csgs , as well as the qualitative decision problem for 1-rcsgs , are both as hard as the square - root sum problem , for which containment even in np is a long standing open problem .thus our pspace upper bounds here , even for the qualitative termination problem for 1-rcsgs , can not be improved to np without a major breakthrough .unlike for 1-rcsgs , the qualitative termination problem for finite csgs is known to be decidable in p - time ( ) .we note that in recent work allender et .al . shown that the square - root sum problem is in ( the 4th level of ) the `` counting hierarchy '' ch , which is inside pspace , but it remains a major open problem to bring this complexity down to np .the rest of the paper is organized as follows . in section 2we present the rcsg model , define the problems that we will study , and give some basic properties . in section 3we give a system of equations that characterizes the desired probabilities , and use them to show that the problems are in pspace . in section 4 we prove the existence of optimal randomized stackless and memoryless strategies , and we present a strategy improvement method . 
finally in section 5we present reductions from the square root sum problem to the quantitative termination problem for finite csgs , and from the latter to the qualitative problem for recursive csgs .we have two players , player 1 and player 2 .let and be finite sets constituting the _ move alphabet _ of players 1 and 2 , respectively .formally , a _ recursive concurrent stochastic game ( rcsg ) _ is a tuple , where each _ component _ consists of : 1 . a finite set of _ nodes _ , with a distinguished subset of _ entry _ nodes and a ( disjoint ) subset of _ exit _ nodes .2 . a finite set of _ boxes _ , and a mapping that assigns to every box ( the index of ) a component . to each box , we associate a set of _ call ports _ , , and a set of _ return ports _ , , , and let be the set of all nodes , call ports and return ports ; we refer to these as the _ vertices _ of component .3 . a mapping that assigns to every vertex a type describing how the next transition is chosen : if it is chosen probabilistically and if it is determined by moves of the two players .vertices have no outgoing transitions ; for them we let .4 . a transition relation , where for each tuple , the source , the destination , where if then is a real number ] , is ?is ? + the _ approximate _ version : approximate to within desired precision .obviously , the qualitative termination problem is a special case of the quantitative problem , setting .as mentioned , for multi - exit rcsgs these are all undecidable .thus we focus on _ single - exit _ rcsgs ( _ 1-rcsgs _ ) , where every component has one exit .since for 1-rcsgs it is always clear which exit we wish to terminate at starting at vertex ( there is only one exit in s component ) , we abbreviate , etc . , as , etc . , and we likewise abbreviate other subscripts . a different `` qualitative '' problem is to ask whether ? as we will show in proposition [ prop : zeros ] , this is an easy problem : deciding whether for a vertex in a 1-rcsg can be done in polynomial time , and only depends on the transition structure of the 1-rcsg , not on the specific probabilities .as mentioned in the introduction , it is known that for concurrent stochastic games , probabilistic nodes do not add any power , and can in effect be `` simulated '' by concurrent nodes alone ( this fact was communicated to us by k. chatterjee ) . the same fact is true for 1-rcsgs .specifically , the following holds : there is a p - time reduction , which , given a finite csg ( or a 1-rcsg ) , , computes a finite concurrent game ( or 1-rcg , respectively ) , without any probabilistic vertices , such that the value of the game is exactly the same as the value of the game .first , suppose for now that in all probabilistic transitions have probability .in other words , suppose that for a probabilistic vertex ( which is not an exit or a call port ) in an 1-rcsg , we have two transitions and . in the new game , change to a play vertex , i.e. , let , and let , and replace the probabilistic transitions out of with the following 4 transitions : , , and .do this for all probabilistic vertices in , thus obtaining which contains no probabilistic vertices .now , consider any strategy for player 1 in the original game , and a strategy in the new game that is consistent with , i.e. 
for each history ending at an original play vertex has the same distribution as ( and for the other histories ending at probabilistic vertices it has an arbitrary distribution ) .for any strategy for player 2 in the game , consider the strategy , , for player 2 in , which is defined as follows : whenever the play reaches a probabilistic vertex of ( in any context and with any history ) plays and with 1/2 probability each .at all non - probabilistic vertices of , plays exactly as ( and it may use the history , etc . ) . this way , no matter what player 1 does , whenever play reaches the vertex ( in any context ) the play will move from to and to with probability each .thus for any vertex , the value in the game is the same as the value in the game .so the optimal payoff value for player 1 in the game starting at any vertex is not greater in than in .a completely symmetric argument shows that for player 2 the optimal payoff value starting at is not greater in than in .thus , the value of the game starting at is the same in both games .we can now generalize this to arbitrary rational probabilities on transitions , instead of just probability , by using a basic trick to encode arbitrary finite probability distributions using a polynomial - sized finite markov chain all of whose transitions have probability .namely , suppose goes to with probability and to with probability , where , are integers with bits ( we can write both as -bit numbers , by adding leading 0 s to if necessary so that it has length exactly , same as ) . flip ( at most ) coins .view this as generating a bit binary number .if the number that comes out is ( i.e. ) , then go to , if between and ( i.e. , ) then go to , if go back to the start , . a naive way to do this would require exponentially many states in .but we only need at most states to encode this if we do nt necessarily flip all coins but rather do the transition to or , as soon as the outcome is clear from the coin flips .that is , if the sequence formed by the initial sequence of coin flips so far differs from both the prefixes of and of the same length , then we do the transition : if transition to , if transition to , and if then transition to .thus , we only need to remember the number of coins flipped so far , and if is greater than the length of the common prefix of and then we need to remember also whether the coin flips so far agree with or with . clearly , a simple generalization of this argument works for generating arbitrary finite rational probability distributions , such that .if is a -bit integer , then the number of new states needed is at most , i.e. linear in the encoding length of the rationals .in ( ) we defined a monotone system of nonlinear min- & -max equations for 1-rssgs ( i.e. the case of simple games ) , and showed that its _ least fixed point _solution yields the desired probabilities . herewe generalize these to nonlinear minimax systems for concurrent games , 1-rcsgs .let us use a variable for each unknown , and let be the vector of all , .the system has one equation of the form for each vertex .suppose that is in component with ( unique ) exit .there are 4 cases based on the `` _ type _ '' of .1 . : . in this case : . : and .then the equation is ( if has no outgoing transitions , this equation is by definition . ) : is a call port . the equation is , where is the unique exit of .4 . .then the equation is , where the right - hand side is defined as follows . 
given a value vector , and a play vertex ,consider the zero - sum matrix game given by matrix , whose rows are indexed by player 1 s moves from node , and whose columns are indexed by player 2 s moves .the payoff to player 1 under the pair of deterministic moves , and , is given by , where .let be the value of this zero - sum matrix game .by von neumann s minimax theorem , the value and optimal mixed strategies exist , and they can be obtained by solving a linear program with coefficients given by the s . in vector notation , we denote the system by . given 1-exit rcsg , we can easily construct this system .note that the operator is _ monotone _ : for , if then .this follows because for two game matrices and of the same dimensions , if ( i.e. , for all and ) , then .note that by definition of , for , .we now construct the system of nonlinear minimax functional equations , , associated with the 1-rcsg we encountered in figure [ fig : example_1rcsg ] ( see example [ example : rcsg ] ) .we shall need one variable for every _ vertex _ of that 1-rcsg , to represent the value of the termination game starting at that vertex , and we will need one equation for each such variable .thus , the variables we need are , , , .the equations are as follows : \right)\cr x_{u_4 } & = { \mbox{val}}\left ( \left [ \begin{array}{cc } x_{(b_2,s ) } & x_t \end{array } \right ] \right ) } \ ] ] we now identify a particular solution to , called the _ least fixed point _( ) solution , which gives precisely the termination game values .define , and define , for .let denote the -vector ( using the same indexing as used for ) .for , let denote , similarly , the -vector .[ lfp - char - thm ] let be the system associated with 1-rcsg .then , and for all , if , then ( i.e. , is the _ least fixed point _ , of ) . moreover , , i.e. , the `` value iteration '' sequence converges monotonically to the lfp , .we first prove that .suppose . the equations for vertices of types , and can be used to define precisely the values in terms of other values .thus , the only possibility is that for some vertex of . in other words , .suppose . to see that this ca nt happen, we construct a strategy for player 1 that achieves better . at node ,let player 1 s strategy play in one step its optimal randomized minimax strategy in the game ( which exists according to the minimax theorem ) .choose such that .after the first step , at any vertex player 1 s strategy will play in such a way that achieves a value ( i.e , an -optimal strategy in the rest of the game , which must exist because the game is determined ) .let be an -vector every entry of which is .now , the matrix game is just an additive translation of the matrix game , and thus it has precisely the same -optimal strategies as the matrix game , and moreover .thus , by playing strategy , player 1 guarantees a value which is , which is a contradiction .thus .a completely analogous argument works for player 2 , and shows that .thus , and hence .next , we prove that if is any vector such that , then .let be the randomized stackless and memoryless strategy for player 2 that always picks , at any state , for play vertex , a mixed 1-step strategy which is an optimal strategy in the matrix game .( again , the existence of such a strategy is guaranteed by the minimax theorem . ) for all strategies of player 1 , and for all , . by induction .the base case is trivial. 1 . .if is an exit , then for all , clearly .2 . .let be the strategy defined by for all .then , 3 . . 
in this case , , and , where is the unique exit node of .now , by the inductive assumption , for all . moreover , since , .hence , using these inequalities and substituting , we get 4 . : in this case , starting at , whatever player 1 s strategy is , it has the property that . by the inductive hypothesis , so we are done by induction and by the monotonicity of .now , by the lemma , .this holds for any strategy .therefore , , for every vertex .thus , by the determinacy of rcsg games , we have established that , for all vertices . in other words , .the fact that follows from a simple tarski - knaster argument . for the system of equations given in example [ example : equations ] , associated with the 1-rcsg given in example [ example : rcsg ] , fairly easy calculations using the equations show that the least fixed point of the system ( and thus the game values , starting at the different vertices ) is as follows : ; ; ; ; and .in this case the values turn out to be rational and are simple to compute , but in general the values may be irrational and difficult to compute , and even if they are rational they may require exponentially many bits to represent ( in standard notation , e.g. , via reduced numerator and denominator given in binary ) in terms of the size of the input 1-rcsg or equation system. furthermore , in this game there are pure optimal ( stackless and memoryless ) strategies for both players .specifically , the strategy for player 1 ( maximizer ) that always plays l from nodes is optimal , and the strategy for player 2 that always player l from nodes and is optimal . in general for 1-rcsgs , we show randomized stackless and memoryless -optimal and optimal strategies do exist for players 1 and 2 , respectively .however , for player 1 only -optimal strategies may exist , and although optimal strategies do exist for player 2 they may require randomization using irrational probabilities .this is the case even for finite - state concurrent games .we can use the system of equations to establish the following upper bound for computing the value of a 1-rcsg termination game : the qualitative and quantitative termination problems for 1-exit rcsgs can be solved in pspace .that is , given a 1-exit rcsg , vertex and a rational probability , there is a pspace algorithm to decide whether ( or , or , etc . ) .the running time is where is the number of variables in .we can also approximate the vector of values to within a specified number of bits of precision ( given in unary ) , in pspace and in time .[ thm : complexity ] using the system , we can express the condition by a sentence in the existential theory of the reals as follows : note that the sentence is true , i.e. there exists a vector that satisfies the constraints of the above sentence if and only if the least fixed point satisfies them .the constraints for vertices of type 1 , 2 , and 3 ( exit , probabilistic vertex and call port ) are clearly polynomial equations , as they should be in a sentence of the existential theory of the reals .we only need to show how to express equations of the form in the existential theory of reals .we can then appeal to well known results for deciding that theory ( ) .but this is a standard fact in game theory ( see , e.g. 
, where it is used for finite csgs ) .the minimax theorem and its lp encoding allow the predicate `` '' to be expressed as an existential formula in the theory of reals with free variables and , such that for every , there exists a unique ( the game value ) satisfying .specifically , the formula includes , besides the free variables , existentially quantified variables , and for the probabilities of the moves of the two players , and the conjunction of the following constraints ( recall that each entry of the matrix is a variable where is the vertex such that ) 1 . for all ; ; 2 . for all ; ; 3 . for all ; 4 . for all . to approximate the vector of game values within given precision we can do binary search using queries of the form for all vertices .determining the vertices for which the value is 0 , is easier and can be done in polynomial time , as in the case of the turn - based 1-rssgs . given a 1-rcsg we can compute in polynomial time the set of vertices such that .this set depends only on the structure of the given 1-rcsg and not on the actual values of the transition probabilities .[ prop : zeros ] from the system of fixed point equations we have the following : ( 1 ) all exit nodes are not in ; ( 2 ) a probabilistic node is in if and only if all its ( immediate ) successors are in ; ( 3 ) the call port of a box is in if and only if the entry node of the corresponding component is in or the return port is in ; ( 4 ) a play node is in if and only if player 2 has a move such that for all moves of player 1 , the next node , i.e. the ( unique ) node such that , is in .only the last case of a play node needs an explanation .if player 2 has such a move , then clearly the corresponding column of the game matrix has all the entries 0 , and the value of the game ( i.e. , ) is 0 .conversely , if every column of has a nonzero entry , then the value of the game with this matrix is positive because for example player 1 can give equal probability to all his moves .thus , in effect , as far as computing the vertices with zero value is concerned , we can fix the strategy of player 1 at each play vertex to play at all times all legal moves with equal probability to get a 1-rmdp ; a vertex has nonzero value in the given 1-rcsg iff it has nonzero value in the 1-rmdp . the algorithm to compute the set of vertices with 0value is similar to the case of 1-rssgs .initialize to , the set of non - exit vertices .repeat the following until there is no change : 1 .if there is a probabilistic node that has a successor not in , then remove from .2 . if there is a call port such that both the entry node of the corresponding component and the return port of the box are not in , then remove from .3 . if there is a play node such that for every move of player 2 there is a move of player 1 such that the next node from under is not in , then remove from .there are at most iterations and at the end is the set of vertices such that .the proof of theorem 1 implies the following : [ cor - min - player ] in every 1-rcsg termination game , player 2 ( the minimizer ) has an optimal r - sm strategy .consider the strategy in the proof of theorem [ lfp - char - thm ] , chosen not for just any fixed point , but for itself .that strategy is r - sm and is optimal .player 1 does not have optimal r - sm strategies , not even in finite concurrent stochastic games ( see , e.g. 
, ) .we next establish that it does have finite r - sm _-optimal strategies _ , meaning that it has , for every , a r - sm strategy that guarantees a value of at least , starting from every vertex in the termination game .we say that a game is _ r - sm - determined _ if , letting and denote the set of r - sm strategies for players 1 and 2 , respectively , we have . 1 . ( strategy improvement )starting at any r - sm strategy for player 1 , via local strategy improvement steps at individual vertices , we can derive a series of r - sm strategies , such that for all , there exists such that for all , is an -optimal strategy for player 1 starting at any vertex , i.e. , for all vertices .+ each strategy improvement step involves solving the quantitative termination problem for a corresponding 1-rmdp .thus , for classes where this problem is known to be in p - time ( such as linearly - recursive 1-rmdps , ) , strategy improvement steps can be carried out in polynomial time .player 1 has -optimal r - sm strategies , for all , in 1-rcsg termination games1-rcsg termination games are r - sm - determined .note that ( 2 . ) follows immediately from ( 1 . ) , and ( 3 . )follows because by corollary [ cor - min - player ] , player 2 has an optimal r - sm strategy and thus + .let be any r - sm strategy for player 1 .consider .first , let us note that if then .this is so because , by theorem [ lfp - char - thm ] , , and on the other hand , is just one strategy for player 1 , and for every vertex , .next we claim that , for all vertices , satisfies its equation in . in other words , . to see this, note that for vertices , no choice of either player is involved , thus the equation holds by definition of .thus , the only equations that may fail are those for , of the form .we need the following . for any r - sm strategy for player 1 , and for any , .we are claiming that .the inequality follows because a strategy for player 2 can in the first step starting at vertex play its optimal strategy in the matrix game , and thereafter , depending on which vertex is the immediate successor of in the play , the strategy can play `` optimally '' to force at most the value . now , suppose that for some , . thus by the lemma .consider a revised r - sm strategy for player 1 , , which is identical to , except that locally at vertex the strategy is changed so that , where is an optimal mixed minimax strategy for player 1 in the matrix game .we will show that switching from to will improve player 1 s payoff at vertex , and will not reduce its payoff at any other vertex .consider a parameterized 1-rcsg , , which is identical to , except that is a randomizing vertex , all edges out of vertex are removed , and replaced by a single edge labeled by probability variable to the exit of the same component , and an edge with remaining probability to a dead vertex . fixingthe value determines an 1-rcsg , .note that if we restrict the r - sm strategies or to all vertices other than , then they both define the same r - sm strategy for the 1-rcsg . for each vertex and strategy of player 2 , define to be the probability of eventually terminating starting from in the markov chain .let . 
recall that defines a probability distribution on the actions available to player 1 at vertex .thus is the probability of action .let be any action of player 2 for the 1-step zero - sum game with game matrix .let denote the vertex such that .let .[ lem : pow - series ] fix the vertex .let be any function .the following properties hold : 1 .if at some point ] , then for all .first , we prove this for , for some vertex .note that , once player 1 picks a r - sm strategy , a 1-rcsg becomes a 1-rmdp . by a result of ,player 2 has an optimal deterministic sm response strategy .furthermore , there is such a strategy that is optimal regardless of the starting vertex .thus , for any value of , player 2 has an optimal deterministic sm strategy , such that for any start vertex , we have .let , and let be the ( finite ) set of deterministic sm strategies of player 2 .then .now , note that the function is the probability of reaching an exit in an rmc starting from a particular vertex .thus , by , for a polynomial system with non - negative coefficients , but with the additional feature that the variable appears as one of the coefficients . since this limit can be described by a power series in the variable with non - negative coefficients , has the following properties : it is a continuous , differentiable , and non - decreasing function of ] , ] , , then for all , . to see this , note that if and , then for all , , which contradicts the fact that .thus , and since is non - decreasing , it follows that for all . since , we also have for all .similarly , if for some , then for all . to see this , note that if for some , , , then since is non - decreasing and , it must be the case that .but then , which is a contradiction .it follows that has the same properties , namely : if at some point ] . on the other hand ,if at ] .the term will come out from under the square root , as , so we care only about the expression in the brackets , which is so , where and .[ thm : csg - reduction ] there is a p - time reduction from the quantitative termination ( decision ) problem for finite csgs to the qualitative termination problem for 1-rcsgs .consider the 1-rmc depicted in figure [ fig : example_rmc_param ] .we assume . as shown in ( , theorem 3 ) , in this 1-rmc the probability of termination starting at is if and only if .now , given a finite csg , , and a vertex of , do the following : first `` clean up '' by removing all nodes where the min player ( player 2 ) has a strategy to achieve probability 0 .we can do this in polynomial time as follows .note that the only way player 2 can force a probability 0 of termination is if it has a strategy such that , for all strategies of player 1 , there is no path in the resulting markov chain from the start vertex to the terminal node . butthis can only happen if , ignoring probabilities , player 2 can play in such a way as to avoid the terminal vertex .this can be checked easily in polynomial time .the revised csg will have two designated terminal nodes , the old terminal node , labeled `` 1 '' , and another terminal node labeled `` 0 '' . from every node of in the revised csg which does not carry full probability on its outedges , we direct all the `` residual '' probability to `` 0 '' , i.e. 
, we add an edge from to `` 0 '' with probability , where the sum is over all remaining nodes is the csg .let be a value that is strictly less than the least probability , over all vertices , under any strategy for player 2 , of reaching the terminal node .obviously such an exists in the revised csg , because by corollary [ cor - min - player ] ( specialized to the case of finite csgs ) player 2 has an optimal randomized s&m strategy .fixing that strategy , player 1 can force termination from vertex with positive probability .we take .( we do not need to compute ; we only need its existence for the correctness proof of the reduction . ) in the resulting finite csg , we know that if player 1 plays -optimally ( which it can do with randomized s&m strategies ) , and player 2 plays arbitrarily , there is no bottom scc in the resulting finite markov chain other than the two designated terminating nodes `` 0 '' and `` 1 '' . in other words , all the probability exits the system , as long as the maximizing player plays -optimally .now , take the remaining finite csg , call it .just put a copy of at the entry of the component of the 1-rmc in figure [ fig : example_rmc_param ] , identifying the entry with the initial node , , of .take every transition that is directed into the terminal node `` 1 '' of g , and instead direct it to the exit of the component .next , take every edge that is directed into the terminal `` 0 '' node and direct it to the first call port , of the left box .both boxes map to the unique component . call this 1-rcsg .we now claim that the value in the finite csg for terminating at the terminal `` 1 '' iff the value for terminating in the resulting 1-rcsg , .the reason is clear : after cleaning up the csg , we know that under an -optimal strategy for the maximizer for reaching `` 1 '' , all the probability exits either at `` 1 '' or at `` 0 '' .we also know that the supremum value that the maximizing player can attain will have value 1 iff the supremum probability it can attain for going directly to the exit of the component in is , but this is precisely the supremum probability that maximizer can attain for going to `` 1 '' in .lastly , note that the fact that the quantitative probability was taken to be for the finite csg is without loss of generality .given a finite csg and a rational probability , , it is easy to efficiently construct another finite csg such that the termination probability for is iff the termination probability for is .we have studied recursive concurrent stochastic games ( rcsgs ) , and we have shown that for 1-exit rcsgs with the termination objective we can decide both quantitative and qualitative problems associated with computing their values in pspace , using decision procedures for the existential theory of reals , whereas any substantial improvement ( even to np ) of this complexity , even for their qualitative problem , would resolve a long standing open problem in exact numerical computation , namely the square - root sum problem .furthermore , we have shown that the quantitative decision problem for finite - state concurrent stochastic games is also at least as hard as the square - root sum problem .an important open question is whether approximation of the game values , to within a desired additive error , for both finite - state concurrent games and for 1-rcsgs , can be done more efficiently . 
our lower bounds ( with respect to square - root sum ) do not address the approximation question , and it still remains open whether ( a suitably formulated gap decision problem associated with ) approximating the value of even finite - state csgs , to within a given additive error , is in np . in , we showed that model checking linear - time ( -regular or ltl ) properties for 1-rmdps ( and thus also for 1-rssgs ) is undecidable , and that even the qualitative or approximate versions of such linear - time model checking questions remain undecidable . specifically , for any , given as input a 1-rmdp and an ltl property , , it is undecidable to determine whether the optimal probability with which the controller can force ( using its strategy ) the executions of the 1-rmdp to satisfy , is probability , or is at most probability , even when we are guaranteed that the input satisfies one of these two cases . of course these undecidability results extend to the more general 1-rcsgs . on the other hand , building on our polynomial time algorithms for the qualitative termination problem for 1-rmdps in , brázdil et al . showed decidability ( in p - time ) for the qualitative problem of deciding whether there exists a strategy under which a given target vertex ( which may not be an exit ) of a 1-rmdp is reached in _ any _ calling context ( i.e. , under any call stack ) almost surely ( i.e. , with probability 1 ) . they then used this decidability result to show that the qualitative model checking problem for 1-rmdps against a qualitative fragment of the branching time probabilistic temporal logic pctl is decidable . in the setting of 1-rcsgs ( and even 1-rssgs ) , it remains an open problem whether the qualitative problem of reachability of a vertex ( in any calling context ) is decidable . moreover , it should be noted that even for 1-rmdps , the problem of deciding whether the _ value _ of the reachability game is 1 is not known to be decidable . this is because although the result of shows that it is decidable whether there exists a strategy that achieves probability 1 for reaching a desired vertex , there may not exist any optimal strategy for this reachability problem ; in other words , the value may be 1 but it may only be attained as the supremum value achieved over all strategies . * acknowledgement * we thank krishnendu chatterjee for helpful discussions clarifying several results about finite csgs obtained by himself and others . this work was partially supported by nsf grants ccf-04 - 30946 and ccf-0728736 .
|
we study recursive concurrent stochastic games ( rcsgs ) , extending our recent analysis of recursive simple stochastic games to a concurrent setting where the two players choose moves simultaneously and independently at each state . for multi - exit games , our earlier work already showed undecidability for basic questions like termination , thus we focus on the important case of single - exit rcsgs ( 1-rcsgs ) . we first characterize the value of a 1-rcsg termination game as the least fixed point solution of a system of nonlinear minimax functional equations , and use it to show pspace decidability for the quantitative termination problem . we then give a strategy improvement technique , which we use to show that player 1 ( maximizer ) has -optimal randomized stackless & memoryless ( r - sm ) strategies for all , while player 2 ( minimizer ) has optimal r - sm strategies . thus , such games are r - sm - determined . these results mirror and generalize in a strong sense the randomized memoryless determinacy results for finite stochastic games , and extend the classic hoffman - karp strategy improvement approach from the finite to an infinite state setting . the proofs in our infinite - state setting are very different however , relying on subtle analytic properties of certain power series that arise from studying 1-rcsgs . we show that our upper bounds , even for qualitative ( probability 1 ) termination , can not be improved , even to np , without a major breakthrough , by giving two reductions : first a p - time reduction from the long - standing square - root sum problem to the quantitative termination decision problem for _ finite _ concurrent stochastic games , and then a p - time reduction from the latter problem to the qualitative termination problem for 1-rcsgs .
|
action potentials are stereotyped all - or - nothing events , meaning that their amplitude is not considered to transmit any information and only the exact time of occurrence matters . this view suggests modeling neurons responses in the mathematical framework of point processes . an observation is a sequence of spike times and their stochastic properties are captured by a single function , the conditional intensity . for point processes on the time line , several approaches for evaluating goodness - of - fit have been proposed . the most popular in the neuroscientific community has been a test based on the time - rescaling theorem . in practice , neural data is binned such that a spike train is represented as a sequence of spike counts per time bin . specifically , generalized linear models ( glms ) are built on this representation . such discretized models of time series have mostly been seen as an approximation to continuous point processes and hence , the time - rescaling theorem was also applied to such models . here we ask the question whether the time - rescaling theorem can be translated to discrete time . we review the approximations necessary for the transition to discrete time and point out a procedure to create surrogate point processes even when these approximations do not hold ( section [ sec : methods ] ) . two novel tests based on two different operations on point processes are introduced : random thinning and random complementing . these ideas are applied to a series of examples ( section [ sec : results ] ) , followed by a discussion ( section [ sec : discussion ] ) . we characterize a neuron by its response in terms of trains of action potentials using the theory of _ point processes _ ( figures [ fig : spiketrainrepresentations]a and [ fig : spiketrainrepresentations]b ) . an observation consists of a list of times , each denoting the time point of one action potential . following a common notation , let . the process was simulated over a length of s and the intensity was discretized with ms . negative intensities were clipped to zero . a binary spike train was generated by calculating the probability of at least one spike in each time bin as and drawing samples from a bernoulli distribution with specified probabilities . for evaluating the different algorithms , wrong models for the intensity were created with jittered coefficients where indicates the strength of the deviation from the true model . for each jitter strength , spike trains were generated from the true model and was constructed using the wrong model ( figure [ fig : exinhomo]a ) . for any , the fraction of rejected models defines the sensitivity or test power . for , the fraction of accepted models defines the specificity which was controlled to be at for each test . all three methods ( rescaling , thinning , complementing ) show a specified type - i error of approximately 5% ( ) and progressively detect the wrong models . notably , the complementing and thinning procedures detect a departure from the correct model earlier than the classical rescaling ( figure [ fig : exinhomo]b ) . for comparison , the naive implementation of the rescaling transformation is also shown . the significance level for the ks test used for the naive time - rescaling was adjusted to achieve a 95% specificity . the adjustment was necessary due to the discretization bias ( see section [ sec : surrogate ] ) . for models with an intermediate jitter strength ( ) , roc curves were constructed .
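a minimal sketch of the simulation scheme used in this first example is given below : an intensity is evaluated on a grid of bins , converted to per - bin spike probabilities via 1 - exp ( -lambda * delta ) , sampled from a bernoulli distribution , and a ' wrong ' model is produced by jittering the coefficients . the band - limited intensity and all parameter values are illustrative assumptions , not those used for the figures .

```python
import numpy as np

rng = np.random.default_rng(0)

T, dt = 10.0, 0.001                       # duration (s) and bin width (s)
t = np.arange(0.0, T, dt)

def bandlimited_rate(coeffs, t, f_max=10.0, base=30.0):
    """illustrative band-limited intensity (Hz) built from a few sinusoids."""
    freqs = np.linspace(1.0, f_max, len(coeffs))
    lam = base + sum(c * np.sin(2 * np.pi * f * t) for c, f in zip(coeffs, freqs))
    return np.clip(lam, 0.0, None)        # negative intensities are clipped to zero

true_coeffs = rng.uniform(-10.0, 10.0, size=5)
lam_true = bandlimited_rate(true_coeffs, t)

# probability of at least one spike per bin, then bernoulli sampling
p_spike = 1.0 - np.exp(-lam_true * dt)
spikes = rng.random(t.size) < p_spike     # binary spike train

# 'wrong' model obtained by jittering the coefficients with strength eps
eps = 0.3
wrong_coeffs = true_coeffs * (1.0 + eps * rng.standard_normal(true_coeffs.size))
lam_wrong = bandlimited_rate(wrong_coeffs, t)

print(f"{spikes.sum()} spikes in {T:.0f} s; "
      f"true mean rate {lam_true.mean():.1f} Hz, wrong mean rate {lam_wrong.mean():.1f} Hz")
```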
here , for a given significance level , a pair of true and false positive rates can be calculated and plotted for each test ( taking repetitions using the true model and the model with jittered coefficients ) . it can be seen that especially for intermediate jitter strengths , complementing and thinning outperform time - rescaling ( figure [ fig : exinhomo]c ) , independent of the chosen significance level . in a second example , we consider renewal processes , i. e. inter - spike intervals are an i. i. d. sample from a specific probability distribution . in this case , the conditional intensity is given by where denotes the time of the last spike prior to time t. for this example , we chose the gamma distribution as it is commonly used to model real spike trains . the spike train was generated from a true model , following a gamma distribution with scale parameter and shape parameter : . wrong models were generated by scaling the shape and scale parameter by a factor of ( `` jitter '' ) while keeping the expected value of the distribution constant ( i. e. , ) ( figure [ fig : exgamma]a ) . for each jitter strength , data sets of length were generated from the true model and the wrong model and the tests were applied . the analysis of test power for each test and the roc curve analysis for an intermediate jitter strength reveal that time - rescaling is slightly superior to thinning and complementing ( figure [ fig : exgamma]b and c ) . the naive time - rescaling performs worst ( adjusted significance level for the ks test , ) . we simulate an inhomogeneous spike response model with escape noise using a bernoulli - glm . the spiking probability is modulated by an inhomogeneous rate . additionally , for each spike , a post - spike kernel is added to the process intensity . the rate function is modeled , as in the first example , as a band - limited function with hz and coefficients that were randomly drawn from a uniform distribution on the interval . the post - spike kernel is modeled as a sum of three exponential functions ( ms , ms and s ) with appropriate amplitudes so as to mimic a relative refractory period , a small rebound and a slow ( inhibitory ) adaptation . to construct the bernoulli - glm , the spiking probability per bin of length ms is with . a binary time series ( the spike train ) was generated for a duration of s. the jittered models were constructed by adding a jitter on the coefficients of the inhomogeneous rate modulation ( figure [ fig : exsrm]a ) . for each jitter strength , data sets were generated from the true model and the wrong model and the tests were applied . both thinning and complementing are able to detect smaller distortions than time - rescaling applied to either the surrogate or the discrete data ( figure [ fig : exsrm]b , adjusted significance level for the naive rescaling , ) . an roc curve analysis for an intermediate jitter strength ( ) supports this finding ( figure [ fig : exsrm]c ) . assessing goodness - of - fit for generalized linear models has mostly been done by applying the time - rescaling transformation that is defined for point processes , assuming a match between those approaches . when the per - bin probability of spiking can not be regarded as low , this approximation breaks down and creates a bias when applying the time - rescaling transformation . in a first step , we proposed a procedure to create surrogate point processes from discretized models , such as bernoulli- and poisson - glms , that do not exhibit this bias .
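the inhomogeneous spike response model used in the third example can be sketched as a bin - by - bin simulation in which every emitted spike adds a post - spike kernel ( a sum of three exponentials mimicking refractoriness , a small rebound and slow adaptation ) to the log - intensity of subsequent bins . the kernel amplitudes , time constants and the exponential link used below are illustrative assumptions ; only the overall structure follows the description above .

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.001                       # 1 ms bins
T = 20.0
n_bins = int(T / dt)
t = np.arange(n_bins) * dt

# slow inhomogeneous modulation (illustrative) and a post-spike kernel built
# from three exponentials: refractoriness, a small rebound, slow adaptation
rate = 20.0 * (1.0 + 0.5 * np.sin(2 * np.pi * 0.5 * t))          # Hz
tau = np.array([0.003, 0.030, 1.0])                               # time constants (s)
amp = np.array([-8.0, 1.5, -0.5])                                 # kernel weights
kern_len = int(3.0 / dt)
tk = np.arange(1, kern_len + 1) * dt
post_spike_kernel = (amp[:, None] * np.exp(-tk[None, :] / tau[:, None])).sum(0)

# simulate the bernoulli-glm bin by bin (the exponential link is an assumption;
# the original model's link function is not reproduced in the text above)
history = np.zeros(n_bins + kern_len)
spikes = np.zeros(n_bins, dtype=bool)
for k in range(n_bins):
    lam = rate[k] * np.exp(history[k])            # conditional intensity (Hz)
    if rng.random() < 1.0 - np.exp(-lam * dt):    # prob. of at least one spike in bin
        spikes[k] = True
        history[k + 1:k + 1 + kern_len] += post_spike_kernel

print(f"simulated {spikes.sum()} spikes, mean rate {spikes.sum()/T:.1f} Hz")
```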
throughout all the examples , the time - rescaling theorem applied to the surrogate point process was systematically better than applying the naive time - rescaling on the discrete data . since only the adjusted time - rescaling procedure allows one to reliably control the specificity of the test , it should be preferred over the classical time - rescaling in all cases where discretized models are used . we have presented two alternatives to an application of the time - rescaling theorem : for the first procedure , the observed spike train is thinned according to the value of the conditional intensity at the time of spikes . the resulting process is then a homogeneous poisson process with a rate that is equal to the lower bound on the conditional intensity . the second proposed method builds on the idea that an intensity function with an upper bound can be filled up to a homogeneous poisson process of rate by adding spike samples from the complementary process . the proposed tests work best if the lower and upper bounds are tight . however , in most practical cases , especially the lower bound will be prohibitively low to apply any statistical test on the thinned process . as a remedy , we proposed to consider only regions of for which the intensity exceeds a given threshold and repeat the thinning for different thresholds . this successfully overcomes the limitation that may have up to now prevented the use of the thinning algorithm as a goodness - of - fit measure for neural models . the three tests are complementary in the sense that they are sensitive to different deviations of the modeled and true intensity function . time - rescaling is only sensitive to the total integral of the intensity function between spikes , while thinning exclusively considers the intensity function at the time of spikes and is insensitive to its value at places where no spikes occurred . complementing is sensitive to the exact shape of regardless of where the spikes from the original observations are . for the examples of an inhomogeneous poisson process and the spike response model , thinning and complementing outperform the sensitivity of the simple time - rescaling procedure . they can detect deviations from the model that are only half as large as the ones necessary to alert the test based on time - rescaling . for modeling renewal processes , time - rescaling was slightly advantageous compared to the other two methods . this should not come as a surprise since the time - rescaling test is known to be sensitive to modeling the distribution of inter - spike intervals . apart from likelihood criteria , there exist few goodness - of - fit tools for neural models based on generalized linear models . with the proposed procedure for surrogate point processes , we bridge the gap between such discrete models and point processes . this enables the use of additional tests from this domain , such as thinning and complementing procedures . we expect these to be valuable contributions to the general practice of statistical evaluation in modeling single neurons as well as neural populations .
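to make the three procedures concrete , a compact python sketch of time - rescaling , thinning and complementing is given below . the thresholding of low - intensity regions discussed above is omitted , and the grid resolution , function names and the bounds b and c are illustrative assumptions rather than the implementation used for the figures .

```python
import numpy as np
from scipy import stats

def rescale(spike_times, intensity_fn, T, n_grid=200_000):
    """time-rescaling: map spike times through the cumulative intensity.
    under the true model the returned intervals are i.i.d. exponential(1)."""
    grid = np.linspace(0.0, T, n_grid + 1)
    cum = np.concatenate(([0.0], np.cumsum(intensity_fn(grid[1:]) * np.diff(grid))))
    taus = np.interp(spike_times, grid, cum)
    return np.diff(np.concatenate(([0.0], taus)))

def thin(spike_times, intensity_fn, B, rng):
    """keep each spike with probability B / lambda(t); under the true model the
    surviving spikes form a homogeneous poisson process of rate B (B <= lambda)."""
    keep = rng.random(spike_times.size) < B / intensity_fn(spike_times)
    return spike_times[keep]

def complement(spike_times, intensity_fn, C, T, rng):
    """add samples from the complementary process with intensity C - lambda(t);
    under the true model the union is a homogeneous poisson process of rate C."""
    n_cand = rng.poisson(C * T)
    cand = np.sort(rng.uniform(0.0, T, n_cand))
    accept = rng.random(n_cand) < (C - intensity_fn(cand)) / C
    return np.sort(np.concatenate((spike_times, cand[accept])))

def ks_pvalue_exponential(intervals, rate=1.0):
    """kolmogorov-smirnov test of the intervals against exponential(rate)."""
    return stats.kstest(intervals, "expon", args=(0.0, 1.0 / rate)).pvalue
```

under the modeled conditional intensity , the rescaled intervals would be tested against a unit - rate exponential distribution , while the inter - spike intervals of the thinned and complemented trains would be tested against exponential distributions of rate b and c , respectively .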
|
generalized linear models ( glms ) are an increasingly popular framework for modeling neural spike trains . they have been linked to the theory of stochastic point processes and researchers have used this relation to assess goodness - of - fit using methods from point - process theory , e.g. the time - rescaling theorem . however , high neural firing rates or coarse discretization lead to a breakdown of the assumptions necessary for this connection . here , we show how goodness - of - fit tests from point - process theory can still be applied to glms by constructing equivalent surrogate point processes out of time - series observations . furthermore , two additional tests based on thinning and complementing point processes are introduced . they augment the instruments available for checking model adequacy of point processes as well as discretized models .
|
when landauer argued in 1961 that any physical realisation of erasure of information has a fundamental thermodynamic work cost he irrevocably linked thermodynamics and information theory .a practical consequence of this insight is that all computers must dissipate a minimal amount of heat in each irreversible computing step , a threshold that is becoming a concern with future computer chips entering atomic scales .the treatment of general _ quantum _ information processing tasks within the wider framework of quantum thermodynamics has only recently begun .theoretical breakthroughs include the characterisation of the efficiency of quantum thermal engines and the extension of widely used classical non - equilibrium fluctuation theorems to the quantum regime .a new thermodynamic resource theory has led to the discovery of a _ set _ of second laws that replaces the standard macroscopic second law for finite size systems .these results have substantially advanced our understanding of nanoscale thermodynamics , however putting a finger on what is genuinely `` quantum '' in quantum thermodynamics has remained a challenge .quantum mechanics differs from classical mechanics in at least three central aspects : the special nature of measurement , the possibility of a quantum system to be in a superposition and the existence of quantum correlations .the thermodynamic energy needed to perform a ( selective ) measurement has been investigated and the total work for a closed thermodynamic measurement cycle explored .the catalytic role of quantum superposition states when used in thermal operations has been uncovered and it has been shown that work can be drawn from quantum correlations in a thermodynamic setting , see fig .[ fig : overviewpic ] . in particular , del rio _ et al . _ showed that contrary to landauer s principle , it is possible to _ extract _ work while performing erasure of a system s state when the system is correlated to a memory .this can occur if and only if the initial correlations imply a negative conditional entropy , a uniquely quantum feature .the thermodynamic process does however now require operation on degrees of freedom external to the system , i.e. the memory s .* thermodynamic setting . *a system , depicted as a spin , interacts with a heat bath at temperature , with which it exchanges _ heat _ , and with controlled energy sources , illustrated as coil and weight , with which it exchanges _work_. work drawn from the system can be collected in a work storage system ( weight ) for future use ., scaledwidth=45.0% ] our motivation is here to shed light on the implications of performing a measurement on a quantum state that has coherences .we will consider this task in the thermodynamic setting of landauer s erasure , involving a heat bath at fixed temperature and operation on uncorrelated and identically prepared copies of the system ( i.i.d .this is of interest in the context of the quantum jarzynski equality , for example , and will also be central for experiments testing quantum thermodynamic predictions in the future . to tackle this question we define the information - theoretic `` projection '' for a given initial quantum state and a complete set of mutually orthogonal projectors .such state transformation can be seen as analogous to the state transfer of erasure , , to a blank state .physically , this projection can be interpreted as the result of an unread , or unselective , measurement of an observable that has eigenvector projectors . 
in an unselective measurementthe individual measurement outcomes are not recorded and only the statistics of outcomes is known . in the literaturethe implementation of unselective measurements is often not specified , although it is typically thought of as measuring individual outcomes , e.g. with a stern - gerlach experiment , see fig .[ fig : blochpicture]a , followed by mixing .the crux is that the information - theoretic projection can be implemented in many physical ways .the associated thermodynamic heat and work will differ depending on _ how _ the projection was done and we will refer to the various realisations as `` thermodynamic projection processes '' .one possibility is decohering the state in the so - called pointer basis , , a thermodynamic process where an environment removes coherences in an uncontrolled manner resulting in no associated work . in generalit is possible to implement the state transfer in a finely controlled fashion achieving optimal thermodynamic heat and work values .of particular importance in thermodynamics is the projection of the system s initial state onto the set of energy eigenstates of the system s hamiltonian with the energy eigenvalues . here the state s off - diagonals with respect to the energy eigenbasis are removed - a state transformation that is frequently employed in quantum thermodynamic derivations and referred to as `` dephasing '' or `` measuring the energy '' .our key observation is that there exists a thermodynamic projection process realising this transformation and allowing to draw from the quantum system a non - trivial _ optimal average work _ of here is the temperature of the heat bath with which the system is allowed to interact , see illustration fig .[ fig : overviewpic ] , is the boltzmann constant and is the von neumann entropy .crucially , this work is strictly positive for quantum states with coherences .extending the key observation to general projections one finds that optimal thermodynamic projection processes can be implemented that allow to draw an average work of ,\end{aligned}\ ] ] where an additional internal energy change term appears .the optimal work values stated in eqs .( [ eq : wmeasure ] ) and ( [ eq : wmeasuregeneral ] ) are valid for processes applied to classical and quantum states alike .while for a classical ensemble the entropy change , , will be zero this is not so in the general quantum situation , where initial non - diagonal quantum states result in a strictly positive entropy change .we note that while the optimal work values are in principle attainable , practical implementations may be suboptimal resulting in a reduced work gain or a higher work cost .the physical meaning of can be grasped by considering a lower bound on it , , see appendix [ app : entropybound ] . here is the dimension of the system and denotes the hilbert - schmidt norm .the first factor quantifies the distance of the initial state from the fully mixed state , while the second factor , , quantifies the angle between the diagonal basis of and the projection basis .these terms correspond to incoherent and coherent mixing contributions .the entropy change is non - trivially bounded only if the initial state is not an incoherent mixture with respect to that basis .the entropy bound is the largest for pure initial states whose basis is mutually unbiased with respect to . in this casethe optimal entropy change is .one may wonder where the work has gone to .there are two equivalent approaches to the accounting of work . 
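before turning to these two accounting conventions , eq . ( [ eq : wmeasure ] ) itself can be illustrated numerically with a short sketch : a qubit state with coherences is projected onto the energy eigenbasis , and the entropy increase fixes the optimal average work . the particular state , the bath temperature and the use of natural logarithms ( entropies in nats ) are illustrative choices .

```python
import numpy as np

k_B = 1.380649e-23            # J/K
T = 300.0                     # bath temperature (K), illustrative

def von_neumann_entropy(rho):
    """entropy in nats; numerically zero eigenvalues contribute nothing."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

# qubit state with coherences in the energy (sigma_z) eigenbasis
theta = np.pi / 5
psi = np.array([np.cos(theta), np.sin(theta)])
rho = np.outer(psi, psi)                       # pure state, S(rho) = 0

# projection onto the energy eigenbasis: off-diagonal elements are removed
eta = np.diag(np.diag(rho))

dS = von_neumann_entropy(eta) - von_neumann_entropy(rho)
print(f"S(rho) = {von_neumann_entropy(rho):.4f} nats, S(eta) = {von_neumann_entropy(eta):.4f} nats")
print(f"optimal average work k_B*T*dS = {k_B * T * dS:.3e} J per copy")
# dS (and hence the work) is strictly positive only if rho has coherences in the
# projection basis; for a state already diagonal in it the projection does nothing.
```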
in the present analysisthe focus is on the work that the system exchanges , as done in statistical physics . in this approachit is often not explicitly mentioned where the work goes to , but the only place work can go to are the externally controlled energy sources . similarly , the heat , i.e. the energy change minus the work , is established implicitly .for example , in the experimental realisation of classical landauer erasure with a colloidal silica bead trapped in an optical tweezer , the dissipated heat of erasure was calculated by knowing the applied tilting forces and integrating over the bead s dynamics .the second approach is to collect work in a separate work storage system , as illustrated by the weight in fig .[ fig : overviewpic ] and detailed in appendix [ app : workstorage ] . both the implicit and the explicit treatment of work are equivalent in the sense that the results obtained in one approach can be translated into the other . the thermodynamic assumptions made to prove eq .( [ eq : wmeasuregeneral ] ) are congruent with current literature ; specifically they are : ( t0 ) an isolated system is a system that only exchanges work and not heat ; ( t1 ) the validity of the _ first law _ relating the internal energy change , , of the system during a process to its average heat absorbed and work drawn , ; ( t2 ) the validity of the _ second law _ relating the system s entropy change to its average absorbed heat , , when interacting with a bath at temperature , with equality attainable by an optimal process ; ( t3 ) the thermodynamic entropy to be equal to the von neumann entropy in equilibrium as well as out - of - equilibrium , .in addition we make the following standard quantum mechanics assumptions : ( q0 ) an isolated system evolves unitarily ; ( q1 ) control of a quantum system includes its coherences . details of the proof are in appendix [ app : mainproof ] .we note that in the single - shot setting whole families of second laws apply that differ from ( t2 ) stated above .however , in the limit of infinitely many independent and identically prepared copies of the system these collapse to the standard second law , ( t2 ) , on the basis of which eq .( [ eq : wmeasuregeneral ] ) is derived . from the information - theory point of viewthe projections considered here constitute just one example of the larger class of trace - preserving completely positive ( tpcp ) maps characterising quantum dynamics .of course , all tpcp maps can be interpreted thermodynamically with the assumptions stated above , resulting in an optimal average work given by a free energy difference .erasure is another such map whose study forged the link between information theory and thermodynamics .the benefit of discussing `` projections '' here lies in the insight that this focus provides : it uncovers that coherences offer the potential to draw work making it a genuine and testable quantum thermodynamic feature .this work is non - trivial even when the thermodynamic process is operated on the system alone , not involving any side - information stored in other degrees of freedom .to gain a detailed understanding of thermodynamic projection processes that give the optimal work stated in eq .( [ eq : wmeasure ] ) we now detail one such process for the example of a spin-1/2 particle ( qubit ) , see illustration in fig . 
[ fig : blochpicture]b and [ fig : blochpicture]c as well as appendix [ app : spinexample ] .this process consists of a unitary evolution , a quasi - static evolution and a quench , and it is optimal for any finite - dimensional quantum system as shown in appendix [ app : general3step ] .an experimentalist , emmy , prepares the spin in a state ( w.l.o.g . )exposed to an external magnetic field which she controls .the hamiltonian associated with the system is where the energy difference between the aligned ground state , , and anti - aligned excited state , , is given by with the spins magnetic moment .importantly , in general the spin state s basis , , are superpositions with respect to the energy eigenbasis , and with . for the optimal implementation of the projection emmynow proceeds with the following three steps .firstly , she isolates the spin from the bath and modifies external magnetic fields to induce a unitary rotation , , of the spin into the energy basis . in nuclear magnetic resonance ( nmr ) and pulsed electron spin resonance ( esr ) experiments such rotations are routinely realised by radio - frequency and microwave pulses respectively , as evidenced by rabi oscillations .the power , duration and phase of such a pulse would be chosen to generate the spin - rotation along the green circle until the desired unitary is achieved . in the same step emmyadjusts the strength of the external b - field such that the spin state is boltzmann - distributed at temperature with respect to the energy gap of the hamiltonian at the end of the step , . in nmr or esr the b - field magnitude is tuned quickly on the timescale to achieve the desired energy gap . in the second step, emmy wants to implement a quasi - static evolution of the spin that is now thermal .she brings the spin in contact with the heat bath at temperature and quasi - statically adjusts the magnitude of the external b - field allowing the spin state to thermalise at all times . the final b - field , ,is chosen such that the final thermal state becomes . in esrthis step can be realised by changing the external b - field slowly on the timescale so that the spin continuously equilibrates with its environment .finally , emmy isolates the spin from the environment and quickly changes the b - field to its original magnitude while the state remains . during step 1 and 3the system was isolated and the average work drawn is thus just the average energy change . during step 2the average work is the equilibrium free energy difference between the final and initial thermal states at temperature , see appendix [ app : spinexample ] for details . 
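this bookkeeping can be verified numerically . the sketch below uses an illustrative mixed qubit state with coherences and an illustrative initial energy gap ( a pure initial state would require an infinite field in step 1 ) , measures all energies in units of k_b t , and checks that the three work contributions add up to k_b t ( s ( eta ) - s ( rho ) ) , cf . eq . ( [ eq : wmeasure ] ) .

```python
import numpy as np

kT = 1.0                                     # all energies in units of k_B*T

def S(rho):                                  # von Neumann entropy (nats)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def F(rho, H):                               # free energy <H> - T*S
    return float(np.trace(rho @ H).real) - kT * S(rho)

# illustrative initial state (coherent pure state mixed with white noise)
# and illustrative initial hamiltonian H = diag(0, E)
theta, mix, E = np.pi / 6, 0.8, 0.5 * kT
psi = np.array([np.cos(theta), np.sin(theta)])
rho = mix * np.outer(psi, psi) + (1 - mix) * np.eye(2) / 2
H = np.diag([0.0, E])

eta = np.diag(np.diag(rho))                  # target: rho without coherences
p = eta[0, 0]

# step 1: isolated unitary to the energy basis plus a quench to H1, chosen so
# that the rotated state rho1 = diag(l_max, l_min) is thermal at temperature T
lam = np.linalg.eigvalsh(rho)                # ascending: (l_min, l_max)
rho1 = np.diag([lam[1], lam[0]])
H1 = np.diag([0.0, kT * np.log(lam[1] / lam[0])])
W1 = float(np.trace(rho @ H).real - np.trace(rho1 @ H1).real)

# step 2: quasi-static isothermal change of the field from H1 to H2, where H2
# makes eta thermal; the work drawn is the free-energy difference
H2 = np.diag([0.0, kT * np.log(p / (1 - p))])
W2 = F(rho1, H1) - F(eta, H2)

# step 3: isolated quench back from H2 to the original hamiltonian H
W3 = float(np.trace(eta @ (H2 - H)).real)

print(f"W1 + W2 + W3         = {W1 + W2 + W3:.6f} kT")
print(f"kT*(S(eta) - S(rho)) = {S(eta) - S(rho):.6f} kT")
```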
in nmr / esr the work contributions drawn from the spin system are done on the external b - field and the microwave mode .this could be detected by measuring the stimulated emission of photons in the microwave mode or observing current changes induced by the spins dynamics .the overall thermodynamic process has now brought the spin from a quantum state with coherences , , into a state without coherences , , while keeping the average energy of the spin constant .the net work drawn during the three steps adds up to showing the attainability of the optimum stated in eq .( [ eq : wmeasure ] ) for the spin-1/2 example .we note that eq .( [ eq : wmeasure ] ) is also the maximal work that can be extracted from a _ qubit _ state under _ any _ transformation of the system that conserves its average energy , ] is the thermal state for the hamiltonian at temperature .we can now use a single - shot analysis for steps and that involve only states diagonal in the energy basis , giving a single - shot work contribution of , see appendix [ app : singleshot ] . here and are the min- and max - relative quantum entropies , respectively . taking the limit of copies for steps and and adding the average work contribution for the initial non - diagonal rotation , ] , while leaving the reduced state of the auxiliary system unchanged , i.e. = { { \rm tr}}_s[\eta^{sa}] ] is the unit vector in the blochsphere pointing from the origin to the state , see fig .[ fig : blochpicture]b .we assume without loss of generality that .if this was not the case , the labels and should be interchanged .the spin s initial hamiltonian is given by , where with are the rank-1 projectors onto the two energy eigenstates and .this hamiltonian arises when the spin is exposed to an external magnetic field .the energy separation of the aligned ground state , , and anti - aligned excited state , , is , where is the magnetic moment of the spin .a general initial state is not diagonal in the basis , in other words the spin s eigenstates are superpositions with respect to the energy eigenbasis , and with .the spin s blochvector , , is then _ not _ parallel to the b - field , .emmy wants to obtain the state where the coherences with respect to the energy basis have been removed , \,\pi^h_k = p\ \pi^h_0 + ( 1-p)\ \pi^h_1 , \end{split}\end{aligned}\ ] ] where ] is the unit vector in the blochsphere pointing from the origin to the state .since geometrically the mapping is a projection of onto the vertical axis in the blochsphere , the length of the final blochvector , , is shorter than the initial blochvector , .this shortening is associated with an entropy increase .when describing the process in the following we assume that in accordance with the illustration in fig .[ fig : blochpicture]b . at the end of this sectionwe come back to the case .emmy proceeds with three steps made up of quantum thermodynamic primitives with known work and heat contributions , fig .[ fig : blochpicture]b : in the first step , , emmy isolates the spin from the bath and rotates the b - field such that the variation of the field induces a unitary transformation of the spin into the energy eigenbasis , with unitary . the state after this step is the b - field after this step , , is chosen such that the new hamiltonian has eigenvalues , where is the boltzmann constant and is the temperature of the heat bath that emmy will use in the next step .this choice of the b - field makes the state a thermal state with respect to at temperature , i.e. 
with ] .physical constraints may make this process difficult to realise , for instance , pure initial states would require a b - field , , of infinite magnitude because thermal states at any finite temperature are only pure if the energy gap is infinite . in this casethere is a trade - off between the maximal magnitude the b - field can reach and the precision with which the process is carried out . in the followingwe assume that the maximal b - field is large enough to make the error in the precision negligibly small . in the second step , , emmy brings the spin in contact with the bath at temperature , not affecting the spin s state as it is already thermal .she then quasi - statically decreases the magnitude of the b - field , while keeping the system in contact with the bath at all times , such that the final hamiltonian is where the b - field is chosen such that where is the probability of measuring in the initial state , .the quasi - static evolution means that the system is thermalised at all times , arriving in the final state which is thermal with respect to where ] and - k_b\ , t\ , s(\rho_1) ] and likewise for .+ finally , in the third step , , emmy isolates the spin from the bath and changes the energy levels of the hamiltonian such that it becomes the initial hamiltonian again .this step is done quickly so that the state of the spin does not change . because the system is isolatedthe energy change in this step is entirely due to work ] .the overall average work drawn from the spin is + { { \rm tr}}[\rho\,h ] -{{\rm tr}}[\eta^h\,h{^ { ( 2 ) } } ] + k_bt\,s(\eta^h ) \\ & \quad \ , + { { \rm tr}}[\rho_1\ , h{^ { ( 1 ) } } ] -k_bt\ , s(\rho_1 ) -{{\rm tr}}[\eta^h\,(h - h{^ { ( 2 ) } } ) ] \nonumber \\ & = k_bt\ , ( s(\eta^h ) - s(\rho_1 ) ) \nonumber \\ & = k_bt\ , ( s(\eta^h ) - s(\rho ) ) \equiv { \langle w_{{\rm opt}}\rangle } , \nonumber \end{split } \end{aligned}\ ] ] showing the optimality of the three step process for the spin example , cf .( [ eq : wmeasure ] ) .the above example assumed .suppose now that the probability to find the final state in the ground state with respect to the hamiltonian was smaller than to find it in the excited state , i.e. .proceeding through the three steps described one finds that the mathematics is exactly the same .in particular , after step 2 is a thermal state with respect to at inverse temperature .the only difference occurs in the interpretation as for the hamiltonian the ground state is because is negative .this is feasible by making the b - field negative , thus swapping the ground and the excited state .consequently the analysis above and the resulting expression of the total extracted work remain the same .+ the work extracted in the individual steps of the thermodynamic projection process can be either positive or negative , depending on the initial state , the hamiltonian and the temperature of the heat bath .their sum , , is strictly positive whenever the initial state was not diagonal in the energy eigenbasis , a consequence of the entropy increase from to . on the other hand for classical states all diagonal in the energy basis the optimal work for such a projection is always zero .appendix [ app : general3step ] extends the optimality proof of the above three step process to the general finite - dimensional case . + * a note on optimal work extraction at constant average energy . 
*assume we are given an initial state and a non - degenerate hamiltonian for a quantum system .the goal is to find the maximal work that can be obtained in a thermodynamic process that involves a heat bath at temperature under the restriction that the average energy of the system after the process is the same as it was before the process , ] , it is well - known that at a fixed expectation value of an observable the gibbs states } ] , that gives the maximum here .the maximum entropy is then = - { { \rm tr } } [ \sigma_{\lambda^ * } ( - \lambda^ * h - \ln { { \rm tr}}[e^{- \lambda^ * h } ] ) ] \\ & = \lambda^ * \ ,u + \ln { { \rm tr}}[e^{- \lambda^ * h } ] \end{split}\end{aligned}\ ] ] and the maximum average work that can be extracted from at fixed average energy is then - s(\rho ) ) , \end{aligned}\ ] ] for the special case that the system is a qubit ( two - dimensional ) the optimum gibbs state for work extraction is identical to the projected state and the maximal work that can be drawn from a system starting in state , while keeping its average energy fixed , is in eq .( [ eq : wmeasure ] ) . to see thiswe expand and with ] , i.e. , so that has just the right energy = u ] are located on the plane which contains and is perpendicular to the --axis . on the other hand , in the bloch picturea state has higher entropy the closer it is to the center of the sphere .hence , the optimal final state when extracting work from while conserving the average energy of the system is the state projected to the --axis , i.e. .in the previous section it was stated that work can be drawn from a quantum system when undergoing a thermodynamic projection process .but where has the work gone to ? there are two approaches of accounting for work that are mirror images to each other .one approach focusses on the work that the system exchanges , as described above .here it is often not explicitly mentioned where the work goes to , but the only place it can go to are the externally controlled energy sources , see fig . [fig : overviewpic ] .another way of accounting is to explicitly introduce a work system to store the work drawn .one way of doing so in an average scenario is to introduce a ` suspended weight on a string ' , described by a quantum system , that could be raised or lowered to store work or draw work from it . specifically , the hamiltonian of the work storage system is defined as , representing the energy of a weight of mass in the gravitational field with acceleration at height .in addition , an explicit thermal bath is introduced consisting of a separate quantum system in a thermal ( or gibbs ) state .both , the explicit work storage system and the heat bath are illustrated in fig .[ fig : overviewpic ] . in the latter approachthe total system starts in a product state of system ( e.g. spin ) , bath , and weight , , which together undergo _ average energy conserving unitary evolution _ with : the assumption is that the total hamiltonian is the sum of local terms , .the average energy conservation constraint then reads = 0 ] .it is straightforward to generalise the proof of optimality from the two - dimensional spin-1/2 example to thermodynamic projection processes in dimension . againthe projectors map onto the energy eigenspaces of the hamiltonian , , where , , are the energy eigenvalues . a general initial state can be written as where are probabilities , , are rank-1 projectors on the corresponding eigenvectors , and . 
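for higher - dimensional systems the maximising gibbs state at fixed average energy can be found numerically by a one - dimensional search over the lagrange multiplier . the sketch below assumes the state is written in the eigenbasis of a diagonal hamiltonian and uses an illustrative three - level example ; the bracketing interval for the multiplier is likewise an assumption .

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-15]
    return float(-(p * np.log(p)).sum())

def gibbs_probs(energies, lam):
    w = np.exp(-lam * (energies - energies.min()))     # shift for numerical stability
    return w / w.sum()

def max_work_at_fixed_energy(rho, energies, kT=1.0):
    """maximal work kT*(S(sigma_lam*) - S(rho)) at fixed <H>, with lam* found
    by bisection so that the gibbs average energy matches that of rho."""
    U = float(energies @ np.real(np.diag(rho)))        # average energy to match
    lo, hi = -50.0, 50.0                               # assumed bracket for lam*
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gibbs_probs(energies, mid) @ energies > U:  # too much energy: increase lam
            lo = mid
        else:
            hi = mid
    p_star = gibbs_probs(energies, 0.5 * (lo + hi))
    return kT * (entropy(p_star) - entropy(np.linalg.eigvalsh(rho)))

# illustrative qutrit with coherences between the energy levels (0, 1, 2)
energies = np.array([0.0, 1.0, 2.0])
psi = np.array([2.0, 1.0, 1.0])
psi /= np.linalg.norm(psi)
rho = 0.7 * np.outer(psi, psi) + 0.3 * np.eye(3) / 3
print(f"maximal work at fixed <H>: {max_work_at_fixed_energy(rho, energies):.4f} kT")
```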
a unitary operation , , is now chosen such that it brings the initial configuration into the new diagonal and thermal configuration where and .the new energy eigenvalues , , are adjusted such that the probabilities are thermally distributed with respect to for the bath temperature .adjusting the hamiltonian eigenvalues while letting the state thermalise at all times now results in a isothermal quasi - static operation from to . herethe new energy eigenvalues , , are chosen to be thermal ( at ) for the state s probabilities which are given by ] and ] is the thermal equilibrium free energy for hamiltonian , and similarly , - k_b \ , t \ , s(\eta^h) ] , one obtains concluding the optimality proof of the process sequence .the entropy change during a projection with projectors can be lower bounded . in the following , } ] is the probability to obtain .as argued in appendix [ app : spinexample ] w.l.o.g .we can assume that .in the bloch representation one can write and . herewe used a different notation for the bloch vectors of , , and , , for readability .the pauli matrices are self - adjoint and fulfil = 2\delta_{ij} ] is the partition function .step of this sequence rotates the initial non - diagonal state to the diagonal state .as discussed , it can not be treated with the single - shot framework but it is possible to associate an average extracted work with this unitary process , ] and likewise for , where is the logarithm to base 2 .the quantities and as well as their regularized version , the standard quantum relative entropy , can be seen as different measures characterizing the distance between two states . when applied here , they measure the ` distance ' between the thermal state and another diagonal state in such a way that the operational meaning of this distanceis given by the work one has to invest or is able to extract when transforming one into the other .the derivation shows that in the asymptotic limit the optimal average work is recovered from the single - shot components .but it is important to realise that from eq .( [ eq : avgopt ] ) one can not conclude that the above single - shot process forming from is optimal .going via the thermal state is just one option which is particularly convenient in this case as the processes of maximal work extraction and work of formation from the thermal state have been treated in the single - shot scenario .it is an open question whether there are better single - shot protocols for general thermodynamic transformations , see fig .[ fig : singleshotmmt]b & [ fig : singleshotmmt]c .the introduction of `` catalysts '' in single - shot thermodynamics provides a promising avenue to establish bounds on the work that can be drawn from a state with coherences during a projection in the single - shot setting . after making publicour results on average work associated with removing coherences in thermodynamic projection processes very recently a paper appeared that derives the work that can be extracted when removing coherences in a single - shot setting . 
in this paper the previously mentioned framework describing the catalytic role of coherence in thermodynamics by berg is used together with insights from reference frames in quantum information theory .these results are in agreement with our findings and strengthen our conclusion that coherences are a fundamental feature distinguishing quantum from classical thermodynamics .a common route of deriving the quantum jarzynski equation is as follows .a quantum system is initialised in a thermal state for a given hamiltonian , with energy eigenvector projectors , at given inverse temperature . here is the initial free energy associated with the initial hamiltonian .the aim is to calculate the average exponentiated work , , that the quantum system will exchange when undergoing a unitary process that is generated by varying the hamiltonian in time , i.e. , from to a final .the final state after the unitary is the non - equilibrium state , see fig .[ fig : qujarzynski]a . to identify the work for an individual run of the experiment the energy of the system is measured at the beginning , by projecting into , and at the end , by projecting in the final energy basis .the ( extracted ) fluctuating work identified with each transition is the ( negative ) observed _ fluctuating energy difference _ of the system the average exponentiated work then becomes where are the transition probabilities for energy jumps starting in and ending in at time .these probabilities are given by \\ & = e^{-\beta ( e{^ { ( 0)}}_n - f{^ { ( 0)}}_t ) } \ , { { \rm tr}}[\,\pi{^ { ( \tau)}}_m \ , v \ , \pi{^ { ( 0)}}_n \ , v^{\dag}\ , ] , \end{split}\end{aligned}\ ] ] simplifying the exponentiated average work to .\end{aligned}\ ] ]the completeness of the projectors , , now finally results in the well - known quantum jarzynski work relation where is the difference of the equilibrium free energies corresponding to the final and initial hamiltonians , i.e. .similarly , the average work extracted from the system is the average energy difference between and + { { \rm tr}}[h{^ { ( 0)}}_n \ , \rho_0 ] \\ & = - u(\rho_\tau )+ u(\rho_0 ) \end{split}\end{aligned}\ ] ] where ] . the derivations of the average work , , as well as the average exponentiated work , i.e. the quantum jarzynski equality , , are based on eq .( [ eq : single - work ] ) which was made assuming that the initial and final state of the process that is being characterised are and .there is no question that the mathematical details of the derivations of the above relation are sound .experimentally , there is however a need to acquire knowledge of the fluctuating energy to quantify the work and this requires the implementation of the second measurement , see fig .[ fig : qujarzynski]b . only after the measurement has been made can theoretical predictions be tested .the measurement is an unavoidable non - unitary component of the overall experimental process .specifically , the ensemble state after the unitary , , is further altered by the measurement to result in the final state , i.e. it is the state with any coherences in the energy basis of removed .while the experimentally observed average energy difference is not affected by the measurement step , i.e. , the entropy difference does change , i.e. .this means that the system may absorb heat , , during the measurement step , indicated in fig .[ fig : qujarzynski]b .its actual value depends on _ how _ the measurement is conducted with the optimal heat positive , . 
since ( t1 ) this implies that in an experimental implementation of the jarzynski relation the work done by the system on average can be more than previously thought , with the optimal value being .in the special case that the average heat is zero it is possible ( although not necessary ) that eq .( [ eq : single - work ] ) , and thus the standard jarzynski expression , are correct .in particular this applies to classical measurements .we conclude that the suitability of identifying , and hence the validity of the quantum jarzynski work relation depends on the details of the physical process that implements the measurement .quantum work fluctuation relations that have only one measurement , instead of the two discussed above , offer a feasible route of measuring work fluctuations experimentally .instead of measuring separately the initial and final fluctuating energies , and , to establish their joint probabilities , this method acquires _ only _ knowledge of the joint probabilities by measuring energy differences directly .but also here is one final measurement , in general on a non - diagonal state , needed .similarly to erasure with a correlated memory one can consider projections on a system that is correlated with an ancilla the experimenter has access to . assuming a total hamiltonian , we denote the global initial state by and its marginals on and by ] , respectively .+ * a note on notation . * for claritywe employ a slightly different notation here .the roles of initial state and final state are the same as in the main text and the previous sections of the appendix .however , now the superscripts of the final state no longer denote the projection basis but the system for which describes the state .for instance , denotes the reduced state after the projection on system alone .the same holds for the superscript of the initial state , and , and the hamiltonians and .only the superscript of the mutually orthogonal rank-1 projectors acting on system is kept to indicate which basis is being projected in .+ for an initial global state of system and ancilla a local projection map on results in a new global state due to the properties of the projectors the marginal state on is unchanged , = \sum_k { { \rm tr}}_s[(\pi^{\cal p}_k\otimes\mathbbm{1}^a)\ , \rho^{sa } \ , ( \pi^{\cal p}_k\otimes\mathbbm{1}^a ) ] \\ & = \sum_k { { \rm tr}}_s[\rho^{sa } \ , ( \pi^{\cal p}_k\otimes\mathbbm{1}^a ) ] = { { \rm tr}}_s [ \rho^{sa } ] = \rho^a . \end{split}\end{aligned}\ ] ] the reduced state of the system becomes = \sum_k \pi^{\cal p}_k \ , \rho^{s } \, \pi^{\cal p}_k = \sum_k p_k \ , \pi^{\cal p}_k ] , and the conditional states on after the process are denoted ] are the probabilities to measure on and ] .furthermore , isometries do not change ( von neumann ) entropy and thus , .in addition , by construction of the marginals of the final state on and have the same entropy : .since ( see e.g. nielsen & chuang ) . thus where in the last equality we made use of the fact that is a classical - quantum state .going back to eq .( [ eq : deltassa ] ) and applying the the above lemma we see that in general - \delta u^{\cal p } \leq k_bt \ , s(\eta^s)- \delta u^{\cal p}$ ] , which proves that purifications on yield the maximally possible extracted work .janzing , d. , wocjan , p. , zeier , r. , geiss , r. & beth , t. thermodynamic cost of reliability and low temperatures : tightening landauer s principle and the second law .phys . _ * 39 * , 2717 ( 2000 ) .liphardt , j. , dumont , s. , smith , s.b . ,tinoco , i. 
& bustamante , c. equilibrium information from nonequilibrium measurements in an experimental test of jarzynski s equality . _ science _ * 296 * , 1832 ( 2002 ) . bérut , a. , arakelyan , a. , petrosyan , a. , ciliberto , s. , dillenschneider , r. & lutz , e. experimental verification of landauer s principle linking information and thermodynamics . _ nature _ * 483 * , 7388 ( 2012 ) . tomamichel , m. _ a framework for non - asymptotic quantum information theory _ . phd thesis , eth zurich ( 2012 ) . dupuis , f. , kraemer , l. , faist , p. , renes , j. m. & renner , r. xviith international congress on mathematical physics , 134 - 153 ( 2013 ) .
|
thermodynamics is a highly successful macroscopic theory widely used across the natural sciences and for the construction of everyday devices , from car engines and fridges to power plants and solar cells . with thermodynamics predating quantum theory , research now aims to uncover the thermodynamic laws that govern finite size systems which may in addition host quantum effects . here we identify information processing tasks , the so - called `` projections '' , that can only be formulated within the framework of quantum mechanics . we show that the physical realisation of such projections can come with a non - trivial thermodynamic work _ only _ for quantum states with coherences . this contrasts with information erasure , first investigated by landauer , for which a thermodynamic work cost applies for classical and quantum erasure alike . implications are far - reaching , adding a thermodynamic dimension to measurements performed in quantum thermodynamics experiments , and providing key input for the construction of a future quantum thermodynamic framework . repercussions are discussed for quantum work fluctuation relations and thermodynamic single - shot approaches .
|
complex networks have made an enormous impact on research in a number of disciplines .networks have revolutionized the study of social dynamics and human contact patterns , metabolic and protein interaction in a cell , ecological food webs , and technological systems such as the world wide web and airline transportation networks .seminal results include the small - world and scale - free nature of many real - world systems .one of the most important areas of network research has been the study of community structure .communities , sometimes called modules , clusters , or groups , are typically considered to be subsets of nodes that are densely connected among themselves while being sparsely connected to the rest of the network .networks containing such groups are said to possess modular structure .understanding this structure is crucial for a number of applications from link prediction and the flow of information to a better understanding of population geography .much effort has been focused on finding the best possible partitioning of a network into communities .typically , this is done by optimizing an objective function that measures the community structure of a given partition .many algorithmic approaches have been devised .most partition the entire network while some focus on local discovery of individual groups .overlapping community methods , where nodes may belong to more than one group , have recently attracted much interest . for a lengthy review of community methodssee .given the reliance on objective functions , it is important to understand how the intuitive notion of communities as internally dense , externally sparse groups is encoded in the objective function . some functions simply measure the density of links within each community , ignoring the topological features those links may display , while other functions rely upon those links forming many loops or triangles , for example .we show the importance of understanding these distinctions by revealing some surprising features of how communities are evaluated .in particular we show that the only requirement for strong communities , according to the most popular community measure , is a lack of external connections , that bottlenecks leading to isolated groups can make strong communities even when those groups are internally maximally sparse .this contradicts the notion of communities as being unusually densely interconnected groups of nodes .this paper is organized as follows . in sec .[ sec : measuringcomms ] we present several measures of community quality and discuss their different features and purposes . in sec .[ sec : modularitytrees ] we show analytically that trees and treelike graphs can possess partitions that display very high , often arbitrarily high values of modularity .this is our primary result . in sec .[ sec : realworldexamples ] we apply two successful community discovery algorithms to these trees and show that the discovered communities can have even higher modularities .we also study the community structure of a treelike network derived from genealogical data . in sec .[ sec : nullmodel ] we perform statistical tests on the various communities and find that most of the partitions we consider for trees are statistically significant .we finish with a discussion and conclusions in sec . 
[sec : conc ] .given a network , represented by a graph of nodes and links whose structure is encoded in an adjacency matrix , where if nodes and are connected and zero otherwise , we wish to determine to what extent possesses modular structure . to put the notion of a community or module onto a firm foundation ,objective functions have been introduced to quantify how `` good '' or `` strong '' a community or a partitioning into communities is .these objective functions are also often the goal of an optimization algorithm , where the algorithm attempts to find the community or communities that maximizes ( or minimizes ) the objective function . herewe briefly discuss three objective functions : subgraph conductance , modularity , and partition density . due to its popularity and wide usewe will focus primarily on modularity .the conductance of a subgraph is a measure of how ` isolated ' the subgraph is , in analogy with electrical conductance .subgraphs with many connections to the rest of the network will have high conductance , whereas a subgraph will have low conductance if it relies on a few links for external connectivity .for a given subgraph such that , one form of conductance is where if proposition is true and zero otherwise , is the sum of the degrees ( number of neighbors ) of all nodes in , and is the total number of links in .( the factor of in the denominator is sometimes dropped . ) in other words , subgraph conductance is the ratio between the number of links exiting the subgraph to the number of links within the subgraph . while low may appear to be a good indicator of community structure , we remark that it primarily measures isolation or `` bottleneckedness , '' meaning that , e.g. , a random walker moving in a subgraph with low conductance will have very few opportunities to exit the subgraph , whereas it would have many opportunities if the subgraph had high conductance .this is also true if the subgraph is a densely interconnected module .however , consider a large two - dimensional ( 2d ) periodic square lattice of size , .this graph has nodes and links and is generally considered to have no modular structure .the conductance of a subgraph created by cutting the lattice in half along the direction is .as the lattice grows , the conductance of this subgraph decreases , despite there being no modular structure .a key point lacking in earlier definitions of communities such as conductance is that they fail to quantify the statistical significance of the subgraph .it may be possible for a randomized null graph to contain subgraphs exhibiting comparable conductance , for example , and conductance alone does not capture this .modularity was introduced to account for this in an elegant way .it has become the most common community objective function and possesses a number of distinct advantages over previous approaches , such as not requiring the number of communities to be known in advance .however , it has some drawbacks as well .it is known to possess a _ resolution limit _ where it prefers communities of a certain size that depends only on the global size of the network and not on the intrinsic quality of those communities . meanwhile ,sparse , uncorrelated random graphs are expected not to possess modular structure , but fluctuations may lead to partitions with high modularity . 
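this last point is easy to demonstrate ; the sketch below ( with an illustrative size and mean degree ) greedily optimises modularity on a sparse erdos - renyi random graph and typically finds a partition with q well above zero even though no communities were built in .

```python
import networkx as nx

# sparse uncorrelated random graphs have no built-in communities, yet an
# optimised partition can still reach a sizeable modularity purely by chance
n, k_mean = 1000, 4.0
G = nx.gnp_random_graph(n, k_mean / (n - 1), seed=42)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # largest component

communities = nx.algorithms.community.greedy_modularity_communities(G)
Q = nx.algorithms.community.modularity(G, communities)
print(f"sparse ER graph: N = {G.number_of_nodes()}, mean degree ~ {k_mean}, "
      f"{len(communities)} communities found, Q = {Q:.3f}")
```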
yet another concern is modularity's highly degenerate energy landscape, which may lead to very different yet equally high modularity partitions. modularity can be written as
\begin{aligned}
q &= \frac{1}{2m} \sum_{i,j} \left[ a_{ij} - \frac{k_i k_j}{2m} \right] \ibra{c_i = c_j} \nonumber \\
&= \sum_c \left[ \frac{m_c}{m} - \left(\frac{k_c}{2m}\right)^2 \right] , \label{eqn : modularity}
\end{aligned}
where m is the total number of links in the network, a_{ij} is the adjacency matrix, k_i is the degree of node i, c_i is the community containing node i, m_c is the total number of links inside community c, and k_c is the total degree of all nodes in community c. the first definition of q illustrates the intuition of its form: for every node pair that shares a community we sum the difference between whether or not that pair is actually linked and the expected ``number'' of links between those same two nodes if the system was a purely random network constrained to the same degree sequence (this null model is known as the configuration model, and the loss term is approximate). this is then normalized by the total number of links in the network. by rewriting the sum over node pairs as a sum over the communities themselves, the second definition of q makes clear the resolution limit: global changes to the total number of links will disproportionately affect each community's local contribution to q. this can potentially shift the maximal value of q to a different partition even when the local structure of the communities remains unchanged. equation ( [ eqn : modularity ] ) gives values between and . when , there is strong evidence that the discovered community structure is not significant, at least according to this null model, while the communities are considered better and more significant as q grows. in practice, researchers may assume that a network possesses modular structure when or . however, since fluctuations can induce high modularity in random graphs, one must always approach the raw magnitude of q with caution: statistical testing (sec. [ sec : nullmodel ]) may provide stronger evidence for the presence of modules than modularity alone. yet another approach to quantifying community structure is that of partition density. partition density was introduced specifically for the case of link communities, where links instead of nodes are partitioned into groups. this allows for communities to overlap since nodes may belong to multiple groups simultaneously. we do not consider overlapping communities here, but partition density can still be calculated for nonoverlapping node communities. the partition density is
\begin{equation}
d = \sum_c \frac{m_c}{m} \, \frac{m_c - (n_c - 1)}{\binom{n_c}{2} - (n_c - 1)} ,
\end{equation}
where n_c is the number of nodes in community c. partition density measures, for each community, the number of links within that community minus the minimum number of links necessary to keep a subgraph of that size connected, n_c - 1, the size of its spanning tree. this is then normalized by the maximum and minimum number of links possible for that connected subgraph, \binom{n_c}{2} and n_c - 1, respectively. the partition density is then the average of this quantity over the communities, weighted by the fraction of links m_c / m within each community. for a link partition that covers an entire connected network, we have \sum_c m_c = m, but this does not necessarily hold for a node partition. a crucial feature of the partition density is that it explicitly compares the link density of a subgraph to that of a tree of the corresponding size. this controls for the fact that the subgraph in question is connected, making the reasonable assumption that communities should be internally connected.
the null model used by modularity, on the other hand, does not make this assumption, and it may potentially assign very low probabilities to such an event. as we will show, this is a crucial aspect of modularity. we now study a model tree graph that one may consider to not possess modular structure and show that these graphs possess partitions with arbitrarily high modularity values. we also study a mixed case graph containing both modular and non-modular structures. the cayley tree is a regular graph with no loops and where every node has the same degree (except for leaf nodes on the boundary, which have degree 1). see fig. [ fig : cartoon_cayley ], where the root node is indicated in white. it can be constructed by first starting from a root node at generation 0, giving that node child nodes, and then repeatedly giving each new child children of its own. this continues for a fixed number of generations. these trees can grow either in ``width'' (via the branching factor) or in ``depth'' (via the number of generations). the number of nodes in generation is , and the total number of nodes is . since this is a tree, the total number of links is one less than the number of nodes. since the bulk of the graph is regular, the cayley tree has no density fluctuations (all connected subgraphs of the same size have the same number of links), and so it does not in an obvious way conform to our preconceived notions of communities as internally dense, externally sparse groups. in the thermodynamic limit the cayley tree is known as the bethe lattice. we concern ourselves here primarily with finite graphs, however, such that finite size and edge effects cannot be ignored. we now compute the modularity of a specific partition of the cayley tree, which we call the _analytic partition_. first place the root node into a community of its own. then create a new community for each child of the root node, containing that child _and all of its descendants_. thus there is one community per child of the root node, plus the singleton community containing the root. apart from the singleton community containing the root node, every community is a complete -ary tree (which is not exactly a cayley tree) with generations. partitioning the tree in this way requires cutting only the links attached to the root. there are zero links inside the singleton community and links inside the other communities. to compute the total degree of nodes within each community, we note that each non-root community is a tree joined to the root by a single cut link, so its total degree is twice its number of internal links plus one.
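a minimal numerical sketch of the analytic partition described above is given below. the branching convention (every node, including the root, receives b children for g generations) and all helper names are assumptions made for illustration and need not match the exact conventions used in the text; the point is only that evaluating eq. ( [ eqn : modularity ] ) directly on such a partition of a tree yields a large modularity.

```python
# sketch: modularity q = sum_c [ m_c / m - (k_c / 2m)^2 ] for the analytic
# partition of a rooted tree in which every node gets b children for g
# generations (an assumed convention, used here only for illustration).
def build_tree(b, g):
    edges, frontier, next_id = [], [0], 1
    for _ in range(g):
        new_frontier = []
        for parent in frontier:
            for _ in range(b):
                edges.append((parent, next_id))
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return edges

def modularity(edges, community):
    m = len(edges)
    m_c, k_c = {}, {}
    for u, v in edges:
        k_c[community[u]] = k_c.get(community[u], 0) + 1
        k_c[community[v]] = k_c.get(community[v], 0) + 1
        if community[u] == community[v]:
            m_c[community[u]] = m_c.get(community[u], 0) + 1
    return sum(m_c.get(c, 0) / m - (k / (2.0 * m)) ** 2 for c, k in k_c.items())

b, g = 3, 6
edges = build_tree(b, g)
community = {0: 0}                      # the root sits in its own singleton community
for child in range(1, b + 1):           # one community per child of the root ...
    community[child] = child
for u, v in edges:                      # ... inherited by all of that child's descendants
    community.setdefault(v, community[u])
print(modularity(edges, community))     # roughly 0.66 here; approaches 1 - 1/b for deep trees
```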
|
much effort has gone into understanding the modular nature of complex networks . communities , also known as clusters or modules , are typically considered to be densely interconnected groups of nodes that are only sparsely connected to other groups in the network . discovering high quality communities is a difficult and important problem in a number of areas . the most popular approach is the objective function known as modularity , used both to discover communities and to measure their strength . to understand the modular structure of networks it is then crucial to know how such functions evaluate different topologies , what features they account for , and what implicit assumptions they may make . we show that trees and treelike networks can have unexpectedly and often arbitrarily high values of modularity . this is surprising since trees are maximally sparse connected graphs and are not typically considered to possess modular structure , yet the nonlocal null model used by modularity assigns low probabilities , and thus high significance , to the densities of these sparse tree communities . we further study the practical performance of popular methods on model trees and on a genealogical data set and find that the discovered communities also have very high modularity , often approaching its maximum value . statistical tests reveal the communities in trees to be significant , in contrast with known results for partitions of sparse , random graphs .
|
the monte carlo technique has served us well in the study of equilibrium statistical mechanics and other fields .the traditional local monte carlo method is simple , extremely general , and versatile .however , there are some intrinsic drawbacks .first , the convergence of the results to the exact values is slow .the basic probabilistic nature has limited the monte carlo error to decrease as , where is monte carlo steps or computer time . with exception of quasi monte carlo for numerical integration ,as long as we use a probabilistic approach , it does not appear possible to overcome this barrier .most of the work to improve the efficiency of monte carlo method has been via variance reduction , which reduces the value of coefficient in front of the law .next , while the traditional monte carlo method is good for computing expectation values such as the internal energy and its derivatives , it is more difficult to compute the free energy or entropy . over the last few decades, a number of methods have been developed to compute the density of states .the histogram method and the multiple histogram method can be regarded from this point of view .the multi - canonical method in some sense is also a computation of the density of states .both of these methods involve the re - weighting of probabilities to construct the canonical distribution .et al _ proposed a broad histogram method , in which the density of states is also computed from simulation .if the density of states can be computed with sufficient accuracy , then most thermodynamic quantities can be obtained with little further effort .this includes the moments of energy , entropy , and free energy .moreover , the results are obtained as a continuous function of temperature from a single simulation . in this paper , we present such a method and study its efficiency .this method includes the use of a transition matrix , a stochastic matrix defined in the space of energy , and a class of related simulation algorithms .the present method has the elements of both the broad histogram method and multi - canonical method .the flat - histogram algorithm offers an effective way to compute density of states for all energy . with its multi - canonical element, it also offers fast dynamics for systems at first - order phase transitions .the use of transition matrix improves the efficiency of data analysis . in the next section, we shall discuss the formalism and the essential aspects of the method .we also present the results of some numerical tests and discuss the connections of our method with previous methods . in the appendices , we give details of a transition matrix dynamics .the monte carlo method aims at generating samples with probability distribution , where is a particular state of the system . in the ising model , which we shall use as a concrete example, is a vector of all the spins , where . in the usual application of the monte carlo method , the invariant distribution is given by the canonical distribution ( gibbs distribution ) .however , this need not be the case . in the equal - hit ensemble that we shall discuss later , is not known , and is not even unique . 
nevertheless , it is still a valid monte carlo algorithm that can have significant advantages .a sequence of states or samples is generated by a markov chain with transitions between states described by a matrix .this is the conditional probability that state moves to given that the current state is .this matrix is known as a stochastic matrix and it must satisfy is considerable freedom in choosing the matrix , but the most important condition ( or criterion ) is detailed balance markov chain that satisfies the above condition is called a reversible markov chain .this condition guarantees the invariance of the probability with respect to the transition matrix , i.e. , application of to an arbitrary probability distribution makes the resulting probability distribution converge to a fixed point , .we shall not elaborate on the condition that an invariant probability distribution exists and is unique . roughly speaking, we must be able to make transitions in a finite numbers of steps from any initial state to any final state .this is known as ergodicity .the standard metropolis algorithm is to take is a selection function a conditional probability of attempting to go to state given that the current state is . within the above formulation, it is required that the matrix is symmetric , this condition can also be relaxed . following oliveira , we call this condition microscopic reversibility .the diagonal elements of are fixed by the normalization condition , eq .( [ eq - norm ] ) .note that the diagonal elements are not needed explicitly in a computer simulation . in a computer implementation, a move is selected according to ( e.g. , pick a site to flip a spin ) .the move is made if a random number between 0 and 1 is less than the flip rate ; otherwise , it is rejected , and the original configuration is counted once more as the next configuration in a monte carlo move . clearly the above formalism is very general . 
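as an illustration of the selection-plus-acceptance structure just described, here is a minimal single-spin-flip metropolis sketch for the two-dimensional ising model; the lattice size, temperature, and variable names are illustrative choices, not taken from the text.

```python
import math
import random

def metropolis_ising(l=16, beta=0.4, sweeps=1000):
    """single-spin-flip metropolis sampling of the 2d ising model on an
    l x l periodic lattice (coupling j = 1)."""
    spin = [[random.choice((-1, 1)) for _ in range(l)] for _ in range(l)]
    for _ in range(sweeps):
        for _ in range(l * l):                       # one sweep = l*l attempted flips
            i, j = random.randrange(l), random.randrange(l)
            nn = (spin[(i + 1) % l][j] + spin[(i - 1) % l][j]
                  + spin[i][(j + 1) % l] + spin[i][(j - 1) % l])
            d_e = 2 * spin[i][j] * nn                # energy change of flipping spin (i, j)
            # accept with probability min(1, exp(-beta * d_e)); otherwise the old
            # configuration is counted again as the next sample
            if d_e <= 0 or random.random() < math.exp(-beta * d_e):
                spin[i][j] = -spin[i][j]
    return spin
```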
although the procedure can be used to sample any distribution , it has its limitations .one drawback of standard algorithm is that the configurations generated are correlated .these correlations severely limit the efficiency of the method near phase transitions or for models with competing interactions and many local minima .a number of methods have been proposed to address this problem , such as the cluster algorithms , the multi - canonical methods , replica monte carlo , and simulated tempering .the flat histogram method presented in this paper is similar in some aspects to the multi - canonical method .the implementations of flat histogram method and transition matrix based methods are very simple and efficient .the concept of an energy histogram is essential to all of these methods .other types of histogram of macroscopic quantities can be easily defined in analogy to the energy histogram and may be useful in some contexts , such as the joint histogram of energy and total magnetization .we define the energy histogram ( in the case of a discrete energy spectrum ) as the number of instances of each value of the energy generated during a monte carlo simulation ; we denote the histogram by .the histogram is important because of its direct relationship to the probability distribution of the energy in the system being simulated .if the probability of a state is given by for a given simulation , then the probability that the system has an energy .if the monte carlo simulation in question generates configurations , then the expectation value of the histogram ( average of the histogram over an infinite number of similar monte carlo runs ) is given by for the canonical distribution , we have where is the partition function , .we define the density of states ( for systems with discrete energy spectrum ) the number of states with energy .note that the configuration dependence of the probability is only through energy implicitly .thus , two configurations with the same energy will have the same probability .we shall call this the microcanonical property .the transition matrix monte carlo to be discussed below relies on this property crucially , while allowing the function to be arbitrary .the histogram sampled during a monte carlo run ( the number of visits to energy ) is an estimator to , i.e. , .the usual canonical monte carlo method is equivalent to using the number of visits to compute the moments of at the simulation temperature .the histogram method of ferrenberg and swendsen is based on the simple observation that density of states can be estimated ( up to a proportionality constant ) by . with this information, the moments of can be extrapolated for nearby temperatures as well . 
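the single-histogram idea can be sketched in a few lines: given the energy histogram collected in a canonical simulation at one inverse temperature, the density of states is estimated up to a constant and then reweighted to a nearby temperature. the function and argument names below are assumptions for illustration.

```python
import math

def reweighted_energy(hist, beta0, beta):
    """estimate the mean energy at inverse temperature beta from an energy
    histogram collected at beta0.  hist maps energy -> number of visits.
    uses n(e) proportional to h(e) * exp(beta0 * e) and reweights with
    exp(-beta * e); logarithms are used to avoid overflow."""
    log_w = {e: math.log(h) + (beta0 - beta) * e for e, h in hist.items() if h > 0}
    shift = max(log_w.values())
    z = sum(math.exp(lw - shift) for lw in log_w.values())
    return sum(e * math.exp(lw - shift) for e, lw in log_w.items()) / z
```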
clearly ,if we can determine , then most of the energy related thermodynamic averages can be determined , such as internal energy , specific heat , free energy , and entropy .the free energy is given then by other quantity of interest can also be computed if a `` histogram '' ( as a function of ) of such quantity is also collected , the main objective of this paper is to show that we can determine the density of states for the whole range of energy with monte carlo sampling efficiently .the transition matrix defined below serves a dual purpose for the computation of the density of states and for the construction of flat histogram algorithms .there are a number of ways to look at the transition matrix based methods .we shall take the detailed balance equation , ( [ eq - detailed - balance ] ) , as a basic starting point .consider all initial states with energy and all final states with energy .each pair of states has a detailed balance equation. some of the equations may be the identity if the transition by a single - spin flip is not possible .summing up the detailed balance equations for all the states with a fixed energy and all the states with a fixed energy , we have that the configuration probability distribution is a function of energy only , i.e. , , and defining the transition matrix in energy as have a consequence of being a stochastic matrix , is also a stochastic matrix : the histogram being the invariant distribution : similar definition to eq .( [ eq - t - matrix - definition ] ) was introduced in ref . in a different context .because the matrix is composed of two factors , only the second of which depends on the specific ensemble under consideration , it is convenient to refer all calculations to the `` infinite - temperature '' transition matrix , is the number of spins , or more generally , the number of allowed moves from a given state .if we define , we then have in a random single - spin - flip dynamics , equals if the two configurations and differ by one spin , and zero otherwise . thus , the second summation over gives the number of configurations of energy that can be reached from of energy by a spin flip .the first summation is over the configurations with energy , i.e. , a microcanonical average of the quantity .the constancy of for the nonzero matrix elements is important for this interpretation of .the quantity is central to the current method , as well as to the broad histogram method . 
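a possible bookkeeping for the quantity just introduced is sketched below: for every sampled configuration one records, for each allowed energy change, the number of single-spin flips producing that change, and the cumulative average over visits to an energy level then estimates the microcanonical average and hence the infinite-temperature transition matrix. the data structures and names are illustrative assumptions.

```python
from collections import defaultdict

# running estimates of the microcanonical averages < n(sigma, delta_e) >_e.
count = defaultdict(float)   # (e, delta_e) -> accumulated number of allowed flips
visits = defaultdict(int)    # e -> number of sampled configurations with that energy

def record_state(e, moves_by_delta):
    """moves_by_delta maps each possible energy change delta_e of a single
    spin flip to the number of flips of the current configuration that would
    produce it; every sampled configuration is recorded once."""
    visits[e] += 1
    for delta_e, n_moves in moves_by_delta.items():
        count[(e, delta_e)] += n_moves

def t_infinity(e, delta_e, n_spins):
    """estimated infinite-temperature matrix element t(e -> e + delta_e):
    the cumulative average of n(sigma, delta_e) over visits to energy e,
    divided by the total number of possible moves n_spins."""
    if visits[e] == 0:
        return 0.0
    return count[(e, delta_e)] / visits[e] / n_spins
```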
within the single - spin - flip dynamics , the matrix then given by any flip rate can be inserted once has been determined .substituting eq .( [ eq - transition - matrix ] ) into the energy detailed balance equation ( [ eq - energy - detailed - balance ] ) , we can cancel and as for a valid dynamics which generates distribution , we have the usual detailed balance , final equation is known as broad histogram equation , initially presented by oliveira _et al _ , in terms of the transition matrix notation , this becomes name `` broad histogram '' equation is historical and clearly a misnomer .the above equation has a very simple interpretation .consider all pairs of states with energy and states with energy such that the moves ( or transitions ) between and are allowed .these states correspond to states for which the matrix elements in are non - zero .due to the microscopic reversibility , if to is allowed , so is the reverse move from to .there are two ways to count the total number of moves , summing up from states with energy or summing up from states with energy .the state has ways to move into energy .the total number of moves to energy from all states with energy is . by the reversibility requirement, we must have microcanonical average of any quantity is defined by this definition , the previous equation ( [ eq - n - eq ] ) is reduced to eq .( [ eq - broad - histogram ] ). this argument is first put forth by oliveira and by berg and hansmann .clearly , the result does not depend on what type of moves we use , so long as it satisfies the reversibility condition .the detailed balance condition imposes a restriction on the transition matrix , which we call the identity .consider three distinct energy levels , , , and , for which energy transition matrix elements among them are nonzero .let us write down three equations associated with the transitions among them : the left and right sides of the three equations together and assuming that the product is nonzero , we can cancel this factor from the equation and obtain : is the identity .the importance of this equation is that it does not require the knowledge of the stationary distribution to check for agreement of the data with the condition of detailed balance .while for normal monte carlo simulation , the detailed balance is built - in directly to the transition matrix , this is not the case for some of the transition matrix monte carlo schemes that have been proposed .one implication of detailed balance violation is that the microcanonical property that all states with the same energy have the same probability is violated .this detailed balance violation for the initial choice of oliveira s broad histogram dynamics , a particular choice of the transition rate , has been demonstrated explicitly for small systems .the significance of the identity is that given the probability of energy having value , if we can predict at energy by the detailed balance equations in two ways , one directly from to , one by two hops , from to , and then to , then the identity guarantees that the results are exactly the same .that is , and the identity says that the two predictions based on the detailed balance are equal , .detailed balance implies identity .is the reverse also true ?i.e. 
, given a complete set of identities , do they imply detailed balance in the sense of equation for all and ?the answer is yes .the identity turns out to guarantee a consistent ( detailed balance ) solution involving three jumps , say , to , to , to , versus to directly when such jumps are allowed .therefore , identities that follow from detailed balance are automatically fulfilled and neither provide further information , nor require separate proof .naturally , when we consider adding more complex monte carlo moves , either to improve the efficiency of a calculation or to reflect the nature of a more complex model , more identities are generated and identities with more factors of are automatically satisfied .we define a quantity to measure the detailed balance violation for three energy levels for which the transitions among them are nonzero as is monte carlo estimate of . for a single - spin - flip dynamics with the metropolis rate ,the energy transition matrix is given by eq .( [ eq - transition - matrix ] ) , with , thus the above equation is equivalent to ising model where energies are equally spaced , we consider three levels at , and , where is the coupling constant . a plot of for ising model is presented in ref . .if a configuration has probability , then the histogram is .a flat histogram is obtained if we take .a single spin flip with the flip rate can be used to do the simulation .lee s reformulation of multi - canonical method is essentially this . the trick there is to determine efficiently . from the equation describing detailed balance for the transition matrix , ( [ tmatrix - broad - hist ] ) ,we can write the acceptance rate as is the first equation derived for flat histogram dynamics , although we will show below that it is not unique .this rate is first proposed in ref . , and is independently discovered by li . unlike the quantity , a good approximation is already available in the very beginning , since we can use the instantaneous value as a preliminary estimate for . a cumulative average of contributions to can be used as a convenient , and remarkably accurate , approximation for the microcanonical average .we shall discuss how good this scheme is in a later section .there is another equivalent way of looking at the problem .consider the energy detailed balance equation in the form we require that the histogram is a constant , , then the spin - flip rate must satisfy the following equation , , eq .( [ eq - flat - histogram - rate ] ) satisfies the above equation .moreover , there is a whole family of choices of the transition rates .some of the choices are given in table [ tb - rates ] .the standard metropolis algorithm contains two steps .first , a move is proposed .next , this move is accepted with probability or rejected with probability , where .is it possible to always accept a move without sampling the same configuration repeatedly ?the answer is yes , if we are willing to keep extra information during the simulation . in the flat histogram method ,this extra information is already there for free .it is precisely . 
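before turning to the details of the rejection-free formulation, the flat-histogram acceptance rate derived above can be written as a small helper that takes the two cumulative microcanonical estimates as inputs; setting the rate to one when an estimate is still empty (a convention also used later in the text) pushes the walk toward unvisited energies. the names are illustrative.

```python
def flat_histogram_rate(n_back_at_new, n_forward_at_old):
    """flat-histogram acceptance rate for a proposed move e -> e',
        r(e -> e') = < n(sigma', e - e') >_{e'} / < n(sigma, e' - e) >_e ,
    evaluated from the two cumulative microcanonical estimates passed in.
    if either estimate is still empty the move is accepted outright, which
    drives the walk into energies that have not been visited yet."""
    if n_back_at_new <= 0.0 or n_forward_at_old <= 0.0:
        return 1.0
    return min(1.0, n_back_at_new / n_forward_at_old)
```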
in an -fold way simulation (also known as event - driven simulation ) , we do not change the dynamics ; it is fully equivalent to the usual single - spin - flip dynamics .however , there is a substantial improvement in efficiency in those situations for which the rejection rate is high .the method begins by computing the total probability that a move would be accepted with the standard approach .for the ising model , this probability is the factor is due to the fact that each spin located at site is picked up with probability .the notation refers to a configuration with the spin at site reversed in sign .the quantity is the probability that any spin is flipped .since the flip rate depends on the initial and final energies only , we can simplify the above equation as divide the possible moves into classes according to their energy increment . within a given class ,each spin has the same flipping probability .we now set the probability of the class with energy change being chosen as spin in this class is then chosen at random and flipped with probability one . as a practical consideration in designing algorithms , we note that the condition that must be between 0 and 1 can now be relaxed because of the normalization by in this equation . in the original algorithm , monte carlo timeis rescaled to make the dynamics equivalent to that of the usual algorithm . for the purpose of calculating averages, we will reweight each configuration to achieve the same effect .each configuration in the original single spin flip has equal weight .the monte carlo average of a quantity is computed as at step and the subsequent steps could be the same , due to the possibility of rejecting a move .the probability that a move is rejected is .thus the life - time of a configuration has the probability distribution average life - time for configuration is then in the -fold - way simulation , each step generates a distinct state from the immediate preceding one ; each of these states is supposed to last for a during of , on average . thus in replacing eq .( [ eq - average ] ) the correct formula for statistical average with -fold - way simulation is naively the computation of seems to require basic steps at each single - spin - flip move .but the effect of changing configuration by a flip is local , involving only the site in question and its neighboring sites .we only need to compute the changes in .thus each move takes in computer time .it is few times slower than a corresponding straightforward single - spin - flip program .the -fold way does require extra memory , as a list of spins for each class is required in order to be able to pick a spin from the class with a computer time of .the flat histogram ensemble in some sense is the best ensemble to evaluate the density of states , for each energy level is sampled with the same frequency .however , as we have seen in the -fold way , this is not entirely true , as some configurations are weighted more than others .the equal - hit algorithms generate fresh configurations with equal probability at each energy .there is a very interesting aspect of these algorithms the histograms in such algorithms are not unique and depend on the details of the dynamics . the energy histogram in the normal single - spin - flip dynamics ( that is , not in the -fold way ) is computed as .e . , the contribution to the histogram from the configuration of energy is 1 for and zero for other energies .the angular brackets refer to monte carlo average . 
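since every n-fold-way step produces a fresh configuration, averages have to be weighted by the mean lifetime of each configuration, as in the reweighting formula given earlier; a minimal sketch (with illustrative names) is:

```python
def n_fold_average(samples):
    """average of a quantity over n-fold-way samples.  each entry of `samples`
    is a pair (q, a), where q is the measured quantity for that configuration
    and a is its total acceptance probability in the ordinary single-spin-flip
    dynamics; the configuration is weighted by its mean lifetime 1 / a."""
    weighted = sum(q / a for q, a in samples)
    lifetime = sum(1.0 / a for q, a in samples)
    return weighted / lifetime
```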
in the -fold way, the statistics are weighted with to get the equivalent result in the original dynamics , so we have angular brackets indicate simple arithmetic average over the samples generated in an -fold way , c.f .( [ eq - n - fold - average ] ) .we put a subscript n to emphasize the fact that the average is over -fold way samples .we define the hits as quantity measures the average number of fresh configurations generated at each energy , since in the -fold way , each configuration in the sample is distinct from the previous sample due to the fact that there is no rejection in -fold way .we can relate the histogram to the hits by the average of inverse acceptance rate for states of fixed energy in the -fold way samples .note that this is not the same as the microcanonical average of , which in general is eq .( [ eq - histogram - hit ] ) into energy detailed balance equation , ( [ eq - histogram - detailed - balance ] ) , we have - hit is realized if the flip rate satisfies the above equation with .for example , choices are also possible ; some of them are given in table [ tb - rates ] .we note that since the transition rate depends on the underlying dynamics through , there is no guarantee that the histogram is unique .in fact , in the equal - hit dynamics , is not known , and has to be determined self - consistently through the equal - hit dynamics .we note that is defined by eq.([eq - a ] ) using , which uses in eq .( [ eq - hit - rate ] ) .there is also a peculiarity that diverges for some choices of the transition rates for the ground states .in this section , we evaluate the performance of various proposed algorithms . in table [ tb - rates ] , we list a dozen possible choices of flip rates , . for each rate , the simulation can be implemented with or without the -fold way .as the flip rates require the knowledge of exact microcanonical average , which is not available , we use the crucial approximation energy histogram is the number of samples generated for a given energy , , is the total number of samples generated so far , and is the configuration at step in the algorithm .we collect a sample after every attempt of moves .the above expression is suitable for the normal single - spin - flip algorithm . in -fold way, the statistics have to be weighted by .whenever information is not available , we set the flip rate to 1 .this biases the system to visit unexplored energy levels . in the above method ,the simulation is started automatically , without an iteration process .this bootstrap is efficient and is also correct in the sense that it converges .we shall call the above method iteration 0 . strictly speaking ,iteration 0 need not be a valid monte carlo algorithm as the transition rates are fluctuating quantities , thus the normal markov chain theory for convergence does not apply .however , numerical results do support convergence , although a rigorous proof is lacking .= 0.8 figure [ fig - histo ] presents energy and hit histograms for a square lattice for an 8-state ferromagnetic potts model .as expected , the energy histogram is flat for the flat histogram algorithm .the hits are roughly proportional to the number of different configurations generated .this is not exactly true since in the -fold way , we can only guarantee that the next configuration is different from the immediate preceding one. the new configuration can be the same with configurations in earlier steps . 
by requiring that the hits are equal for all energies, we obtain the equal-hit algorithm, with the corresponding energy and hit distribution shown in the lower part of the figure. due to the statistical nature of the histogram and also due to the fact that the energy range is explored similarly to a random walk, the histogram is not exactly flat, but fluctuates around a mean. during a simulation of length of monte carlo steps (sweeps), we have generated independent samples, where is the correlation time related to the histogram. these samples are distributed to order bins of different energies (for the two-dimensional ising model, it is exactly bins). thus, each bin has about independent samples. the relative fluctuation of the histogram is (asymptotically for large ) although the above argument applies for the fluctuation between different runs, it is also reasonable to apply it to the fluctuation among different energies, since the samples at different energies are assumed independent. the same analysis also applies to the hits in the equal-hit algorithm. equation ( [ eq - fluctuation ] ) can be used as a definition for the correlation time. a perfect poisson process has a constant correlation time ( ); an ideal random walk in energy is . we compute for the nearest-neighbor ising model on a one-dimensional chain, a two-dimensional square lattice, and a three-dimensional cubic lattice. we found that numerically eq. ( [ eq - fluctuation ] ) is approximately satisfied, with the correlation time growing with size, in one dimension, in two dimensions, and or in three dimensions, see fig. ( [ fig - h - slowing - down ] ). algorithm 0 becomes inefficient (slow convergence) for large lattices in two and three dimensions. we shall discuss this further later on. table [ tb - rates ] lists the choices of the flip rates and their errors in the transition matrix with respect to the exact results on a square lattice for the ising model; in the formulas we have , , , , where is the current energy and is the proposed new energy, and in the simulations the exact microcanonical average is replaced by a cumulative average. we tested the zeroth iteration algorithms for convergence to the exact values for the infinite-temperature transition matrix, , on an ising square lattice model. the exact result is obtained by an exhaustive enumeration. figure [ fig - t - converge ] is a plot of the overall error in the -fold-way simulation data as a function of monte carlo simulation length (averaged over many runs). the monte carlo times are in units of sweeps (one sweep is one attempted flip per spin). the asymptotic value for large is listed in table [ tb - rates ], which characterizes the rate of convergence. as expected, the best algorithm is algorithm number -1, for which the input is exact. the algorithms (or rates) number 0, 1, and 7 are only slightly worse than the best. numbers 5 and 6 are the second best. the rate numbers 3, 4, and 11 may still converge extremely slowly or not converge at all. we do not understand why this is so. number 9 does not converge, which we know since it does not satisfy the detailed balance equation. using instantaneous values rather than average values is clearly wrong. errors for the single-spin-flip algorithm are about 1.5 times larger than with the -fold way, or about less efficient in monte carlo steps.
however , since the -fold way is slower in cpu time than single - spin flips by a factor of 2 or more , the two methods are comparable in overall efficiency in this particular instance . from the above results we conclude that even the zeroth iteration converges to the correct resultsthis does not mean that the rate of convergence is uniform in .in fact , we found for large lattices , the violation of detailed balance , , is large at the two ends of the energy spectrum .there are a number of different ways of determining the density of states .the matrix has eigenvalue 1 with corresponding left eigenvector .however , the solutions of the eigenvalue problem are numerically unstable . the broad histogram equation , ( [ eq - broad - histogram ] ) , can be used . in the simplest application, we can ignore the extra equations , and consider only these with smallest , and obtain solution from iteration , there are more equations of this type than the unknown , we can use least - squares method .our experience suggests that we should view the problem as an optimization with nonlinear constraints .there are two possible models for an optimization solution . introducing the optimization variable , consider , to any known constraints .for the -dimensional ising model , we have three conditions are the symmetry between low and high energies , the degeneracy for the ground states , and the total number of states .the weight is the variance of the monte carlo estimates of the quantity .the above minimization problem is essentially a linear problem .we solve it by an iterative method .the second , different formulation of the optimization is expressed in variable , is monte carlo estimate with error , and is unknown . the minimization is subject to the conditions last one is symbolically a identity , see eq .( [ eq - ttt - identity ] ) for a concrete example . for the ising model ,there is also an additional symmetry relation set of constraints is much more difficult to handle .intuitively , this second optimization problem should give better result than the first one .however , this is not the case , at least for the two - dimensional ising model . for this model , optimization in , eq .( [ eq - opt - t ] ) , gives twice the error ( defined below ) of the first optimization method .a simple iteration with eq .( [ eq - simple - iter ] ) and has 4 times the error comparing to optimizing solution of eq .( [ eq - minimize - n ] ) and ( [ eq - opt - n2 ] ) .= 0.90 figure [ fig - n - conv ] is another convergence test plot for algorithm 0 and a two - stage simulation , both with -fold way , on the two - dimensional ising square lattices . in this plot, we consider the relative error per energy level for density of states , the normalization by is to exclude ground state energy and its symmetric state energy , for which the exact values are imposed . the exact value is obtained according to ref . 
; is monte carlo estimate obtained from solving eq .( [ eq - minimize - n ] ) with .we see signs of slower convergence for large lattice sizes for algorithm 0 .a two - stage simulation improves the efficiency to that of using the exact density of states in flip rates .we first apply the algorithm 0 using cumulative average in the flip rates ; we then apply algorithm ( lee s method of multi - canonical algorithm ) , using the density of states obtained in the first step .both steps use the same number of monte carlo sweeps .with the two - stage algorithm , the slowing down is roughly , using a similar definition as given in eq .( [ eq - fluctuation ] ) .= 0.90 we have already presented two correlation times characterizes the convergence of the histogram and characterizes the convergence of the density of states . both of them showed the effect of reduced performance when the size is increased . by definition of the correlation times and , they also measure roughly how many monte carlo steps are needed to generate independent samples .of course , in , it also reflects the effect of data analysis methods .another measure of the performance of algorithms is the tunneling time . in our study , we define the tunneling time as the average monte carlo steps that the system in the lowest energy state goes to the highest energy state , or vice versa .more precisely , as soon as a ground state is reached , we record the current monte carlo move , and then look for the highest energy , which may happen at .we then look for the ground state again .the difference consists of one sample for the tunneling time .the tunneling time for the two - dimensional ising model is very well described by ( in units of sweeps ) . an ideal random walk in the space of energy would have an exponent of 2 ( in general ) .the dynamics is close but not quite random walk in energy .the spin glass is one of the most difficult systems to simulate .the performance of the flat histogram algorithm for the two - dimensional spin glass is presented in ref . . in fig .[ fig - tun ] , we show the tunneling time as a function of system linear size for three algorithms ( algorithm 0 and 1 with -fold way , and algorithm 0 without -fold way ) on the three - dimensional ising spin glass . the gain from -fold way as comparing to standardsingle - spin - flip is by a constant factor .the equal - hit algorithm is about a factor of 6 faster in tunneling times than the algorithm 0 without -fold way .unfortunately , this gain is not very significant as the -fold - way program runs few times slower than standard single - spin flips .the slowing down exponent is about 8 , this is comparable to that of multi - canonical method .the stochastic matrix describes monte carlo dynamics in the space of spin configurations .such state space is extremely large , containing states , from which monte carlo moves sample only a very small fraction . on the other hand, we introduced a new stochastic matrix in a coarse - grained space of energy .the matrix and are related by eq .( [ eq - t - matrix - definition ] ) .we shall call the dynamics induced by this stochastic matrix energy transition matrix dynamics . 
in discrete time step ,the dynamics describes the evolution of the histogram as is the significance of this dynamics ?the dynamics describes the change of energy distribution through the following single - spin - flip moves : given the current state with energy , pick a new state with the same energy among all the degenerate spin states with equal probability , flip a spin according to the usual spin flip rate as embedded in .as we can see , since the state changes at random to a completely new state of the same energy to try another flip , its dynamics is substantially faster than single spin flip or even cluster flip .unfortunately , such dynamics is not realizable on a computer , but it is of interest for comparison with realizable algorithms .we can say more about the dynamics given by eq .( [ eq - discrete - tmmc ] ) .let us first convert the equation into continuous time which is more convenient for analytic treatment , and which is a valid description for moderately large system . introducing and and define , we have dividing both side by , taking the limit of large , we have where the continuous time transition matrix is related to the discrete step matrix by . two results were initially reported in ref .detailed derivations will be given in appendices .firstly , an explicit form of can be given for the one - dimensional ising model with glauber flip rate , as , , and is the chain length .the diagonal term is computed from the relation the rest of the elements if .secondly , equation ( [ eq - t - conti ] ) is continuous in time but still discrete in energy. we can go one step further to consider the limit of both time and energy to be continuous . for transition matrix associated with canonical ensemble, we found a partial differential equation in such limit as and are properly scaled energy fluctuation and time : with is the average energy per spin and is the reduced specific heat per spin .this equation , ( [ eq - diffusion ] ) , is equivalent to the one - dimensional quantum harmonic oscillator equation , thus the analytic solutions are readily obtained .the most important consequence of this equation is that the relaxation time is proportional to the specific heat of the system .this result can also be seen from a less rigorous point of view . since this artificial dynamicsinvolves a random walk on the probability distribution of the energy , the characteristic relaxation time will be proportional to the square of the energy fluctuation , , which is in turn proportional to the specific heat .in the single histogram method , one performs a canonical ensemble simulation at a fixed temperature , and collects the histogram . the histogram is proportional to , so an estimate to the density of states is obtained from once we have the density of states , we can use it to evaluate thermodynamic quantities at any temperature .= 0.90 unfortunately , since is approximately a gaussian function with mean and variance , where is the specific heat per spin , the accuracy of the estimates deteriorates exponentially outside the energy window of order , c.f . fig .[ fig - tmfs - err ] .detailed error analyses for energy and specific heat are given in .the region of good accuracy coincides with the critical region at a second order phase transition , so the single histogram method is still an extremely valuable tool to study phase transitions .the transition matrix approach can also be used in a way similar to single histogram method , i.e. 
, collecting the statistics of the transition matrix in a canonical simulation .numerical comparison suggests that the two methods are comparable in accuracy .in fact , as we can see in fig .[ fig - tmfs - err ] , for a certain interval , the transition matrix gives results which are up to 10 times better , but become comparable or worse outside the limited range of . if we use the two results to compute the average energy or heat capacity , we found that the errors are about the same .the reason is that the contributions to errors are dominated by the tails of the histogram distribution , at these ranges , the density of states is of comparable accuracy .it is somewhat disappointing that the single histogram method and transition matrix monte carlo analysis are of the same accuracy .some improvement can be made by a careful analysis using baysian method .but it is unlikely that we can bring about an improvement of order for the accuracy . both of the multiple histogram method and the multi - canonical method density of states over a wide range .while the multiple histogram method uses a collection of standard canonical simulations , the multi - canonical method uses only one simulation . in reality, a multi - canonical simulation needs to be iterated few times to converge to the desired ensemble . in this respect , the flat histogram method takes at most three iterations .the first iteration already gives excellent results , although there are noticeable biases for large systems ; the second iteration with fixed flip rates greatly improves the accuracy ; the third iteration would give correct sample average for the transition matrix as well as correct multi - canonical ensemble .= 0.90 the additional benefit of using the transition matrix is improved accuracy comparing to other methods , within the same simulation runs .figure [ fig-2d16lee ] shows the accuracy of the density of states for two - dimensional ising model on a lattice .we note that the accuracy is sensitive to the constraints imposed with the optimization .this extra accuracy comes about due to the nature of the samples that are taken . in histogram or multi - canonical methods , each new stategives one count in the histogram , while counts are collected from each state for the matrix .naively , we expect an improvement by a factor of in terms of the variance , since each state contributes 1 for the histogram , and each state contributes a number of for the transition matrix . while the accuracy of the transition matrix elements does improve as the system size increases ( may be by for the error ) , as has been pointed out by others , this accuracy is lost in the density of states .this is due to accumulation of errors , as the transition matrix elements only determine the ratio of the density of states , c.f .( [ eq - simple - iter ] ) .if we use a simple iteration method starting from the ground state , we see that this extra accuracy in the matrix elements gets canceled exactly by the accumulation of errors .however , the optimization methods of determining make the error analysis difficult . here are some quantitative comparisons .we take the exact multi - canonical rate ( algorithm ) in simulation , and collect both the transition matrix and the histogram . 
with transition matrix , the average relative errors of the density of states defined by eq .( [ eq - n - err ] ) for the two - dimensional ising model of size , 8 , 16 , 32 , and 50 with monte carlo steps in each run are 0.0003 , 0.0012 , 0.0037 , 0.011 , 0.024 , respectively .the corresponding results computed by histograms are 0.0033 , 0.010 , 0.027 , 0.058 , 0.10 , respectively . in general , transition matrix method performs better than histogram based methods , as we have shown numerically .a proposal to use the transition matrix was given by smith and bruce in 1995 in connection with multi - canonical simulation .this is further developed by fitzgerald _et al _ .the canonical transition probability ( ctp ) method also estimates the transition matrix and uses energy detailed balance equations to estimate the canonical distribution . in the simplest version , instead of collecting the histogram , a matrix is incremented by 1 for every monte carlo move from state with energy to state with energy .clearly , this quantity is an estimator of .the transition matrix is obtained by .both of the above methods and the present method are similar in the way that transition matrix is used .however , there are two important differences : ( 1 ) in ctp method , only the current move is used for statistics , not all possible moves of the state ; ( 2 ) the simulation is performed in canonical ensemble at a given temperature .= 0.90 very recently , wang and landau proposed an intriguing method to determine the density of states .the dynamics follows the usual multi - canonical simulation or entropic sampling , by the single - spin - flip rate . however , is not a constant , but is updated at each step of trial move with for the current energy .this is somewhat like the lee s method of entropic sampling , but to some extent the updating of the weights are done at every move .if were greater than 1 , the algorithm would never converge , so the idea is to reduce after some monte carlo steps , by , for example .a criterion of flatness of the histogram was used to determine if should be reduced .wang and landau s idea can be adopted in the context of transition matrix .for example , we can consider updating the logarithm of density of states , , using the information from the transition matrix by where is some small parameter and is the predicted logarithmic density of states , based on possible hops from to the current .if we already know the ground state degeneracy , we can fix it to the constant .unlike the updating rule which makes grow indefinitely , will converge to the exact value. this generalization does give more accurate results than algorithm 0 if it converges to flat histogram .however , it appears to have the problem of sticking to a gaussian - like distribution for the histogram for large systems .we made an extensive test of the accuracy of the random walk algorithm of wang and landau .in general , the random walk algorithm is far inferior to the flat histogram algorithm in terms of accuracy and rate of convergence to flatness , particularly for small systems . for the original implementation of the method ,once the system passes the transient period , the error becomes independent of the total monte carlo steps used and the system sizes , and is primarily determined by how slowly is reduced .this feature is useful for its robustness , particularly for large systems . 
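for reference, the random-walk updating described above can be sketched as follows; the flatness threshold, the number of steps between flatness checks, the schedule for reducing the modification factor, and the move proposal are all illustrative assumptions.

```python
import math
import random

def wang_landau(energies, propose_move, flatness=0.8, ln_f_final=1e-6):
    """sketch of the random-walk (wang-landau) algorithm.  `energies` lists
    the allowed energy levels and propose_move(e) returns a candidate level
    reachable from e (in a real simulation this comes from a trial spin flip)."""
    ln_g = {e: 0.0 for e in energies}          # running estimate of ln n(e)
    hist = {e: 0 for e in energies}
    ln_f = 1.0                                 # logarithm of the modification factor
    e = random.choice(energies)
    while ln_f > ln_f_final:
        for _ in range(10000):
            e_new = propose_move(e)
            # accept with probability min(1, g(e) / g(e_new))
            if random.random() < math.exp(min(0.0, ln_g[e] - ln_g[e_new])):
                e = e_new
            ln_g[e] += ln_f                    # modify the estimate at the current energy
            hist[e] += 1
        counts = list(hist.values())
        if min(counts) > flatness * sum(counts) / len(counts):
            ln_f *= 0.5                        # e.g. f -> sqrt(f)
            hist = {k: 0 for k in hist}        # reset the histogram for the next stage
    return ln_g
```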
in fig .[ fig - wl - compare ] we plot average errors in the density of states as defined by eq .( [ eq - n - err ] ) for fixed monte carlo steps of ( for the two - stage algorithm ) .it is clear that all methods have bigger errors for larger systems .but the random walk algorithm is generally order of magnitudes worse than the best transition matrix based methods .the sharp increase of the error with system sizes from 16 to 32 , and to 50 for the random walk algorithm is an indication that with steps , the system is still in transients for these sizes . if monte carlo steps are used , the random walk algorithm comes closer to the accuracy of algorithm 0 at size .the two - stage method ( algorithm 0 followed by algorithm , both using -fold way ) gives the best performance .the transition matrix approach can be used for continuous degrees of freedom . in such applications ,both the dynamic variables and energy spectrum have to be discretized .et al _ have applied the broad histogram method to the xy model .the important issue here is how to control the numerical error caused by discretization .it is straightforward to generalize the transition matrix to more than one macroscopic variable , such as energy and magnetization ; in fact , this has already been done with the broad histogram method .this approach may have problems , particularly when the hamiltonian is complicated .first , the matrix may be too large to handle in general .second , with more elements to fill , the statistics for individual entry is poor .this makes the method less robust .the transition matrix simulation can be parallelized very efficiently , where each processor works on separate configurations , using and updating a common transition matrix .each processor can be limited to work on a range of energies .the advantage of this is clearly a fast way of approaching the flat distribution for the histogram .starting from the detailed balance equation , we have formulated the transition matrix in energy .the infinite - temperature version of this matrix serves as the basic data from which we determine the density of states and at the same time is used for construction of flat - histogram and equal - hit algorithms .this method of simulation together with optimization method to determine the density of states offers a better way of computing thermodynamic quantities by monte carlo simulation .in such an approach , a single simulation produces the whole function of temperature ( or other parameter of the model ) through re - weighting .it is efficient and easy to implement .as the use of accumulated average for the transition rates causes slow convergence for large systems , a two - stage iteration is recommended and is enough to get the best convergence .dynamically , the flat - histogram algorithm for long simulations is equivalent to the multi - canonical method . using the equal - hit algorithm together with -fold wayoffers additional benefits .we thank fugao wang for providing us his programs for the random walk simulation .we also thank lik wee lee , bernd a. berg , and fei fan for comments on the manuscript . j .- s .w. is supported by nus and singapore - mit alliance research grants .we consider the single - spin - flip dynamics with a random pick of spins and the glauber flip rate .\ ] ] in one dimension for the ising model , the quantity can be evaluated exactly .let us first define a set of new variables . 
is 1 for a satisfied bond , and 0 for an unsatisfied bond .the mapping from to is unique modulo an overall spin up - down symmetry .we assume periodic boundary condition and lattice size to be even .there are three possible energy changes , , 0 , .if the original spin of a site and the spins of two neighboring sites all have the same sign , it contributes one count for the total . in terms of , it requires two consecutive satisfied bonds , i.e. , .thus we can express in terms of as note that only variables , , are independent ( since must be even ) . a microcanonical average at energy needs to be carried out .let us use to label the equally spaced energy levels , .then , and .the microcanonical average can be expressed as a summation over all subject to being an integer constant .thus we have and similarly the factor 2 is due to the two - to - one mapping from to . in order to compute the above sums , we consider the statistical mechanics problem of a one - dimensional lattice gas with the hamiltonian , the partition function of this system ( at ) is taking the derivative with respect to , we have where .the desired quantity is obtained from the generating function as .the partition function is obtained by the standard trick of transfer matrix .we find where are the eigenvalues of the matrix .\ ] ] after some algebra , we find thus a similar derivation from a slightly different hamiltonian gives combining the above results with the density of states for the ising model , $ ] , which is readily obtained by the combinatorial problem of putting unsatisfied bonds in places , we obtain the expressions given in eq .( [ eq-1d - exact - a ] ) and ( [ eq-1d - exact - b ] ) .we start from the dynamical equation where time is continuous but energy is discrete .the aim is to consider the continuous energy limit .this limit is natural and is a very good approximation for large systems .we follow the general method known as expansion .let us introduce a new variable , where will be determined later . since is of order ,naively is of order .however , by choosing to be the average of , we cancel the leading dependence ; is actually of order 1 , measuring the relative fluctuation around mean .we look for nontrivial solution in variable in the scaling limit of , keeping fixed .more precisely , we find equation in such that the coefficients of the differential equation are independent of .consider the function in terms of as .we also write in as where is the change of energy associated with , and , .we assume a -dimensional ising model in the derivation .replacing by , by , equation ( [ eq - energy - discrete ] ) becomes the crucial step now is to take taylor expansion assuming small , and to find equation that is leading order in the large limit .for notational convenient , we shall drop the tildes on and , which actually denote different functions .we know in the limit , and are smooth functions in .then ( omitting the variable for clarity ) the primes denote derivatives with respect to .we should note that in the large limit with fixed , and its derivatives do not contain .the dependence is made explicit by the above expansion .substituting eq .( [ eq - expansion ] ) into eq .( [ eq - new - variable ] ) , we have where naively , since is , we expect , , and and the equation does not have a well - defined large limit . but this is not true , due to special relations among . 
two relations are important in the derivation below to show that both , , and are of and the third derivative can be dropped in the large limit .the existence of an equilibrium implies expressed in , it is thus , using from the above \\ & = & \sum_{i \neq 0 } \left [ t_i^{\prime}(x ) { \frac { i\,a } { \sqrt{n } } } - { \frac{1}{2 } } t_i^{\prime\prime}(x ) { \frac { ( i\,a)^2 } { n } } + \cdots \right].\end{aligned}\ ] ] the last equation above used a taylor expansion . since has a scaling form in the large limit , we find that the -th derivative of at is of order .thus is of order , and we can safely replace by 0 , and find for , we make an expansion in for and find where in the last formula , the first term is of order , the second term is of order 1 , and last term is of order . if the first term were there , we would have an ill - defined limit .so we must require that this is in fact a condition to fix .we shall show that this condition requires , the canonical average of energy per spin .we evoke the energy detailed balance equation , ( [ eq - energy - detailed - balance ] ) . in terms of the new variables and , it is let , taylor expanding the terms involving , we find : ^{\prime}\delta + t_{-i}^{\prime}(0 ) \delta + o\left ( { \frac{1 } { n } } \right).\ ] ] note that the first term is of order , the second term of order .it is important to realize that we are looking for the scaling limit of , fixing . substituting this equation into the expression for , we find { \frac{k a } { \sqrt{n } } } \approx \sum_{k=1,2,\cdots , d } k^2 t_k(0 ) [ \ln h_{eq}(0 ) ] ^{\prime}{\frac{a^2 } { n}}.\ ] ] the requirement that is equivalent to say that is at the extreme of equilibrium probability distribution .when the first term in is set to 0 , we have the coefficient is a constant to leading order in : the equilibrium distribution of energy for large system is a gaussian distribution with mean , and variance where is reduced specific heat per spin , thus substituting this result into the partial differential equation in equilibrium we find . 
this same relation can also be obtained using detailed balance equation .changing variables from to and from to , we obtain eq .( [ eq - diffusion ] ) .the file below is the raw data for various errors .formally this is not part of the paper .we included here which could be useful for future benchmarking use .1 14.961 15.059 15.043 13.995 14.299 15.357 13.330 14.241 14.455 15.883 13.859 16.06 13.776 15.237 10 6.480 8.107 7.795 6.509 6.33 8.83 5.52 5.40 6.79 9.91 6.32 9.68 6.41 8.09 100 0.681 0.819 0.736 0.759 2.28 1.09 0.710 0.713 0.667 1.29 1.47 1.00 1.171 0.825 1000 0.188 0.204 0.210 0.234 1.50 0.320 0.221 0.221 0.199 0.226 0.581 0.227 0.370 0.256 1e4 0.0608 0.0644 0.0663 0.080 1.00 0.14 0.070 0.070 0.064 0.076 0.539 0.079 0.256 0.094 1e5 0.0181 0.0206 0.0215 0.030 0.698 0.070 0.0219 0.0224 0.0201 0.0246 0.5353 0.026 0.2136 0.0349 1e6 0.0060 0.0064 0.0070 0.0098 0.43 0.044 0.0067 0.0077 0.0064 0.0085 0.533 0.008 0.131 0.0144 1e7 0.00186 0.0020 0.0021 0.0040 0.357 0.0172 0.0024 0.0023 0.0021 0.0030 0.5342 0.0026 0.0926 0.0047 1e8 0.00062 0.00066 0.29 0.012 0.0842 mcs -1 0 1 2 3 4 5 6 7 8 9 10 11 12 1 16.448 15.26 15.263 15.178 15.307 15.31 15.18 15.31 15.41 15.30 15.31 15.31 15.54 15.30 10 9.329 9.162 9.165 100 1.286 1.583 1.599 1000 0.287 0.312 0.310 1e4 0.093 0.097 0.095 1e5 0.0287 0.0311 0.0314 algo no ( error at n - fold mcs=1e4 ) best -1 ( 0.061 ) very good 0,1,7 ( 0.065 ) good 5,6,8 ( 0.07 ) ok 2,10 ( 0.08 ) not so good 4,12 ( 0.1 )bad ( converge ) 3,11 ( > 0.25 ) do n't converge 9 ( 0.5 ) 2d ising model , e_n = ( 1/(n-1 ) ) sum | n(e)/n_ex(e ) - 1 | algo 0 ( n - fold - way ) mcs l=4 l=8 l=16 l=32 l=50 1 1.200 10 0.152 1.38^ 100 0.0340 0.212 1.3^ 1000 0.0109 0.0492 0.20 1^ 1e4 0.00342 0.0154 0.0568 0.28 0.31 1e5 0.00108 0.0054 0.0189 0.084 0.23 1e6 0.000338 0.00154 0.0066 0.07 0.22+/-0.02 1e7 0.000110 0.00051 0.0024 0.042 0.191+/-0.01381e8 0.000035 0.000190 0.0009 0.022 0.16 2d ising model , e_n = ( 1/(n-1 ) ) sum | n(e)/n_ex(e ) - 1 | algo 1 ( n - fold - way , equal - hit ) mcs l=4 l=8 l=16 l=32 l=50 1 1.20 10 0.142 1.27 100 0.0410 0.178 1000 0.0134 0.051 0.26 1e4 0.0042 0.0152 0.048 0.18^ 0.4 1e5 0.00138 0.0050 0.015 0.04 0.12^ 1e6 0.00044 1e7 0.00015 2d ising model , e_n = ( 1/(n-1 ) ) sum | n(e)/n_ex(e ) - 1 | algo -1 ( n - fold - way ) ( exact rate , best we can do ) mcs l=4 l=8 l=16 l=32 l=50 1 1.32 10 0.131 14 100 0.031 0.18 1000 0.0097 0.038 0.16 1e4 0.0030 0.0125 0.041 0.13 0.23 1e5 0.00096 0.00383 0.0121 0.037 0.07 1e6 0.00031 0.00119 0.0037 0.0114 0.024 1e7 0.000095 0.00040 0.00122 0.00337 0.007 1e8 0.000032 0.000103 0.00045 0.00091 ising model , same as above algo -1 n - fold - way , but use histogram to compute n(e ) , i.e. berg / lee method mcs l=4 l=8 l=16 l=32 l=50 1e5 0.010 0.033 0.0733 0.32 0.5 1e6 0.0033 0.0099 0.027 0.058 0.10 2d ising model , e_n = ( 1/(n-1 ) ) sum | ... | two stage simulation , stage 1 , using algo 0 , stage 2 , using algo -1 of n(e ) generated from stage 0 .same run length for the two stages mcs l=4 l=8 l=16 l=32 l=50 100 0.0311 0.26 1000 0.0096 0.0404 0.10 10000 0.0030 0.0122 0.0401 0.2 0.5 1e5 0.00100 0.00349 0.0131 0.038 0.075 1e6 0.00036 0.00123 0.0034 0.0133 0.017 1e7 0.00038 0.0016 0.0029 0.0074 1e8 0.0004 0.0010 modified wang fg / wang js method ( algo 13 ' ) , e_n = ( 1/(n-1 ) sum | ... 
| using s(e ) < - s(e ) + eta ( s^pred - s(e ) ) ( eta = 0.1 ) ( two numbers - using tmatrix / using s(e ) directly ) mcs l=4 l=8 l=16 l=32 100 1000 0.00936/0.358 1e4 0.00306/0.0087 0.39/0.96 1e5 0.00096/0.0028 0.0039/0.015 0.99/0.99 1e6 0.00115/0.004 0.71/0.99 1e7 0.00049/0.0014 wang fugao 's program ( fixed mcs , f_max = 2.718 , f_min=1 , 80%h , check every 1k steps ) e_n = < | n(e)/n_ex(e ) - 1 | > mcs l=4 l=8 l=16 l=32 l=50 1e4 0.172 10^ 1e5 0.065 0.073 1 1e6 0.066 0.049 0.066 0.36 9(+/-4 ) 1e7 0.055 0.073 0.043 0.05 0.18 1e8 0.069 0.047 0.07 0.025 0.14wang fg method e_n ( my program algo 13 , 80%h , checks 30 times , single - spin - flip ) mcs l=4 l=8 l=16 l=32 l=50 1e4 0.079 1.62 1e5 0.034 0.071 0.7 1e6 0.012 0.031 0.056 0.42 5.5 1e70.0044 0.010 0.025 0.06 0.14 1e8 0.0013 0.02 .... 99 k. binder , ( editor ) , `` monte carlo methods in statistical physics , '' topics in current physics , vol 7 , springer - verlag ( 1979 ) ; `` applications of the monte carlo method in statistical physics , '' topics in current physics , vol 36 , 2nd ed . , springer - verlag ( 1987 ) ; `` the monte carlo method in condensed matter physics , '' topics in applied physics , vol 71 , 2nd ed . , springer - verlag ( 1995 ) .
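for readers who want to reproduce numbers of the kind tabulated above, the two ingredients of the method - accumulating the infinite - temperature transition matrix during a single - spin - flip run and converting it to a density of states - fit in a short script. the sketch below ( python, illustrative variable names, a deliberately short run ) uses the flat - histogram acceptance rule and then chains the broad - histogram relation between neighbouring energy levels; the paper instead determines n(e) by a least - squares optimization over all matrix elements and uses the n - fold way, so this is a minimal illustration rather than the production code behind the tables.

```python
import numpy as np

L = 8                                   # linear size of the 2d ising lattice
N = L * L
rng = np.random.default_rng(0)
s = rng.choice([-1, 1], size=(L, L))

def field(s, i, j):
    # sum of the four neighbouring spins (periodic boundaries)
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
            + s[i, (j + 1) % L] + s[i, (j - 1) % L])

def energy(s):
    return -int(np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

def counts_by_dE(s):
    # N(sigma, dE): number of spins whose flip would change E by dE
    c = {}
    for i in range(L):
        for j in range(L):
            dE = 2 * s[i, j] * field(s, i, j)
            c[dE] = c.get(dE, 0) + 1
    return c

acc = {}     # accumulated counts sum_visits N(sigma, dE), keyed by (E, dE)
hist = {}    # number of visits to each energy level E
E = energy(s)

for step in range(50000):               # short run, for illustration only
    hist[E] = hist.get(E, 0) + 1
    for dE, c in counts_by_dE(s).items():
        acc[(E, dE)] = acc.get((E, dE), 0.0) + c

    # flat-histogram move: accept with min(1, T(E', -dE) / T(E, dE)), where T is
    # the running estimate of the infinite-temperature transition matrix
    i, j = rng.integers(L), rng.integers(L)
    dE = 2 * s[i, j] * field(s, i, j)
    Ep = E + dE
    if hist.get(Ep, 0) == 0:
        accept = True                    # always step into unvisited levels
    else:
        t_fwd = acc[(E, dE)] / hist[E]
        t_bwd = acc.get((Ep, -dE), 0.0) / hist[Ep]
        accept = rng.random() < min(1.0, t_bwd / t_fwd)
    if accept:
        s[i, j] *= -1
        E = Ep

# broad-histogram / detailed-balance relation n(E) T(E,dE) = n(E+dE) T(E+dE,-dE),
# chained between neighbouring visited levels (the 1/N normalisation of T cancels)
levels = sorted(hist)
ln_n = {levels[0]: 0.0}
for e0, e1 in zip(levels[:-1], levels[1:]):
    d = e1 - e0
    if (e0, d) in acc and (e1, -d) in acc:
        ln_n[e1] = ln_n[e0] + np.log((acc[(e0, d)] / hist[e0])
                                     / (acc[(e1, -d)] / hist[e1]))
```

a run of this length will not reach the accuracies quoted in the tables above; it is only meant to make the structure of the algorithm concrete.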
|
we present a formalism of the transition matrix monte carlo method . a stochastic matrix in the space of energy can be estimated from a monte carlo simulation . this matrix is used to compute the density of states , as well as to construct multi - canonical and equal - hit algorithms . we discuss the performance of the methods . the results are compared with the single - histogram method , the multi - canonical method , and other methods . in many respects , the present method is an improvement over the previous methods . pacs numbers : 02.70.tt , 05.10.ln , 05.50.+q . keywords : monte carlo method , flat histogram , multi - canonical ensemble .
|
schwarz & kennicutt ( 2004 ) recently conducted a detailed study of papers published in the _ astrophysical journal _ ( apj ) , designed in part to document the adoption rate and citation impact of the arxiv preprint server ( astro - ph ; ginsparg 2001 ) .the database they constructed for this study is a rich source of demographic and citation information for the apj , but the two most interesting results were specific to astro - ph .they found that the adoption rate of astro - ph varied widely between subfields of astronomy , with nearly universal adoption ( 95% ) in cosmology and the lowest adoption ( 22% ) in solar astrophysics and planetary science by 2002 .this difference among subfields is significant because they also found that papers posted to astro - ph are cited about twice as often as papers that are not posted .this boost in citation rates was more than a factor of 10 for cosmology papers , and still a factor of 1.8 for papers in solar astrophysics and planetary science .considering the main findings of schwarz & kennicutt about apj papers , i wondered whether the other major journals would show similar trends .is astro - ph as widely used by author communities in other parts of the world ?does astro - ph produce a comparable impact on the citation rates for papers published in the other journals ?answering these narrow questions for a large sample of journals is considerably easier than the deep study undertaken by schwarz & kennicutt for the apj . the astrophysics data system ( ads ; kurtz et al .2000 ) recently added links to the arxiv preprint records of journal papers that have also been posted to astro - ph .this new feature , combined with the citation data and search filtering capability of ads , allows users to measure the adoption rate and citation impact of astro - ph for each journal with just a few clicks .in this short paper , i document the rise in the adoption rate of astro - ph for three major astronomy journals published in north america ( apj ) , the uk ( mnras ) , and continental europe ( a&a ) .i also determine the citation impact of astro - ph on papers published in 2002 for 13 journals spanning the globe .i describe in section [ sec2 ] how to generate these statistics for any journal using the ads , and i validate the method by comparing it to the schwarz & kennicutt sample .i present and discuss the main results in section [ sec3 ] , and i summarize the major conclusions in section [ sec4 ] .for any given range of dates and for any particular journal in the ads , statistics on the fraction of papers posted to astro - ph ( the adoption rate ) and the resulting impact on the citation rate ( the citation impact ) can be generated with two simple queries .for example , generating statistics that can be compared to the schwarz & kennicutt sample requires the following : ( 1 ) enter a publication date range between 7/1999 and 12/1999 . ( 2 ) under the filters section select references from , choose all refereed publications and type the journal code `` apj .. '' in the select / deselect publications field .( 3 ) for the first query , set select references with to a bibliographic entry ( the default ) . for the second query , set it to all of the following and check arxiv preprint .( 4 ) under the sorting section , select sort by citation count . 
in the query results , just above the bibliographic entries , the first query yields the total number of apj papers published in this date range and the total number of citations to those papers in the ads database ( which is not complete , but includes data from all of the major journals beginning in 1999 ) .the second query yields the corresponding numbers for the subset of these papers that have also been linked to an astro - ph record . in this example , the first query results in total number selected : 1182 , total citations : 29904 , while the second query gives total number selected : 706 , total citations : 22623 .this means that the totals for non - astro - ph papers are 476 and 7281 respectively . since these queries return papers published as letters and main journal articles together , and because additional citations have now been added to ads , the results are not strictly comparable to the schwarz & kennicutt sample .even so , the fraction of apj articles posted to astro - ph from these queries ( ) is nearly identical to that found for main journal articles by themselves ( 61% ; schwarz & kennicutt 2004 , their table 4 ) .in addition , the mean number of citations for astro - ph papers ( ) compared to non - astro - ph papers ( ) from these queries is larger by a factor of 2.09 , which is similar to the overall factor of 2.05 found by schwarz & kennicutt ( 2004 , their table 6 ) .this broad agreement suggests that the ads can by itself generate useful statistics on the adoption rate and citation impact of astro - ph . for papers published in 2002, i repeated the procedure described above for 13 different journals spanning the globe and covering a broad range of isi impact factors .for three major astronomy journals ( apj , mnras , and a&a ) i also tracked the annual rise in the adoption rate of astro - ph from 1992 to 2004 , to look for differences in the preprint posting habits of these author communities .the rise in the use of astro - ph over the past decade has led to a major shift in the way astronomical research is disseminated , and it has had a dramatic impact on the citation rates of posted papers as a consequence . in figure[ fig1 ] , i show the fraction of papers posted to astro - ph for three major astronomy journals from 1992 ( astro - ph began in april of that year ) to 2004 . for all three journals , the adoption rate of astro - ph grew steadily into the late 1990 s and then started to level off over the past few years .authors of papers appearing in the apj and mnras have adopted astro - ph at roughly the same rate over this time period , while authors publishing in a&a have generally been slower to adopt it .current adoption rates for mnras and the apj have leveled off near 70 - 80% . as noted by schwarz & kennicutt ,this is similar to the fraction of authors who allow their articles to be posted in preprint form on the apj website ( without any additional work by the author ) .this service was initiated by the apj in 2002 , and may have actually contributed to the limited growth in astro - ph submissions in recent years by authors who considered it equivalent to astro - ph ( mnras did not start a similar service until 2004 ) .the adoption rate of astro - ph among a&a authors has grown more slowly and currently stands near 60% , a value the apj and mnras both reached in 2000 . within the next 5 years , the use of astro - ph seems likely to become almost universal among authors of papers in these major astronomy journals . 
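the adoption rates and citation impacts quoted throughout this paper all come from the same pair of ads queries described above, and the arithmetic that turns the query totals into the two statistics is a one-liner. a minimal sketch ( python ), using the 7/1999 - 12/1999 apj totals quoted earlier as input:

```python
def astroph_stats(n_all, cites_all, n_posted, cites_posted):
    """adoption rate and citation impact from the two ads query totals."""
    n_not = n_all - n_posted
    cites_not = cites_all - cites_posted
    adoption = n_posted / n_all
    impact = (cites_posted / n_posted) / (cites_not / n_not)
    return adoption, impact

# apj papers published 7/1999 - 12/1999, totals quoted above
adoption, impact = astroph_stats(1182, 29904, 706, 22623)
print(f"adoption rate = {adoption:.2f}, citation impact = {impact:.2f}")
# -> adoption rate = 0.60, citation impact = 2.09
```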
table [ tab1 ] .

journal    impact factor    # ( astro-ph )    # ( not astro-ph )    adoption rate    citation impact
nature          30.432            13                 97                  0.12
science         26.682            13                 31                  0.30
apj              6.187          1670                661                  0.72
aj               5.119           327                182                  0.64
apjs             4.749            68                 58                  0.54
mnras            4.671           809                254                  0.76
a&a              3.781          1034                802                  0.56
icarus           3.009            11                235                  0.04
pasp             2.830            62                 76                  0.45
pasj             1.996            67                 63                  0.52
soph             1.875            11                168                  0.06
pasa             0.898            24                 41                  0.37
ap&ss            0.383            47                265                  0.15

a broad survey of the adoption rate and citation impact of astro - ph on papers published in 2002 for a wide variety of journals is summarized in table [ tab1 ] . the survey includes high - impact multidisciplinary journals like _ nature _ and _ science _ , as well as a selection of astronomy journals published in various parts of the world with a wide range of impact factors . the topical journals _ solar physics _ and _ icarus _ were included to try to elucidate the low adoption rate found by schwarz & kennicutt for apj papers in the subfield that included solar astrophysics and planetary science . the most striking feature of table [ tab1 ] is that in almost every journal surveyed , papers posted to astro - ph are cited significantly more often than papers that are not posted . for most of the major astronomy journals , astro - ph papers are cited between 1.6 and 3.5 times as often as non - astro - ph papers . the median boost is about a factor of 2 , which is comparable to what was found by schwarz & kennicutt for apj papers . astronomy papers published in the high - impact journals get an even larger boost from being posted to astro - ph . despite a significantly lower adoption rate , astronomy papers appearing in _ nature _ and _ science _ that are also posted to astro - ph are cited about 5 times more often than papers that are not posted . this extra boost may come from a kind of `` brand recognition '' associated with these high - impact journals , capturing an even broader audience than an astro - ph paper published in one of the other journals . high - impact journal papers that are not posted to astro - ph have citation rates comparable to non - posted apj papers , and are actually cited less often than major astronomy journal papers that are posted to astro - ph . similarly , astro - ph papers that appear in journals with the lowest impact factors still get a citation boost over non - astro - ph papers near the average . as expected , the _ absolute _ citation rate is roughly correlated with the isi impact factor , but the boost in citations due to astro - ph does not change by much across a wide selection of astronomy journals .
the only papers in this survey that did not receive a significant boost from being posted to astro - ph were those published in _ solar physics _ ( soph ) .schwarz & kennicutt noted that the apj papers with the lowest adoption rate of astro - ph were in the solar system ( ss ) subfield , which included solar astrophysics and planetary science .the low adoption rate by itself may not explain the lack of a citation impact for soph since the adoption rate is even lower for _ icarus _ papers , which are still cited twice as often when posted to astro - ph .an alternative explanation is that relatively few solar astrophysicists learn about new research through astro - ph .it is simply not yet part of the culture in this subfield .across a wide selection of major astronomy journals , papers that are posted to astro - ph are cited about twice as often as papers that are not posted .as the single source containing most of the new research to be published in refereed journals around the world , the astro - ph preprint server appears to be the method that most astronomers now use to keep up with the literature .if citation rates are any indication of the assimilation of new research by the astronomical community , then astro - ph seems to be the best single form of advertising available .editors who want to maximize the impact factor of their journals should encourage authors to post their preprints to astro - ph , and authors in subfields where astro - ph has not yet been adopted should consider the advantages that other subfields have already discovered .ginsparg , p. 2001 , creating a global knowledge network , in electronic publishing in science , ed .d. shaw ( paris : unesco ) , 41 kurtz , m. j. , eichhorn , g. , accomazzi , a. , grant , c. s. , murray , s. s. , & watson , j. m. 2000 , , 143 , 41
|
the rise in the use of the arxiv preprint server ( astro - ph ) over the past decade has led to a major shift in the way astronomical research is disseminated . schwarz & kennicutt ( 2004 ) recently found that _ astrophysical journal _ papers posted to astro - ph are cited roughly twice as often as papers that are not posted , suggesting that the preprint server has become the primary resource for many astronomers to keep up with the literature . i describe a simple method to determine the adoption rate and citation impact of astro - ph over time for any journal using nasa 's astrophysics data system ( ads ) . i use the ads to document the rise in the adoption of astro - ph for three major astronomy journals , and to conduct a broad survey of the citation impact of astro - ph in 13 different journals . i find that the factor of two boost in citations for astro - ph papers is a common feature across most of the major astronomy journals .
|
turbulence is governed by the nonlinear equations of fluid dynamics , but those equations are too complex to be solved analytically .an understanding of the behavior of turbulent flows is better approached using statistical ideas , while also taking into account the few exact results derived by considering isotropic , homogeneous and incompressible turbulence . in all casesone must invoke statistical averaging , even if the starting point is the navier - stokes equations themselves . in this study statistics and probabilities alone appear ; the underlying equations of fluid dynamics do not enter at all .although a probabilistic approach is taken , it is clear that `` ... turbulent does not mean random .the complex motion of the fluid contains characteristic patterns , events and structures that show through all the randomness '' .our aim is to unveil this structure of turbulence using tools from information theory .nevertheless one obtains results that are consistent with those deduced using the traditional approach .these include the existence of a cascade , implying the correlation between eddies of differing sizes .let the velocity of the fluid be a random variable and be one of its possible values .for simplicity of discussion we consider only a single component .there is an associated probability density function , which is determined experimentally by counting occurrences . instead of the velocity we could also consider the vorticity measured at a point or a velocity difference measured between two nearby points in a homogeneous fluid .it will be assumed that the turbulent fluid is in a steady state , making the absolute observation time of little interest .once is in hand , we could proceed to calculate such quantities as the velocity moments .when is replaced with the velocity difference over a scale : , these moments are called the structure functions .they play an important role in turbulence theory , with particular prominence given to kolmgorov s famous exact solution , where is the energy injection rate . instead of moving in this traditional direction ,we proceed in another .we ask what the probability density function itself has to say .completely random systems have flat distributions , where all possible values have equal weight : constant . in the other extreme ,the distribution is a delta function , with only one possible value .this latter scenario corresponds to the ideal laminar state with no fluctuations .the former case is not seen even when the flow is turbulent , because constraints on the energy of the flow force it to be closer to gaussian .now we ask whether there is a way to make this description more quantitative . 
at the center of information theoryis a quantitative measure of the broadness " of probability distributions .shannon called this the entropy , apocryphally encouraged to do so by von neumann because he could safely hide inside the confusion this term evokes in the physics community .the most basic form of the entropy , which we shall later revise , is simply where can be thought of as a measure of surprise at observing any particular .the form of eq .[ eq : h ] is the same as for the energy states in , the canonical ensemble of statistical mechanics .the astute reader will notice we have used a sum instead of an integral .shannon s original work was on messages with discrete variables ( like the letters in the english language ) , and while there is a generalization to continuous variables , we will switch to the discrete form here .( the discretization of continuous variables needs to be done carefully , and we leave this issue for later . ) clearly is large when is broad and small if it is narrow .large uncertainty means large . if we are measuring the velocity in a laboratory and adopt the interpretation that the readout on our instrument is the fluid system s message " to us , then is the amount information we obtain from our measurement .we are told nothing new by repeatedly making measurements of laminar flow , since it never changes , but we are always getting new information from turbulent flow .r. s. shaw was the first to recognize this information production as a general feature of chaotic systems .an analogous observation outside of fluid dynamics is an image taken from a newspaper .the image is created by varying the local density of black dots of ink . just as some images are characterized by wild variations in paint color ( a jackson pollock painting is a good example ) ,others , such as many mondrian paintings , show small variations , and have simple geometrical forms .the information or is larger in the first of these . it is closer to high reynolds number flows and the mondrian painting is akin to flow at moderate or low reynolds number .this visual link is displayed in a juxtaposition of these paintings with the velocity field in a pipe in fig .[ pollock ] . if information theory s utility were limited to the above quantification of uncertainty through , it would indeed be of limited use . however , is the basis for extracting other quantities , like the mutual ( or shared ) information between two quantities and .these quantities will be introduced in the forthcoming examples as the need arises .the emphasis here is on the application of these tools to real data taken in the laboratory or calculated on a computer .we turn first to demonstrating experimentally the assertion made earlier about the entropy of laminar and turbulent flow .we perform measurements of the axial velocity at the centerline of a long ( cm ) , cylindrical pipe as a function of time using a dantec laser doppler velocimeter ( ldv ) and hollow glass , silver - coated 10 m particles for scattering .the reynolds number here is defined as , where is the cross - sectionally averaged velocity , and cm is the diameter , with the kinematic viscosity of water .a schematic of the setup is shown in fig .[ pipe_setup ] . 
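before turning to the measurements, it is worth noting how such an entropy is actually estimated from a finite record: the readings are discretized with some bin width, the occurrences are counted, and -p log p is summed over the bins. a minimal sketch ( python; synthetic series stand in for the ldv data here, and the strong dependence of the result on the bin width is taken up below ):

```python
import numpy as np

def shannon_entropy(u, delta):
    """plug-in estimate of H = -sum_v p(v) log2 p(v) for a series u binned with width delta."""
    symbols = np.floor((u - u.min()) / delta).astype(int)     # discretize the readings
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))                            # in bits

rng = np.random.default_rng(1)
laminar = 1.0 + 0.01 * rng.normal(size=100_000)    # narrow fluctuations (instrument noise)
turbulent = 1.0 + 0.20 * rng.normal(size=100_000)  # broad fluctuations
delta = 0.05
print(shannon_entropy(laminar, delta), shannon_entropy(turbulent, delta))
# the broader (turbulent-like) distribution gives the larger entropy
```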
in pipes , as in many other shear flows , the system can in principle remain laminar to infinite .however , turbulence can be triggered by a finite perturbation , such as an obstacle .doing so results in an intermittent time series of turbulent and laminar patches .[ slug ] shows an interval of time when the flow at a point is transitioning between the two states .the higher value is the laminar state and the lower is the turbulent .the pdfs to the right of the time series confirm that even the laminar flow has some fluctuations ( either instrumental or from the setup ) , although they are smaller than the turbulent ones .while the entropy is theoretically zero for laminar flow , since there is only one value of the velocity , the presence of noise hinders a direct experimental confirmation of this .we begin by determining . binning data to make a histogram of inevitably finite experimental or numerical data is a familiar procedure . in fig .[ slug ] , very fine bin sizes were used so that to the naked eye the curve looks continuous .there is no problem using these same bin sizes for calculating , but it should not come as a surprise that the bin size choice affects .figure [ partition ] shows how this binning works .calling the bin size , a continuous stream of data is converted to discrete data or symbols , each representing a range of values . if is small , then the number of symbols , the alphabet size , is large and vice - versa . to be examined is the effect of varying on the pdf , and hence .figure [ pdfs ] shows for several values of .the general shape is the same , but the gaussian character is not clear but for small . what value of should be used ?this is not an easy question to answer .it is tempting to try the limit , as is done when calculating the kolmogorov - sinai entropy , but it should be clear that this is not an option for real data .an additional complication is noise , as highlighted by the laminar portion of fig .if is below the noise , one gets a different value from the theoretical .it turns out that it may not matter , depending on the question one is trying to answer .arbitrarily , we choose , the standard deviation for the laminar noise and the turbulent slugs respectively ( separately ) and proceed to see how information theory can distinguish between the two .this corresponds to the coarsest pdf in fig .if we use a different , the results are qualitatively the same .the pdfs of both of the noisy laminar data and the turbulent data are nearly gaussian and will nearly collapse when normalized . as a result ,the two values of are nearly identical .true noise ( often called shot noise ) is uncorrelated with itself at all finite lag times .an even stronger statement is to say that it is statistically independent of itself at different times .we can take advantage of this fact by considering finite blocks of the velocity time series : $ ] , where is the inverse mean sampling rate of the ldv .the probabilities of these blocks directly relate to the inter - relationships between its members .consider the so - called block entropy : if there is no statistical dependence between any of the data points inside the blocks ( no correlations ) then . 
by an application of jensen s inequality ,this is the maximum value : .it is lowered by any statistical dependence between the s inside of , temporal or spatial correlations .this observation can be exploited to distinguish the shot noise , seen even in laminar flows .the above definition is applied to for laminar and turbulent data from the pipe measurements .it is divided by , so that if the data is truly random , it will return to the value .as expected , the random noise in the laminar flow has no correlations , so it is hardly reduced at all by this division , while the correlations inherent in the kolmogorov picture of turbulence reduce the entropy for the turbulent flow , as shown in fig .[ slug_h ] .it is instructive to examine how depends on .this function is not well - behaved at very large .let be the total number of velocity data points in a typical run ( typically , is of order . )when the number is too small , there are too few blocks to determine the occurrence probabilities needed to evaluate , resulting in the data appearing to be _ less _ random than it really is . this effect is compounded if velocities are correlated , as in turbulence , where the correlation length can be large and a large is needed to capture the physics .figure [ slug_h ] , shows for the turbulent fluctuations in slugs and the shot noise in the laminar flow .the laminar measurements show a dependence on for the physically uninteresting reason described above that the data are not collected for a long enough time interval to accurately measure probabilities appearing in .this same phenomenon plagues the slug curve , and so researchers typically identify the inflection point as being important . if did not drop off , it would remain constant and this limit is called the entropy rate or density : there are three equivalent ways to define .the first is described above , and the second is while the third is here is the conditional entropy ( conditioned on ). the entropy density converges to a finite value because less and less information is gained on increasing by one unit .the utility of in elucidating turbulence will be explored further in the soap film experiments to be described next .the gravity - driven soap film is a mixture of soap detergent and water , with small density - matched glass spheres added for the velocity measurements . fig .[ setup ] is a diagram of the experimental setup .a laser doppler velocimeter ( ldv ) is used to measure the velocity components in the streamwise direction . by adjusting the flow rate and the channel width , several decades of reynolds number be explored .here , where is the rms velocity and is the channel width . for more experimental details ,we refer the reader to ref . . using eq .[ entropydensity ] , we go on to calculate the entropy rate for turbulence in a soap film . here , depending on the forcing , three entirely different kinds of behavior can be generated .if the soap film travels between rough walls , the perturbation they steadily create an inverse cascade of energy from smaller to larger eddies .if , instead , the film is penetrated by a comb , a row of 1 mm rods , a different type of cascade is created ; the rods create vortices which cascade downscale to the smallest sustainable size : the direct enstrophy cascade .the accompanying energy spectra are as seen in fig .[ spectra ] . 
here is wavenumber in inverse cm and has units of kinetic energy per kg per unit wave number .a final case is when the perturbations of the comb are too weak to initiate a cascade at all , resulting in a flat . besides ,another quantity is also plotted in fig .this is an alternative method for estimating the information and in the limit of infinite data the two coincide .computer memory is necessarily limited , making it useful to store and transmit information in as compact a form as possible .the total amount of memory necessary to send or store a message can be shortened by ( re-)coding the words to minimize their length .one example is using a coding scheme that assigns a small number symbols to words that appear with high frequency , as in morse code where is coded with the shortest symbol , a dot. in turbulence the probability distribution like in fig .[ pdfs ] suggests something similar might be done .remarkably , shannon proved that the limit on compression scheme is given by the entropy of the message .if the original length or size of a message is , then after the compression algorithm does its work , the new size satisfies the inequality while shannon provided no hints as to how equality might be approached , substantial work has been done in his wake to satisfy the equality in the limit .these coding schemes are called optimal , an example being the lempel - ziv algorithm . as expected , the compression ratio in fig .[ h_l ] is equal to or greater than ; it should not be smaller . binarizing the data gives a rough description , so it is not surprising that the difference between and is revealed only when the data is segmented into ten values .why does decrease as increases ? presumably this because increasing the strength of the turbulence increases the correlations between different spatial scales , and correlations always the surprise element in all observations . but a puzzle remains : when is decreased sufficiently , the flow must become laminar , in which case =0. therefore must go through a maximum - at a value that these experiments can not measure . a strange predictionis suggested by fig .[ h_l ] . if we treat as a state function in thermodynamics , then changing simply moves the system along this unique curve .consider the case of decaying turbulence , as with a comb in a soap film . near the comb the flow exhibits the enstrophy cascade and as defined heremeasurements made progressively further away from the comb have a lower energy due to the decay , and so a lower . at the same time, will increase until there is a transition , apparently continuous with respect to , from an enstrophy cascade to an inverse energy cascade .this remarkable transition has , in fact , been observed in soap films , although the mechanism has been attributed to wall shear effects .to further illustrate some of the tools of information theory in practice , we turn to a toy model of turbulence designed to mimic the essential properties of turbulence .the gledzer - ohkitani - yamada ( goy ) shell model is the simplest model with a cascade of energy , but it still yields to theoretical analysis and can easily be numerically integrated even on a laptop computer . herein lies its utility . 
as long as its limits are kept in mind , the goy model can be a useful playground for new ideas that can later be applied to full - blown navier - stokes turbulence .this approach has led to the discovery of intermittency in the helicity cascade of 3d turbulence .as kadanoff quipped , `` models are fun , and sometimes even instructive '' . in the goy model ,each variable ( shell ) corresponds to velocity fluctuations on a different spatial scale .these shells , however , live in fourier space , and so the independent variable is the wavenumber .( this is also why the velocity of the shells are complex ) .it is useful to think of this model as a truncation of the navier - stokes equation in fourier space .there are a finite number ( ) of shells and we denote by any particular shell ( from 1 to ) .following custom , is set to 22 .the wavenumbers are picked to be a fixed logarithmic distance apart : , where here and .large corresponds to small scales , while small refers to large scales .the governing set of equations is where is the forcing , , and denotes the complex conjugate. the variables , and are shell dependent but constant in time .these determine the strength of energy flow between scales .we refer the reader to the accessible review by kadanoff or the detailed review by biferale for more particulars . following the numerical scheme outlined in ref . , the viscosities are assigned the values , , , where the reynolds number .the equations are integrated using matlab .the energy spectra are shown below in fig .[ goy_spectra ] .as the viscosity decreases ( increases ) , the kolmogorov -5/3 scaling is present at higher and higher wavenumbers . what sort of questions can be address with the tools of information theory ?consider one of the most fundamental concepts in turbulence phenomenology , the universality of small scales . in the traditional picture of the turbulent cascade, energy is injected at some large scale and then transferred to smaller and smaller scales . in order for the scaling of the velocity differences to depend only on the local scale and the energy injection rate, it must `` forget '' about the large scales . or , as kadanoff puts it, the system remembers as little as it possibly can about the conditions at which energy is added or dissipated .this forgetfulness is readily quantified using information theory , in particular through the mutual information .if one scale forgets about the other , then the mutual information between them will be small ( identically zero if statistically independent ) : where is the energy at and is the spacing between adjacent shells .figure [ forcing_mutinfo ] is a plot the instantaneous mutual information between the energy each scale and at the forcing scale ( = 4 shell ) .apart from the peak at = 4 , where the mutual information is now equal to the self - information ( the entropy ) , the shared information falls off algebraically as -1/4 until the dissipative scales are reached .thus the inertial scales do not completely forget the details of the forcing , although the shared information becomes very small .indeed , the goy model is well known to exhibit the same intermittency phenomenon ( deviations from kolmogorov and violations of the universality assumption ) as fluid turbulence .while the entropy plays the most fundamental role , that of a measure , in shannon s information theory , it is the mutual information that has proved the most useful in the field of communications . 
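estimating the mutual information from data is no harder than estimating the entropy: bin the two series, build the joint histogram, and compare it with the product of the marginals. a minimal sketch ( python; equal - width binning is assumed and is not necessarily the estimator used for the figures ), including the time lag that enters in the next paragraphs:

```python
import numpy as np

def mutual_information(x, y, bins=32, lag=0):
    """histogram estimate of I(x_t ; y_{t+lag}) in bits."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

# e.g., with E[t, n] = |u_n(t)|^2 the shell energies from a goy run
# (shell numbering here is illustrative):
#   mi_n = mutual_information(E[:, 4], E[:, n])                 # with the forcing shell
#   down = mutual_information(E[:, n], E[:, n + 1], lag=tau)    # downscale, lagged
#   up   = mutual_information(E[:, n + 1], E[:, n], lag=tau)    # upscale, lagged
```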
in any causal physical relationship , or interaction , one system or subsystemwill exchange with another not only energy or momentum but also information .( indeed , it is well known that the speed of light restriction of special relativity is most properly formulated in terms of information transfer , albeit not explicitly in the shannon sense . )it is reasonable to consider whether the cascade may also be a communication channel .to do so , let us now consider the local transfer of information between adjacent shells using the mutual information . since the information source , just like the energy , must be at the forcing scale , we expect information to flow downscale .however , it is well known that there is `` backscatter '' of energy ( enstrophy ) even in the 3d energy ( 2d enstrophy ) cascade . with this in mind , we also consider the transfer of information upscale , as done in .the cascade is a dynamical process .an amount of energy at a wavenumber at time is transferred to wavenumber in a time . to reflect this ,we introduce a time lag into the mutual information . a large shell ( eddy " ) at an arbitrary time will share information forward in time to a small shell , so will be large if information is going downscale , whereas the following will be large if information is going upscale the use of this time lag is similar to its use in . subtracting the transfer upscale from that downscale , in fig .[ info_transfer ] we indeed find a net downscale transfer that is positive only in the inertial range and increases in magnitude as increases .thus there exists a companion information cascade along with the energy cascade in the goy model. it would be interesting to determine if the same is true for 3d and 2d turbulence .the presence of an information flux is not only related to intermittency , but suggests a further useful constraint on the physics of turbulence .while information theory was created to characterize messages coded in the form of words , it applies equally well to experimental observations in which the words of a text are replaced by measured quantities . in this work ,information is applied to three - dimensional turbulent flow in a long pipe , to two - dimensional turbulence in a gravity - driven soap film , and to a mathematical model that takes into account the turbulent cascade of energy from larger to smaller eddy sizes .the goal of the work is to demonstrate that information theory can illuminate physical observations , even when the equations governing the system s behavior are intractable or may not even be known . in this study, no appeal is made to the navier - stokes equations , which govern the fluid flows under observation .even when true velocity fluctuations are absent , as in laminar flows , shot noise ( also called poisson noise ) , can appear as a confounding effect .the goal of the work is to introduce the reader to the information theory approach , and to demonstrate its usefulness .durst , f. , nsal , b. : forced laminar - to - turbulent transition of pipe flows .. mech . * 560 * , 449 - 464 ( 2006 ) mullin , t. : experimental studies of transition to turbulence in a pipe .* 43*(1 ) , 1 - 24 ( 2011 ) frigg , r. : brit .sci . * 55 * , 411 ( 2004 ) adrian , r.j ., yao , c.s .: power spectra of fluid velocities measured by laser doppler velocimetry .fl . * 5*(1 ) , 17 - 28 ( 1986 ) schrmann , t. , grassberger , p. : entropy estimation of symbol sequences . chaos * 6*(3 ) , 414 - 415 ( 1996 ) ziv , j. , lempel , a. 
: compression of individual sequences via variable - rate coding .ieee trans .* 24 * , 530 ( 1978 ) chen , q. , chen , s. , eyink , g.l . ,holm , d.d .: intermittency in the joint cascade of energy and helicity .lett . * 90*(21 ) , 214503 ( 2003 ) materassi , m. , consolini , g. , smith , n. , de marco , r. : information theory analysis of cascading process in a synthetic model of fluid turbulence .entropy * 16 * , 1272 - 1286 ( 2014 ) kadanoff , l. , lohse , d. , wang , j. , benzi , r. : scaling and dissipation in the goy shell model .phys . fl . * 7*(3 ) , 617 - 629 ( 1995 ) biferale , l. : shell models of energy cascade in turbulence .. mech . * 35*(1 ) , 441 - 468 ( 2003 ) vastano , j.a . ,swinney , h.l . : information transport in spatiotemporal systems .lett . * 60 * , 1773 - 1776 ( 1988 )
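for readers who would like to reproduce plots of the kind shown in fig. [ goy_spectra ] and fig. [ forcing_mutinfo ], a sketch of a goy integrator is given below. it uses one common choice of the model ( lambda = 2 , eps = 1/2 , k_0 = 2^-4 , a steady forcing 5e-3 (1+i) on shell 4 , a small viscosity ) and a plain fixed - step rk4 scheme; these are assumptions of convenience, not the parameter values or the matlab scheme used in the paper, and the step size and transient length need care in practice.

```python
import numpy as np

# one common form of the goy shell equations (assumed here; see the review
# articles cited in the text for variants):
#   du_n/dt = i ( k_n u_{n+1} u_{n+2} - eps k_{n-1} u_{n-1} u_{n+1}
#                 - (1 - eps) k_{n-2} u_{n-1} u_{n-2} )^*  - nu k_n^2 u_n + f delta_{n,4}
Nsh, lam, k0 = 22, 2.0, 2.0 ** -4
eps, nu = 0.5, 1e-7
f = 5e-3 * (1.0 + 1.0j)
k = k0 * lam ** np.arange(1, Nsh + 1)

def rhs(u):
    v = np.zeros(Nsh + 4, dtype=complex)       # padding enforces u_{-1}=u_0=u_{N+1}=u_{N+2}=0
    v[2:-2] = u
    kp = np.zeros(Nsh + 4)
    kp[2:-2] = k
    m = np.arange(2, Nsh + 2)
    nl = (kp[m] * v[m + 1] * v[m + 2]
          - eps * kp[m - 1] * v[m - 1] * v[m + 1]
          - (1.0 - eps) * kp[m - 2] * v[m - 1] * v[m - 2])
    du = 1j * np.conj(nl) - nu * k ** 2 * u
    du[3] += f                                  # forcing on the 4th shell
    return du

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
u = 1e-3 * (rng.standard_normal(Nsh) + 1j * rng.standard_normal(Nsh)) / k
dt = 1e-4
energies = []
for step in range(200_000):                     # longer runs are needed for converged statistics
    u = rk4_step(u, dt)
    if step % 100 == 0:
        energies.append(np.abs(u) ** 2)         # shell energies E_n(t) = |u_n|^2
E = np.array(energies)   # these records can be fed to the mutual-information estimate above
```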
|
a message of any sort can be regarded as a source of information . claude e. shannon showed in the last century that information ( `` what we don't already know '' ) is equivalent to the entropy as defined in statistical mechanics . a string of experimental observations is like a succession of words ; they both convey information and can be characterized by their entropy . for the fluid flow measurements and simulations to be discussed here ( pipe and soap film flow , goy model ) , the entropy depends on controllable parameters such as the reynolds number . the information theory approach is applicable to measurements of any type , including those governed by intractable equations or systems where the governing equations are not known . this contribution is dedicated to the memory of leo kadanoff , an inspiring teacher and one of the most important scientific leaders of the last half century .
|
a central question of quantum information science is to determine the minimal time required to perform a quantum computation using a set of physical resources known to be universal for computation .our understanding of what resources are universal for computation is very well - developed , and it is known that when fast local control is available , any unitary dynamics capable of generating entanglement is universal for computation . however , the question of using these resources in a time - optimal fashion is , by comparison , understood relatively poorly .this paper considers a particular simplified setting , that of time - optimal simulation of two - qubit unitaries using a fixed interaction hamiltonian and arbitrary fast local control .arbitrary fast local control means that the evolution of the interaction hamiltonian may be interrupted by arbitrary single - qubit operations , and that these operations take no time to perform .this assumption corresponds to certain experimental setups where single - qubit operations are performed on a much faster time scale than joint operations .hammerer , vidal and cirac have given a construction for this simulation , as well as an elegant expression for the minimum achievable simulation time .the simulation scheme of hammerer _ et al . _uses , in general , an infinite number of steps to achieve time optimality .that is , the interaction hamiltonian is , in general , interrupted an infinite number of times by local operations , and the time between each interruption is infinitesimal .a simulation scheme requiring infinitely many time steps is not practical for at least two reasons .first , the original premise that local operations can be performed in zero time is no longer valid if one must perform infinitely many of them .second , the effects of noise on such a simulation will overwhelm the intended coherent dynamics .the purpose of our paper is to ask , first , whether infinitely many time steps are actually _ required _ , in general , for time - optimal simulation ?we will find that the answer is yes .indeed , we will show that the overwhelming majority of two - qubit interaction hamiltonians have this property . given this, we then address the question of determining how close to time optimal a simulation can get , given that one demands a simulation using only a finite number of time steps .the paper is structured as follows . in sec .[ sec : preliminaries ] we review results about two - qubit time - optimal simulation in the limit of fast control . in sec .[ sec : procedure ] we provide a procedure for determining which two - qubit hamiltonians require infinitesimal time steps when used in this setting . finally , in sec .[ sec : finite ] we quantify the sacrifice that must be made to time - optimality when one insists on having a simulation using a finite number of time steps .[ se : future ] concludes the paper .the purpose of this section is to introduce notation and to review some concepts and results associated with time - optimal two - qubit simulation in the limit of fast local control .we end the section with an introduction to the idea of a `` lazy '' two - qubit hamiltonian . 
up to rescaling of the ground state energy ,an arbitrary two - qubit hamiltonian can be parameterised as follows : where and are real 3-vectors , is a 3 by 3 real matrix , and is the vector of pauli operators .with respect to the computational basis ; \quad y=\left[\begin{array}{rr } 0&-i\\i&0 \end{array}\right ] ; \quad z=\left[\begin{array}{rr } 1&0\\0&-1 \end{array } \right].\ ] ] when all the entries are zero we say that the hamiltonian is _local_. otherwise we say that the hamiltonian is _nonlocal_. we say that a unitary is local if it can be expressed as a tensor product of single - qubit unitaries .otherwise we say the unitary is nonlocal .we shall henceforth restrict the single - qubit unitaries to be elements of the special unitary group ( i.e. , the group of two - by - two unitaries having unit determinant ) .a simulation scheme to approximate an arbitrary two - qubit unitary , , using a fixed hamiltonian , , and arbitrary local unitaries may , without loss of generality , be written as follows . where the parameters are nonnegative .that is , in order to achieve the desired dynamics we can apply as many times as we wish for arbitrary lengths of time , interspersed with arbitrary operations on the individual qubits .we occasionally refer to eq .( [ eq : scheme ] ) as being a _ circuit _ for .it is worth noting that the assumption that contains no term and that single - qubit unitaries are in , implies that is in .these restrictions entail no loss in generality , as they simply take advantage of the fact that the global phase of a unitary operator is irrelevant .corresponding to the simulation eq .( [ eq : scheme ] ) is the _ interaction time _ , which we define to be the total time for which the interaction hamiltonian is applied . for a given and , there are many possible circuits each giving rise to a simulation of . over this range of possible circuits for , there is a corresponding range of values for the interaction time . a circuit which achieves the minimum interaction time for a given and said to be _ time - optimal_. we define to be the minimum achievable interaction time for simulating using .reference gives a simple expression for , in the two - qubit scenario . to discuss this result , we first briefly review the canonical form of a two - qubit unitary and two - qubit hamiltonian operator .for any unitary there exists a _canonical decomposition _ , where , , , and are single - qubit special unitaries , and , , and are unique real numbers satisfying although 15 parameters are needed in order to completely specify an arbitrary two - qubit unitary , the canonical decomposition shows us that the _ nonlocal _ behaviour of can be characterised in terms of only three parameters , , and .we call these three parameters the _ canonical - form parameters _ of and the operator the _ canonical form _ of .the local parts ,, , and of the canonical decomposition do not affect the interaction time , as they can be trivially included in the first and last steps , and , of a simulation . therefore the canonical - form parameters are all we need to know about in order to calculate the minimum required interaction time for eq .( [ eq : scheme ] ) .how does one calculate the canonical - form parameters ?for completeness , we review the method given in appendix a of . 
in the followingit will be helpful to take advantage of properties of the so - called _ magic basis _ , it is known that , when expressed in the magic basis , local two - qubit special unitaries are real , and canonical - form unitaries are diagonal .this means that in the magic basis the canonical decomposition looks like , where and are real orthogonal matrices , and is diagonal .the diagonal elements of can be easily written in terms of the canonical - form parameters of : if we define then the diagonal elements of are , , , and . note that eq .( [ eq : lambdaj ] ) together with eq .( [ eq : order ] ) implies that we have that , so the eigenvalues of are just the squares of the diagonal elements of .that is , to determine the canonical - form parameters for a particular , we first calculate the eigenvalues of ( where the transpose is taken in the magic basis ) , then derive the via eq .( [ eq : eigutu ] ) , and finally solve eqs .( [ eq : lambdaj ] ) . a word of caution : the task of deriving the from the is not as trivial as it may first seem .the problem is that , in general , there is no guarantee that the values will lie in any particular branch of the logarithm function .so , naively taking the _ argument _ of will not necessarily give you .a relatively simple procedure exists to correct for this problem .however in the context of sec .[ sec : procedure ] we will see later that taking the logarithm of along the standard branch of the logarithm function will suffice to evaluate .closely related to the canonical form for a two - qubit unitary is the canonical form for a two - qubit hamiltonian .it is discussed in section v.a of , where it is referred to as the _ normal _ form . given the purely nonlocal part of a hamiltonian , when it is expressed in the form eq .( [ eq : h ] ) , the canonical form of is defined to be the unique hermitian operator that satisfies for some local unitary , where .the existence and uniqueness of this canonical form is established in , where it is shown that , and are the singular values of the matrix , and .the canonical form of a hamiltonian encapsulates the nonlocal behaviour of the evolution of for very small time steps .this can be seen as follows . from sections iii.b and v.a of we can write for some local unitaries , , , and . to order , the evolution of is given by a unitary having canonical - form parameters , and . we are almost ready to review the expression for given in . before we doso we review the concept of _ special - majorization_. special - majorization describes a particular type of partial ordering of three - vectors .its use in allows certain results to be described very succinctly .to define special - majorization , it is necessary to first introduce the idea of a _ special - ordered _ three - vector . given a real vector , the corresponding _ special - ordered _vector is defined as follows .the absolute value of the components of are given by the absolute value of the components of rearranged in nonincreasing order .that is , for the permutation that gives and are nonnegative , and that has the same sign as the product .then , is said to be special - majorized by ( denoted ) if where and are the special - ordered versions of and .let be a two - qubit hamiltonian having canonical form and let be a two - qubit unitary having canonical form .then , the minimum time required to simulate using , is given by the minimum value of such that either holds .we now introduce the the central concept of a `` lazy '' hamiltonian . 
for a given two - qubit hamiltonian , we define a function as follows : that is , is the minimum total time for which the hamiltonian must be applied , when it is being used together with arbitrary local unitaries , to simulate its own action .such a simulation would be of the form the trivial `` simulation '' having the single step has an interaction time of .thus the minimum achievable interaction time will be no greater than : under what circumstances will be less than ?it turns out that this question is very closely linked to our main question : what are the circumstances under which a time - optimal simulation will require infinitesimal time steps ? consider the class of two - qubit hamiltonians having the following property : we shall say that a hamiltonian is `` lazy '' if it is nonlocal and satisfies eq .( [ eq : lazy ] ) .[ prop : lazy ] if a hamiltonian is lazy , then the time - optimal simulation of any two - qubit nonlocal unitary using requires infinitely many time steps .* proof : * suppose there exists a time - optimal simulation scheme , of the form of eq .( [ eq : scheme ] ) , for a nonlocal using a lazy hamiltonian , where the number of time steps is finite .at least one of the must be nonzero , otherwise the simulation would be unable to produce nonlocal dynamics .for such a nonzero , consider the corresponding factor in the simulation . since is lazy, there exists a simulation for having an interaction time _ less _ than .if we substitute such a simulation for back into the simulation for , then the new simulation for now has a lesser interaction time than it did before .however this contradicts the assumption that the original simulation was time optimal .hence the premise that the original simulation had finite time steps is false .thus we conclude that any lazy hamiltonian will require infinitesimal time steps when used for the time - optimal simulation of any nonlocal two - qubit unitary . to show that a particular is lazy , it is sufficient to show that for all in some interval for any positive . to see this , note that if there is a simulation for with interaction time , then clearly there exists a simulation for with interaction time , for any positive integer .thus , implies that for all positive integers .so , if for all , then for all .which two - qubit hamiltonians are lazy ? we have seen in the previous section ( proposition [ prop : lazy ] ) that lazy two - qubit hamiltonians require infinitely many time steps if they are to be used for time - optimal control , and thus are impractical . 
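the conditions derived below are stated in terms of the parameters a , b and c of the canonical ( normal ) form of the interaction , so the first practical step for a given hamiltonian is to extract them . as reviewed in the previous section , a , b and |c| are the singular values of the 3x3 coupling matrix of the purely nonlocal part ; in one common convention the sign of c is fixed by the sign of the determinant of that matrix , which may or may not coincide with the convention used here . a minimal numerical sketch ( python / numpy , illustrative names ):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

def canonical_params(H):
    """canonical (a, b, c) of the nonlocal part of a two-qubit hamiltonian H."""
    # coupling matrix M_jk = tr[(sigma_j (x) sigma_k) H] / 4 ; local terms drop out
    M = np.array([[np.trace(np.kron(sj, sk) @ H).real / 4.0 for sk in paulis]
                  for sj in paulis])
    sv = np.linalg.svd(M, compute_uv=False)        # sorted, nonnegative
    sgn = np.sign(np.linalg.det(M)) or 1.0         # sign of c from det M (a convention)
    return sv[0], sv[1], sgn * sv[2]

# example: anisotropic exchange plus a local field on the first qubit
H = (1.0 * np.kron(X, X) + 0.4 * np.kron(Y, Y) - 0.1 * np.kron(Z, Z)
     + 0.3 * np.kron(Z, np.eye(2)))
print(canonical_params(H))    # -> approximately (1.0, 0.4, -0.1)
```

note that the local ( single - qubit ) terms are traceless against sigma_j x sigma_k and therefore drop out of M automatically, which is why the local field in the example does not affect the result.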
in this sectionwe provide a simple set of sufficient conditions for a hamiltonian to be lazy , expressed in terms of the parameters of the hamiltonian .the parameterisation in eq .( [ eq : h ] ) is more general than it needs to be for this purpose .we can simplify matters by using the fact that a hamiltonian is lazy if and only if is lazy , where and are any single - qubit unitaries .this is a consequence of the fact that has the same canonical form as .thus , without loss of generality we choose to only consider hamiltonians where the purely nonlocal part is in canonical form , that is where .recall from subsection [ sec : lazy ] that we define a hamiltonian to be lazy if over some interval .suppose we could find a taylor series expansion for in the variable .thus , then , because we can assume is small , the corresponding hamiltonian is lazy if and only if the first nonzero item in the list , , , , is negative .our procedure involves finding expressions for the first few items in that list , in terms of the parameters , and of the hamiltonian .we then find the conditions under which each expression will be negative .we find that in fact always , and so is the first term that may be negative .accordingly , in the analysis that follows we consider the behaviour of up to order , so as to arrive at some nontrivial conditions for a hamiltonian being lazy .we seek expressions for the taylor coefficients of , namely , , , , and .the expression for involves the canonical - form parameters , and of the unitary .so we first try to find expressions for the canonical - form parameters , and of the unitary . from subsection[ sec : canon ] , the canonical - form parameters can be expressed in terms of parameters , , , where with the transpose taken in the magic basis .thus , where the vector of integers accounts for the ambiguity in taking the argument , and where the down - arrow sorts in decreasing order so that we are in agreement with the ordering of the in eq .( [ eq : lambdaorder ] ) .however , since we are only interested in the behaviour over a small interval ] .now , it is possible to write where is given by the campbell - baker - hausdorf series ( for a derivation see , for example , ) + \nonumber \\ & & \frac{1}{12}(-it)^3([h^t,[h^t , h]]+[h,[h , h^t]])\nonumber\\ & & + \dots , { \label{eq : cbh}}\end{aligned}\ ] ] where =ab - ba ] we can write from theorem 1.10 of , a normal - valued operator function that can be expressed as a power series has eigenvalues which are holomorphic functions of . thus , the entries of the vector can be expressed as holomorphic functions of , and the components of are therefore continuous piecewise - holomorphic functions of , over some interval ] .that is , for small , is equal to up to order , where the components of are arranged so the zeroth taylor coefficients are in decreasing order . in the special case where some of the zero - order coefficients are equal, then we break the tie by considering the first - order coefficients , and if those are equal we consider the next highest order and so - on . inwhat follows , we will use the ordering scheme as described in this paragraph , so that we may use eq .( [ eq : lambdavec3 ] ) instead of eq .( [ eq : lambdavec2 ] ) . in this subsectionwe describe the calculation of the taylor coefficients of the components of . 
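The calculation described next was carried out symbolically in Maple. As a rough numerical cross-check, the same low-order Taylor coefficients of the components of lambda(t) can be estimated by evaluating the eigenvalues on a small grid of t values and fitting a polynomial. The sketch below makes this concrete; the magic-basis convention is the one assumed in the earlier sketch, and the fitted coefficients carry numerical error, so this is an illustration rather than a substitute for the exact symbolic computation.

import numpy as np
from scipy.linalg import expm

# Pauli matrices, two-qubit products and magic basis (same conventions as above)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
M = np.array([[1, 1j, 0, 0], [0, 0, 1j, 1],
              [0, 0, 1j, -1], [1, -1j, 0, 0]], dtype=complex) / np.sqrt(2)

def lambda_of_t(H, t):
    """Sorted half-arguments of the eigenvalues of U^T U in the magic basis, U = exp(-iHt)."""
    Um = M.conj().T @ expm(-1j * H * t) @ M
    return np.sort(np.angle(np.linalg.eigvals(Um.T @ Um)) / 2.0)[::-1]

def lambda_taylor(H, order=3, tmax=0.05, npts=25):
    """Numerically estimate low-order Taylor coefficients of each component of lambda(t).

    A finite-grid, least-squares stand-in for the exact symbolic calculation
    described in the text.  Returns coef[j, k], the k-th order coefficient of
    the j-th (sorted) component."""
    ts = np.linspace(tmax / npts, tmax, npts)
    lam = np.array([lambda_of_t(H, t) for t in ts])
    return np.array([np.polyfit(ts, lam[:, j], order)[::-1] for j in range(4)])

# Example: a Hamiltonian whose nonlocal part is in canonical form, plus local terms.
H = (1.0 * np.kron(X, X) + 0.4 * np.kron(Y, Y) + 0.1 * np.kron(Z, Z)
     + 0.6 * np.kron(Z, I2) + 0.3 * np.kron(I2, X))
coef = lambda_taylor(H)
print(np.round(coef[:, 0], 4))   # zeroth-order coefficients: numerically ~ 0, as in the text
print(np.round(coef[:, 1], 4))   # first-order coefficients of the four components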
in the magic basis , the hamiltonian , , in eq .( [ eq : hs ] ) reads the components of in the magic basis are found with the aid of the computer algebra system maple .this is straightforward in any of the standard computer algebra systems , and the specific form is both complex and not particularly illuminating , so we will not reproduce the components of here . note that each component is a third - order polynomial in .next , we find expressions for .to find , we evaluate the expression in eq .( [ eq : dettaylor ] ) using maple , including terms in the series up to at least order .then , is given by the coefficient of the term in this expression .explicit expressions for will not be given here as they are rather lengthy and not illuminating .they are polynomials in the parameters of the hamiltonian .the next step is to solve the equations , , via maple .the results are as follows . * solving yields .that is , the zero - order taylor coefficients of each component of are zero . thus , from eq .( [ eq : lambdavec3 ] ) and eq .( [ eq : lambdaj ] ) we have . * solving , , and provides no new information about the . *solving yields four solutions , one for each component in .we write them in nonincreasing order as follows : this gives . *solving gives .thus , . *solving gives four solutions to , so long as we assume .each of the four solutions for correspond to one of the four solutions to ( which were substituted in turn ) .thus , we are able to correctly associate each of the four solutions to a particular component of the ordered vector . for the sake of brevity we will not reproduce the expressions for .rather , we just provide the resulting expressions for : the special cases , and provide different ( and rather more complicated ) solutions for the compared with above .arriving at the solution in these cases requires solving up to .we will not write out these results explicitly . thus we have for in some interval $ ] . is given in eq .( [ eq : theta3 ] ) , except in the special cases noted above . from section ii.c of ,the expression for takes a simpler form when we have . in this special case , is given by the minimum value of such that where again is the vector of canonical - form parameters of and is the vector of canonical - form parameters of .this special case certainly holds for the canonical - form parameters of when is sufficiently small . in this case( [ eq : mints ] ) is equivalent to which is equivalent to thus , given eq .( [ eq : lazy - inter ] ) , we have for small , where is given by it is clear that whenever , the hamiltonian is lazy .it is also clear that is never greater than zero , because that would imply , a contradiction .we find below the solutions ( in terms of the parameters of the hamiltonian ) for ; all hamiltonians which do not belong to this solution set are guaranteed to be lazy .note , however , that the complement of this solution set does not entirely characterise the class of lazy hamiltonians , since there may be hamiltonians in this set that are lazy due to higher - order taylor coefficients that are negative .so our results may not fully characterise the set of _ all _ lazy hamiltonians .let the coefficient is zero if and only if at least one of are zero .it is straightforward to show that for , we have arrived at the main result of this paper : any hamiltonian of the form of eq .( [ eq : hs ] ) for which and for which none of the three conditions 1 . 2 . 3 . 
hold , is lazy .such hamiltonians will therefore need to be applied infinitely many times when used in a time - optimal simulation of a nonlocal two - qubit unitary .these conditions obviously make it very easy to generate examples of lazy hamiltonians , and imply that almost all two - qubit hamiltonians are lazy .note that the special cases , and yield somewhat more complicated conditions for a hamiltonian to be lazy .these conditions are complex and not very illuminating , but can be obtained using techniques similar to those described above , so we will not reproduce them here .the results of the previous section show that almost all two - qubit hamiltonians are lazy .this means that , in a simulation circuit , infinitesimal time steps must be employed to achieve time - optimality .we now show that , despite this requirement , if finite time steps are used then the corresponding sacrifice of interaction time is not very large only a small relaxation from strict time - optimality is required in order to reduce the number of time steps to something practical . to make our results concrete, we consider the case where the unitary being simulated is the controlled - not ( cnot ) gate .similar conclusions can be reached in the general case by following a similar argument to that below , and making use of the results of .it can be shown that the minimum time for simulating a cnot is where is the largest canonical - form parameter of the interaction hamiltonian .when is lazy , can we construct a simulation using a finite number of time steps such that the total interaction time is not much larger than the optimum ?such a scheme is given in , whereby an arbitrary nonlocal two - qubit unitary is applied a finite number of times together with local unitaries to simulate a cnot . using the scheme in ,if has largest canonical - form parameter such that is an integer greater than one , then the scheme can be used to simulate a cnot by applying exactly times .of course , we are interested in the case when , that is is given by the evolution of an interaction hamiltonian over a time . the total interaction time would then be from the previous section , the function can be written , for small , as thus , for small , this shows that to simulate a cnot gate by applying a lazy interaction hamiltonian in a ( finite ) number of small time - steps , then the penalty in the total interaction time , as compared with the optimum , is only of order .as an example , consider a specific interaction hamiltonian .using the results of the previous section it can easily be verified that is lazy .the graph of as a function of is shown in fig .[ fig : numeric ] .we choose a range of positive integer values of , and for each we calculate how long the corresponding time step ( ) is by numerically solving no solution to eq .( [ eq : solveit ] ) exists for .this can be seen from the fact that , which is greater than the maximum value that takes .for equal to 8 or greater , a corresponding can be found .finally , the interaction time required to simulate the cnot is calculated via eq .( [ eq : ts ] ) .the results are shown in fig .[ fig : example ] .the dashed line is the optimal time , .the results clearly show a near - optimal simulation with relatively small numbers of time steps . 
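The numerical procedure just described can be reproduced in outline. Because the exact condition in eq. (solveit) and the specific example Hamiltonian are not reproduced here, the sketch below substitutes an illustrative Hamiltonian and an assumed stand-in condition, namely that n applications of exp(-iH t_s), interleaved with local unitaries, suffice for a CNOT exactly when n times the largest canonical-form parameter of exp(-iH t_s) equals pi/4. The optimal time is likewise taken to be (pi/4) divided by the largest canonical-form parameter of H, consistent with the statement above. Treat the numbers it produces as qualitative only.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

# Pauli / magic-basis setup as in the earlier sketches
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
M = np.array([[1, 1j, 0, 0], [0, 0, 1j, 1],
              [0, 0, 1j, -1], [1, -1j, 0, 0]], dtype=complex) / np.sqrt(2)

def theta1(H, t):
    """Largest canonical-form parameter of exp(-iHt) (small-t regime, no branch correction)."""
    Um = M.conj().T @ expm(-1j * H * t) @ M
    lam = np.sort(np.angle(np.linalg.eigvals(Um.T @ Um)) / 2.0)[::-1]
    return (lam[0] + lam[1]) / 2.0

# Illustrative interaction Hamiltonian (not the specific example used in the text).
h = (1.0, 0.4, 0.1)                        # canonical parameters of the nonlocal part
H = (h[0] * np.kron(X, X) + h[1] * np.kron(Y, Y) + h[2] * np.kron(Z, Z)
     + 0.6 * np.kron(Z, I2) + 0.3 * np.kron(I2, X))
t_opt = (np.pi / 4) / h[0]                 # assumed optimal CNOT time, per the text's statement

for n in range(2, 25):
    # Stand-in for eq. (solveit): assume a CNOT is obtained when n * theta1(t_s) = pi/4.
    # Substitute the cited scheme's exact condition where it is available.
    g = lambda ts: n * theta1(H, ts) - np.pi / 4
    try:
        ts = brentq(g, 1e-6, t_opt)
    except ValueError:                     # no solution for this n (condition not bracketed)
        continue
    print(f"n = {n:2d}   total interaction time = {n * ts:.4f}   optimum = {t_opt:.4f}")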
For 20 time steps, the total interaction time is just percent greater than the optimal. We have defined a class of ``lazy'' two-qubit Hamiltonians: those which can simulate themselves faster with the aid of fast local control than with uninterrupted evolution. When a lazy Hamiltonian is used in the time-optimal simulation of any nonlocal two-qubit unitary, we have shown that the simulation will require an infinite number of steps, and thus will be impractical. We have derived a simple set of sufficient conditions enabling us to prove that a given Hamiltonian is lazy. This set of conditions implies that almost all two-qubit Hamiltonians are lazy. Finally, we have shown that only a rather small sacrifice in the simulation time needs to be made in order to use a lazy Hamiltonian in a finite-step simulation. Thanks to Chris Dawson for helpful discussions and for suggesting the name ``lazy'' Hamiltonian. Thanks also to Guifré Vidal for enlightening discussions. HLH and MAN enjoyed the hospitality of the Institute for Quantum Information at the California Institute of Technology, where part of this work was completed.
|
What is the time-optimal way of using a set of control Hamiltonians to obtain a desired interaction? Vidal, Hammerer and Cirac [Phys. Rev. Lett. *88* (2002) 237902] have obtained a set of powerful results characterizing the time-optimal simulation of a two-qubit quantum gate using a fixed interaction Hamiltonian and fast local control over the individual qubits. How practically useful are these results? We prove that there are two-qubit Hamiltonians such that time-optimal simulation _requires_ infinitely many steps of evolution, each infinitesimally small, and thus is physically impractical. A procedure is given to determine which two-qubit Hamiltonians have this property, and we show that almost all Hamiltonians do. Finally, we determine some bounds on the penalty that must be paid in the simulation time if the number of steps is fixed at a finite number, and show that the cost in simulation time is not too great.
|
increasing access to streaming data on dynamic networks drives interest in formal models to quantify stochasticity and structure of latent processes underlying observable data streams .modeling interests are coupled with concerns to monitor and adapt to changing patterns , and to signal and highlight dynamics that may reflect interesting departures from the norm .key challenges are real - time / sequential analysis and scalability : interest lies in relevant statistical models whose analyses are inherently sequential in time , as well as computationally efficient and scalable with network size and sampling rates .relevant models should also define sound statistical methods for monitoring and short - term prediction , and elucidate the complexities and dynamics in network structure in both single sample inference and multi - sample comparisons across contexts .we contribute to this area with modeling and methodological developments coupled with a motivating applied study of internet traffic in e - commerce .consistent with the primary applied goals outlined above , the main contributions of this work are as follows . * a flexible and customized statistical modeling framework for : ( i ) characterizing patterns of temporal variation in network flows at the levels of nodes and pairs of nodes ; ( ii ) model - based exploratory data analyses of network flows within and across contexts ; and ( iii ) the ability to scale to large networks . * use of these flexible , efficient models as bayesian emulators of more structured network flow models .this yields computationally efficient dissection of the dynamics to evaluate node - specific and dependence / interaction effects across nodes in a structured model context where analysis is otherwise computationally challenging in more than small networks . *formal bayesian model assessment methodology for sequential monitoring of flow patterns with the ability to signal departures from predictions in real - time and allow informed interventions as a response , and in a scalable framework . *development and validation of the above in exploratory and monitoring analyses of data from the motivating application ; here the observations are streaming counts of visitors in a set of defined web domains ( collections of webpages ) in a structurally well - defined but dynamic / evolving website .this includes evaluation of node - specific and node - pair interactions in the flow dynamics within the network over a given time period , comparisons across time periods and across days , and analyses utilizing bayesian monitoring and adaptation to respond to departures from predicted flow patterns . following a discussion ofthe motivating applied setting , network and data in section [ sec : webdata ] , we develop our class of bayesian dynamic flow models ( bdfms ) in section [ sec : bdfm ] .bdfms are flexible univariate dynamic models for series of counts representing flows into the network and between within - network node pairs . 
these ( non - stationary and non - normal ) state - space models for streaming count data rely on discrete - time gamma processes historically used in volatility modeling , and that have very recently become of interest as flexible smoothing and short - term predictive models for space - time processes underlying count data .our use of these models for within - network flows is novel and involves methodological extension to adapt and customize them to provide suitable univariate emulators of the underlying , inherent dynamic multinomial structures governing flows at each time point .this use of sets of _ decoupled _ univariate models that are then _ recoupled _ to define the actual multinomial probability processes is : ( a ) explicitly designed to be computationally efficient in on - line data analysis , scaling quadratically in the number of network nodes and enabling distributed implementation for streaming data on large networks ; ( b ) allows for diverse patterns in the dynamics of flow rates that a time - varying dirichlet - multinomial model simply annot ; and ( c ) relates to the recent development of conceptually similar ( decouple / recouple ) approaches that have advanced multivariate dynamic modeling in conditionally normal contexts .section [ sec : bdfmdata ] discusses some aspects and summaries of exploratory analysis of the network flow data from a defined maxpoint network of the fox news website .this highlights the use of customized bdfms , with one focus on exploring aspects of flow dynamics on the network across the same time periods on different days .the supporting appendix material gives additional technical details and discussion .section [ sec : dgm ] introduces a class of more highly structured _ dynamic gravity models ( dgms ) _ for network flows .these are non - normal , log - linear random - effects models with time - varying parameters for flow rate contributions of origin nodes , destination nodes and origin - destination interaction effects .our dgms extend prior work with static gravity models ( e.g. * ? ? ?* ; * ? ? ?? * ; * ? ? 
?* ) to the time - varying parameter context , defining a class of models able to represent complicated patterns of dependency structure , and their temporal variations , across nodes .importantly , we show that the flexible and computationally simple bdfm framework can be mapped one : one to that of the dgm .this underlies one further novel contribution of this work : the use of the fast , efficient bdfms as _emulators _ of dgms .this is key from the viewpoint of scalability ; fitting gravity models , even without time - varying parameters , is a challenging issue in more than modest dimensional networks , and simply infeasible in any realistic dynamic extension appropriate for scalable , on - line analysis of streaming network flow data .further , we avoid the challenging approach of defining and parametrizing time - evolution models for dgms directly , adopting the implicit structures induced in the mapping from bdfms where model specification and fitting is relatively facile .example results and highlights from the bayesian emulation analysis of dgms for the maxpoint fox news study appear in section [ sec : dgmdata ] .section [ sec : monitor ] develops methods of formal , sequential bayesian model monitoring and adaptation ( automatic intervention ) for bdfms .the aim here is to build into the fast , decoupled analysis an ability to efficiently evaluate incoming flows against model predictions so as to signal data at the level of individual nodes and node - pairs that appear discrepant , and that may signal outliers or changes in flow trends / rates beyond the norm .in addition to signaling such events and thus providing opportunity for direct intervention , we couple monitoring with the use of automatic intervention to allow the model to appropriately adapt to data at the next few time points .this bayesian testing / adaptation strategy builds on core theory underlying its use in time series forecasting contexts with dynamic linear models ( ; see also chapter 11 of ) .some of the novelty here is in the use of these ideas in non - linear , non - normal dynamic models for count data our class of bdfms .importantly , monitoring is applied in parallel across nodes and node - pairs , so is also scalable with network size .some departures from normal variation in patterns of flow may be related across nodes , and the approach has an ability to explore and evaluate this both within the bdfm model context and then following the map to more structured dgms that directly reflect interaction effects .application to the fox news network data highlights some aspects of the use of this in connection with selected network nodes .summary comments conclude the paper in section [ sec : closingcomments ] , and additional supporting material is given in the appendix .our context is traffic flow among _ domains _ ( defined sets of pages ) of the fox news website .domains include the homepage and several categories of news and consumer content such as politics , entertainment , travel , science , etc. as defined by fox news . while the domain structure is persistent , the nature of webpage definition and content within a domain is dynamic ; content changes on a daily basis ( updated at midnight ) but also more rapidly when noteworthy events occur .maxpoint places ads on pages in these fox news domains , and thus can track flows of anonymized users as they move through its pages . 
while some users can be tracked individually ,this is not the norm , and we focus in this paper on aggregated flow counts , not the trajectories of individuals . on - lineadvertisers are interested in a host of statistical issues related to traffic flow and domain content .the field has become quite sophisticated , employing complex recommender systems , sentiment analysis , text mining , and other methods .however , basic questions of understanding and characterizing traffic across domains have not received the attention they require .in particular , there is commercial value in identifying how the popularity of a site changes on short time scales , and how sites interact with respect to traffic .as example , our data showed a morning - after spike in traffic to the entertainment domain following the grammy awards , which would have been an opportunity to market concert tickets ; unusual interactions in flows between science and health may reflect new medical findings that might incline people to purchase gym memberships ; increased flow rates from homepage to science that contradict the general stable or somewhat decreasing trends of overall traffic may indicate specific opportunities to target scientific product consumers .as pages within a domain are updated , questions arise as to whether browsing traffic patterns change as a result . to address this statistically ,we need to understand stochastic variation in past browser traffic so that comparisons can be made of incoming traffic streams against recent statistical norms " , and significant deviations from short - term predictions based on current dynamic patterns can be identified .companies that have flow models which enable them to predict how traffic will change as content changes , that are able to sensitively characterize and monitor patterns of change in interactions as well as overall rates , and that can signal anomalous changes to provide opportunities for intervention and actions , will be advantaged .they can recognize opportunities more quickly , and for example may then adapt bidding strategies for relevant keywords to be dynamically calibrated to expected revenue .the data set contains fox news website visit data during 09:0010:00am and 01:0002:00pm est on each of six days , february 23rd-24th , march 2nd-3rd and 9th-10th , 2015 . these days are mondays or tuesdays .since the fox news website structure changes often , with new pages being added and old pages being archived , the analysis aggregates webpages into groups specified by the host domain www.foxnews.com , and the set of first url paths after the host domain , including examples such as e.g. www.foxnews.com/politics/ * and www.foxnews.com / us/*. these classify all pages into 22 domains : homepage , politics , us , opinion , entertainment , technology , science , health , travel , leisure , world news , sports , shows , weather , category , latino , story , on - air , video , national news , magazine , and other .the data set includes anonymized visitors from nearly every time zone on the planet . 
in order to study time - of - day effects , such as , say , a tendency to browse news in the morning and entertainment in the afternoon, it is necessary to stratify by time zone .here we focus on users in the eastern north america time zone ; those are the most numerous , and the two time windows used in this study were chosen with the expectation that different browsing patterns might occur at those times .aggregate data give time series of counts in half - minute intervals , i.e. , time points of domain occupancy , flows from each domain , and flows into each domain .in each half minute interval , if the record shows the same user in two or more domains , then each of her / his moves is counted in the flow data into each of these domains .if the user refreshes the same page multiple times spanning more than one time interval , then s / he is counted as simply staying in that domain ; this can be done as the web browsing tool performs automatic refresh .importantly , if a user stays in the same domain for more than five minutes , s / he is declared as no longer active and counted as leaving the fox news site . if such a user later appears in one domain , s / he counts as inflow from outside the fox news site . finally ,we can not track user information either before or after the one - hour observation window ; we thus restrict attention to the period 09:0509:55am and 01:0501:55pm , consisting of uncensored flows , using the first 5 minutes of data informally to define priors .thus the series runs from with in each time period .aggregation at half - minute intervals reflects a balance of interests in fine - time scale modeling against information content of data with low flow rates . for domain pairs with low flow ratesit is sometimes too low , leading to excessive volatility in the bdfm and noisy parameter estimates in the dgm .no single window is good for all node pairs at all times , but preliminary exploration found this to be a good compromise .the decision that a user is inactive after five minutes is based on previous research on how users access on - line articles .few people read more than the first paragraph of a news story , and at the times of day for which data were collected , interruptions are likely .investigations of on - line session length have focused on dynamics of search engine use ( e.g. * ? ? ?* ; * ? ? 
?* and references therein ) in settings where the full breadth of user browsing behavior is visible .using additional context such as change in search engine query topic , these studies have derived average user session lengths between 560minutes .we employed a session length limit on the lower end of this spectrum due to the fact that a user is likely to only spend a fraction of a larger total browsing session on fox news .referring to sites external to the fox news website as node 0 , we have network nodes ; the actual domains and external " , indexed as at each time define as the flow count from node to , including the inflows and outflows relative to the external domain .also , denote the number of occupants of node at the end of the period by a random quantity at the start of the period , but then known and given by the sum of inflows minus outflows at the end of the period .figure [ fig : networkmovie ] provides a visualization of data at the first time interval , and the schematic in figure [ fig : networkcartoon ] reflects our notation .if has a gamma distribution with shape and scale , we write , noting that and if has a beta distribution with p.d.f .proportional to for we write if the of counts has a multinomial distribution with total counts and probability vector we write for any series of random quantities over we use the succinct notation for any indices .,width=480 ] & & & & & & * + + + [ o][f-]{1 } \ar@{.}[d ] & \\ & & & * + + + [ o][f-]{i } \ar@/^/@{-->}^*+{x_{i1t}}[rrru ] \ar@{-->}^*+{x_{ijt}}[rrr ] \ar@/_/@{-->}^*+{x_{iit}}[rrrd ] \ar@/_0pc/@{-->}_*+{x_{i0t}}[llld ] & & & * + + + [ o][f-]{j } \ar@{.}[d ] & \\ \textrm{outflow } & & & & & & * + + + [ o][f-]{i } & \\ } \ ] ]bdfms are based on gamma - beta random walks that have been key to stochastic volatility modeling for more than 30 years ( * ? ? ?* ; * ? ? ?* section 10.8 ) . based initially on bayesian discount concepts related to exponential smoothing for volatilities and with steady " evolutions in non - gaussian dynamic models , these _ gamma - beta discount models _ yield closed form bayesian analyses .they have seen some though limited use as models for rates underling time series of conditionally poisson counts , which is a starting point here .we extend the applicability of the basic model in a number of ways , with novel model forms customized to the network flow context and that define flexible models for conditional multinomial data with time - varying probabilities that go beyond prior use .we also heavily utilize full bayesian posterior simulation of posteriors for latent rate processes , extending the use from the normal / linear dynamic volatility modeling context ( * ? ? ?* chapter 4 ) .the essentials of gamma - beta discount models follow ; see appendix [ app : betagammadm ] for further details and discussion . 
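Before turning to those essentials, the construction of the flow counts described in this section can be summarized in a short sketch. The column names and several simplifications below (no self-flows for users who stay in a domain without generating new records, and no explicit outflow to the external node when a user times out) are illustrative assumptions, not MaxPoint's actual schema or processing pipeline.

import numpy as np
import pandas as pd

def flow_counts(events, domains, bin_seconds=30, timeout_seconds=300):
    """Aggregate page-view records into origin-destination flow counts x[i, j, t].

    `events` is assumed to be a DataFrame with columns 'time' (datetime),
    'user' and 'domain'.  Node 0 is the external node; the domains are
    numbered 1..I in the order given."""
    idx = {d: k + 1 for k, d in enumerate(domains)}
    ev = events.sort_values('time').copy()
    t0 = ev['time'].min()
    ev['bin'] = ((ev['time'] - t0).dt.total_seconds() // bin_seconds).astype(int)
    T = int(ev['bin'].max()) + 1
    n_nodes = len(domains) + 1
    x = np.zeros((n_nodes, n_nodes, T), dtype=int)

    last_seen = {}                                   # user -> (bin, node) of last record
    for user, dom, b in ev[['user', 'domain', 'bin']].itertuples(index=False):
        node = idx[dom]
        prev = last_seen.get(user)
        if prev is None or (b - prev[0]) * bin_seconds > timeout_seconds:
            x[0, node, b] += 1                       # entry (or re-entry) from outside
        elif prev[0] != b or prev[1] != node:
            x[prev[1], node, b] += 1                 # move between domains, or stay across bins
        last_seen[user] = (b, node)
    return x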
in generic notation , suppose that is a time series with conditionally independently over here is a latent level process and a scaling factor known at time .the process evolves via the markov model where is a specified _ discount factor _ and is a known function of and independent innovations drive the evolution .the beta distributions imply : ( i ) hence this is a multiplicative random walk model , or steady " evolution ; ( ii ) a lower value of leads to a more diffuse distribution for , and hence increased uncertainty about and adaptability to changing rates over time ; a value closer to one indicates a steady , stable evolution .the model is structured to ensure full conjugacy in the forward filtering / bayesian sequential learning over time , and in retrospective analysis .this is reflected in some key summaries , as follows and with further details in appendix [ app : betagammadm ] . here is a synthetic notation for initial information . *_ forward filtering ( ff ) : _ at any time both the prior and the posterior for the current " latent level are gamma distributions , with trivially computed parameters that are updated as evolves * _ one - step forecasts : _ the one - step ahead forecast distribution made at time to predict time is generalized negative binomial with p.d.f . in . on observing the p.d.fis trivially evaluated to feed into computation of model marginal likelihoods ( mmls , as in appendix ) for assessment .* _ backward sampling : _ at end time recursive simulation generates _ time trajectories _ of the rate process under its full posterior the computations are trivial , as detailed in section [ sec : bs ] .the model can be defined by any sequence of specified discount factors a constant value over time defines a global smoothing rate ; values closer to 1 constrain the stochastic innovation and hence the change from to ; smaller discount factor values lead to greater random changes in these poisson levels .intervention to specify smaller discount factors at some time points , to reflect or anticipate higher levels of dynamic variation at those times , are sometimes relevant . in our network flow modelsbelow , we customize the specification of the sequence of discount factor to address issues that arise in cases of low flow levels .that extension of discount - based modeling defines the as time - varying functions of an underlying base discount rate , and the latter are then evaluated using mml measures .this model provides the basis for flows into network nodes ; we adapt and generalize it to define components of flexible multinomial dynamic models for flows between nodes in a network . with notation for inflows as in figure[ fig : networkcartoon ] , we adopt the general model of section [ sec : gammabetadm ] by adding suffices for network nodes and setting the poisson mean scaling factors to 1 .we now customize this model via specification of discount factor sequences . 
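A minimal forward-filtering sketch of this generic gamma-beta discount model is given below, before turning to that customization. The conjugate update and the negative binomial forecast are the standard ones for this model class; the information-dependent discount shown is one plausible functional form consistent with the description that follows (close to the baseline when the shape parameter is large, close to one when information is scarce), and is an assumption rather than the paper's exact specification.

import numpy as np
from scipy.stats import nbinom

def gamma_beta_filter(y, mu=None, delta_base=0.95, k=0.5, r0=1.0, c0=1.0):
    """Forward filtering for a Poisson count series under a gamma-beta discount model.

    y  : observed counts y_1..y_T;  mu : known scale factors (ones for inflow models).
    The time-t posterior for the latent level is Gamma(shape=r[t], rate=c[t]);
    the one-step forecast is (generalized) negative binomial."""
    T = len(y)
    mu = np.ones(T) if mu is None else np.asarray(mu, dtype=float)
    r = np.empty(T + 1); c = np.empty(T + 1)
    r[0], c[0] = r0, c0
    logpred = np.empty(T)                                  # one-step log predictive densities
    for t in range(T):
        delta_t = 1.0 - (1.0 - delta_base) * (1.0 - np.exp(-k * r[t]))   # assumed form
        a, b = delta_t * r[t], delta_t * c[t]              # evolved (time-t prior) parameters
        logpred[t] = nbinom.logpmf(y[t], a, b / (b + mu[t]))
        r[t + 1], c[t + 1] = a + y[t], b + mu[t]           # conjugate update
    return r[1:], c[1:], logpred                           # on-line posteriors and MML contributions

# Example: a slowly drifting rate, filtered with the defaults.
rng = np.random.default_rng(0)
phi = 20.0 * np.cumprod(np.exp(0.02 * rng.standard_normal(100)))
y = rng.poisson(phi)
r, c, logpred = gamma_beta_filter(y)
print(np.round((r / c)[-5:], 1), round(float(logpred.sum()), 1))   # on-line means, total log MML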
atany node the time inflow to node is independently across nodes and the latent levels follow node - specific gamma - beta discount models with discount factor at time the time update / evolve steps are : ( i ) the time prior updates to the posterior with and this then evolves to the time prior , and so on .specifying discount factors relates to the information content of gamma distributions as measured by the shape parameters ; evolution each time point reduces this by discount factor , the latter representing a per - time - step decay of information induced by the stochastic evolution .our specification of discount rates is motivated by the following considerations .first , baseline levels of variation on are likely to be node specific , so that each node should have its own baseline discount rate to be assessed in data analysis .second , in cases of zero flow rates for a period of time , is continually discounted and shrinks towards 0 while is incremented by 1 at each update step . that is , discounting is not balanced by the prior - posterior update and the generates more and more diffuse posteriors favoring lower and lower ideally , the posterior and prior should be very similar in cases of 0 flows , and we address this with the specification at each where is a _ constant baseline discount factor _ for node and a specified constant .the aim is that be close to the baseline unless information content is very low ; this our applied studies take ( so that be close to within 10% of the baseline when ) .then in cases of high information content , the effective is close to otherwise , will be closer to 1 in cases of low information content , so appropriately limiting the decay of information in such cases .node - specific mml measures that feed into model assessment to aid in selection of the baseline discount factors these measures of short - term predictive fit of the models can also be monitored sequentially over time for on - line tracking of model performance .this ability to flag anomalous data at one node or any subset of nodes is key to commercial application of the analysis , since that corresponds to new opportunities or new threats ( e.g. , offer concert tickets after the grammy awards , but not on david bowie s obituary ) .this view on anomaly detection is extended below , in section [ sec : monitor ] , using bayesian model monitoring concepts .one aspect of this is the ability to signal a need to _ temporarily _ reduce the value of the discount factor for a node at a time of degradation of predictive performance that may relate to changes in that are larger than the standard " baseline discount factor value determines .transitions from any node at time are inherently multinomial with time - varying transition probabilities .to build flexible and scalable models for dynamics and dependencies in transition probability vectors is a challenge , with computational issues for even simple models quickly dominating .novel models here adapt and extend the univariate poisson / gamma - beta random walk models to enable flexibility in modeling node - pair specific effects as they vary over time as well as scalability . considering flows from node node at time and using notation as in figure [ fig : networkcartoon ] , the core model is where the current node occupancy level is and is the -vector of transition probabilities ( including the external " node i.e. 
, leaving the fox news network at we structure _ decoupled _ bdfms in terms of positive flow rates underlying each specifically , independently , with independent gamma - beta evolutions for each latent level these bdfms for each node pair can be customized with node - pair specific discount factors , allowing greater or lesser degrees of variation by node pair .the set of models for elements of implies a dynamic model for the vector of transition probabilities having elements independence across nodes enables scaling , as the analyses can then be decoupled and run in parallel for the and then recoupled to infer the dependencies in patterns of changes in the are recovered in evaluating the posterior distributions and , as in section [ sec : dgm ] , in using this set of models to emulate gravity models that explicitly characterize interdependencies .a key and critical component of the model is the definition of the scaling factors in . in decoupling the multinomial flows from node into parallel poisson models for nodes the inherent dependency on total occupancy of node restore this in using this specific definition of scaling factors to explicitly correct for occupancy changes .this recognizes that the decoupled , scaled models are not predictive of overall occupancy rather , they are decoupled , tractable models that are relevant to tracking and short - term prediction of _ relative _ occupancy levels through the implied multinomial probabilities .the relevance of this scaling factor is most evident in cases of major changes in occupancy , when an abrupt increase in node occupancy at time relative to its prior value will lead to increased flows to other nodes at time even if the underlying transition probabilities are essentially constant .in such a case , the scaling factor will encourage the appropriate view that the are stable .then , inferences on the directly yield inferences on the transition probabilities of interest : theoretically , the conditional multinomial probabilities are simply not impacted by the scaling factors , i.e. , the theory and analysis details of section [ sec : gammabetadm ] and appendix [ app : betagammadm ] now apply with data and latent flow levels indexed by origin node and destination node as with inflow models , we have flexibility to choose discount factors specific to context . following the discussion of section [ sec : inflows ] , we specify at each where is a _ constant baseline discount factor _ for node pair and a specified constant .again this is later overlaid with intervention to adjust discount factor values as needed , based on sequential monitoring of flow patterns and using the mml measures now of course for each node pair as one formal guide to model adequacy . 
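The decoupled filtering, the recoupling to transition probabilities, and the one-step-ahead predictive simulation described next can be sketched together as follows. Taking the lagged occupancy as the common Poisson scale factor is an assumption standing in for the exact scaling definition above, and a single constant discount factor is used for simplicity; retrospective trajectories would additionally require backward sampling, which is omitted here.

import numpy as np

def transition_model(X, occupancy, delta=0.95, n_samples=2000, seed=1):
    """Decoupled gamma-beta filters for the flows out of one origin node, recoupled
    into transition probabilities, plus a one-step-ahead predictive simulation.

    X          : (J, T) array of counts x_{ijt} for a fixed origin i and all J destinations
    occupancy  : length-T vector used as the Poisson scale factor for every destination
                 (assumed to be the lagged occupancy of the origin node)."""
    J, T = X.shape
    mu = np.maximum(np.asarray(occupancy, dtype=float), 1e-6)
    r = np.ones((J, T + 1)); c = np.ones((J, T + 1))          # Gamma(shape, rate), vague start
    for t in range(T):                                        # decoupled, parallel updates
        a, b = delta * r[:, t], delta * c[:, t]               # evolved priors
        r[:, t + 1], c[:, t + 1] = a + X[:, t], b + mu[t]     # conjugate updates
    rng = np.random.default_rng(seed)
    phi = rng.gamma(r[:, None, 1:], 1.0 / c[:, None, 1:],
                    size=(J, n_samples, T))                   # posterior samples of the rates
    theta = phi / phi.sum(axis=0, keepdims=True)              # recouple: transition probabilities

    # One-step-ahead predictive simulation: evolve, normalise, then draw the multinomial
    # (here using the last available occupancy as the multinomial total).
    a_next, b_next = delta * r[:, -1], delta * c[:, -1]
    phi_next = rng.gamma(a_next, 1.0 / b_next, size=(n_samples, J))
    theta_next = phi_next / phi_next.sum(axis=1, keepdims=True)
    flows_next = np.array([rng.multinomial(int(round(mu[-1])), th) for th in theta_next])
    return theta.mean(axis=1), theta_next, flows_next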
in sequential analysis of transitions ,the node - pair specific models generate full joint predictions one - step ahead ( or more , if desired ) for the theoretically exact set of multivariate flow vectors across all nodes .the one - step forecast distribution does not have an analytic closed form , but is trivially simulated to define forecasts .that is : ( i ) simulate directly from each of the gamma - beta evolutions for the ( ii ) transform sampled values to the conditional multinomial probabilities ; then ( iii ) sample the multinomial at these parameter values .similarly , for both on - line and retrospective inference about transition probabilities , samples from posteriors for the again simply transform to the required probability scale .the analysis was applied separately to data from each of the six days .we focus on the am period of february 23rd 2015 for initial summaries , and then make some comparisons across days .priors for the inflow rates are with and where is from inflows in the 5minutes _ prior _ to the start of model analysis at the priors for the node - node flows are , similarly , with and with where each is a point estimate based on that prior period . with priors are relatively diffuse , and for most nodes one or a few initial observations `` wash out '' the effect of the prior .while some node - node pairs have very low counts , they all see traffic that then updates shape parameters over a few early periods ; some network links have counts in the thousands , while the average is around 4050 across the time period . for the inflows to fox news nodes for the february 23rd 2015 am period ., width=432 ] the prior for each baseline discount factor is a smoothness encouraging truncated to ( 0.9,0.999 ) ; reanalysis using uniform priors on this range led to little in the way of noticeable differences , as the marginal likelihoods at dominate . running models in parallel across discrete grids of each discount factors and evaluating mml measures at marginal likelihoods that are mapped to posteriors .figure [ fig : dfall ] plots posteriors for the in the inflow models .some nodes exhibit higher volatility in flows than others , consistent with smaller discount factors ; these are particularly associated with domains with high flow counts ( e.g. 
, inflows to homepage ) .constraining the range to higher values is consistent with the expectations to generally smooth " trajectories for the processes , which turns out to be consistent with the majority of flows ; allowing smaller values has little or no impact on much of the reported analysis .however , for some node flows with patterns of higher levels of change and variation , lower discount factors would lead to posteriors that suggest more volatile trajectories in a few cases .these are better addressed in a model that uses a higher discount factor as standard , but then with interventions to allow increased uncertainty and adaptation in response to discrepant observations ( whether single or in batches ) ; our developments in section [ sec : monitor ] are heavily motivated by this .the model is , of course , open to specification of whatever priors a user may regard as relevant or wish to explore .summary inferences on selected model components are reported from models with discount factors set at posterior modes .figure [ fig : firsteg ] gives one example of learning about inflow the leisure domain .this exemplifies sequential learning about the flow rate together with its retrospectively updated trajectory and a visual assessment of one - step ahead forecasting aligned with the data .( homepage ) to ( politics ) . _data ( plus signs ) with one - step ahead forecast means and 95% intervals ._ trajectory of mean and 95% intervals from on - line posteriors . _lower : _ revised trajectory under , width=364 ] ( homepage ) to ( politics ) . _data ( plus signs ) with one - step ahead forecast means and 95% intervals ._ trajectory of mean and 95% intervals from on - line posteriors . _lower : _ revised trajectory under , width=364 ] a similar display in figure [ fig : phihomepol ] is an example of flow between two network nodes : from homepage to politics .as with figure [ fig : firsteg ] , we note the concordance of incoming data with the one - step predictive intervals as they are successively revised in the forward analysis , and the enhanced smoothing of trajectories in the retrospective analysis .pointwise intervals of the trajectories of transition probabilities ( homepage entertainment ) for each of the six mornings ., width=480 ] pointwise intervals of the trajectories of transition probabilities ( homepage entertainment ) for each of the six mornings ., width=480 ] on transition probabilities , it is natural to look at examples involving the homepage , the most popular single domain on fox news .for example , figure [ fig : trans2 ] shows that the probability of people leaving the fox news website from homepage increases in this 50 minute window for each of the six mornings .note that there are significant day effects ; e.g. , visitors were more likely to leave fox news on the morning of march 9th compared to the other mornings .more detailed insights based on the gravity model analysis are noted in section [ sec : dgmdata ] .similar figures ( not shown ) highlight patterns and day - to - day differences for other transitions .for example , most homepage visitors stay on homepage for a while and have a high probability of exiting the fox news site entirely from that page . across all six days ,the probability of staying on the homepage each time interval generally decreases over the course of the 50 minute morning period . as an illustration of a more detailed analysis of a very specific flow ,consider figure [ fig : trans5 ] . 
among the visitors who leave the homepage for other fox news domains ,entertainment is generally the most popular destination . across all mornings ,we see major differences in trajectories of transition probabilities ; in particular february 23rd and 24th have higher rates than the other days .it is noteworthy that the academy awards ceremony was held on the night of february 22nd , which may have driven this uptick .additional summaries of inference on trajectories of selected and processes appear in the mapping to dynamic gravity models ; see the upper row of frames in figures [ fig : gmhome_sci ] and [ fig : gmhome_ent ] , for examples .a more intricate , multivariate dynamic model involves node - specific main effects and node - node interaction terms , representing dependencies in patterns of flows linked to inflow / outflow and node - node relationships . for each within - network node andall the model is with : ( i ) a baseline process ; ( ii ) a node main effect process adjusting the baseline intensity of flows the origin or outflow parameter process for node ; ( iii ) a node main effect process representing the additional attractiveness " of node the destination or inflow parameter process for node ; and ( iv ) an interaction term , representing the directional affinity " of node for over time relative to the combined contributions of baseline and main effects .models of this and more elaborate forms have seen some use in transportation studies ( e.g. * ? ? ?* ; * ? ? ?* ) where the interaction term is typically structured as a function of physical distance between nodes ; there the gravity model " terminology relates to the role of small distances in defining large interactions and hence attraction " of traffic from node to node we refer to the interactions as affinities " . in dissecting the network flow activity , we are most interested in questions about which affinities are greater than one ( attracts flow from over and above the main effects ) , or less than one ( is relatively unattractive to ) , or not significantly different to one ( neutral ) .critically , affinities are time - varying , and any identified patterns of variation over time may be related to interpretable events or network changes . in a first fully bayesian approach to gravity models using mcmc methods , developed such models in the static case ; i.e. , with no dynamics in the model parameters , and applied the model to a large transportation flow network . 
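The displayed equation for the model did not survive extraction above. Written out, the decomposition being described is the standard multiplicative gravity-model form; the version below is a reconstruction consistent with the zero-sum identification constraints stated in the next passage, and the original display should be consulted where available:

\phi_{ijt} \;=\; \mu_t \, \alpha_{it} \, \beta_{jt} \, \gamma_{ijt},
\qquad i = 1{:}I, \quad j = 0{:}I, \quad t = 1{:}T,

subject to

\sum_{i=1}^{I} \log\alpha_{it} = 0, \qquad
\sum_{j=0}^{I} \log\beta_{jt} = 0, \qquad
\sum_{i} \log\gamma_{ijt} \;=\; \sum_{j} \log\gamma_{ijt} \;=\; 0
\quad \text{for all } i, j, t .

Under these constraints the effects are recovered from f_{ijt} = \log\phi_{ijt} by the usual two-way averages, which is what the mapping listed below computes.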
explored a similar approach in studies of patient flows to hospitals .analysis via mcmc is computationally very demanding , and the burden increases quadratically in , and inherently non - sequentially .more recently , studied such models for spread of infectious diseases , and used gaussian process approximations for approximate inference rather than full mcmc or other computational methods .we share the spirit of this latter work , in using the simply and efficiently implemented bdfm as a path to fitting the gravity model now extended to time - varying effect parameter processes .however , we do not constrain the affinity parameters as a function of covariates of any kind , simply treating the dgm as a dynamic , random effects model .this leads to a _parameter mapping between the bdfm to the dgm ; as a result , the trivially generated simulations from the full posterior of the bdfm are mapped directly to full posterior samples from the dgm , providing immediate access to inference on main effect and affinity processes over time .given a set of flow rates for all at each time the mapping to dgm parameters in requires aliasing constraints to match dimensions .we adopt the common zero - sum constraint on logged values .define , , and using the notation to denote summation over the range of identified indices , constrain via , for all we then have a bijective map between bdfm and dgm parameters ; given the we can directly compute implied , identified dgm parameters .the dgm is saturated there are exactly as many parameters in the dgm as there are observations in the data set .however , the emulating bdfm enforces smoothness over time in parameter process trajectories , and this acts to substantially reduce the effective model dimension one key attribute of the emulation approach .note that this overall strategy inherently adopts the view that temporal structure for dgm parameter processes are those induced by the mapping from bdfms . in current form ,the evolution of latent rate processes in the latter are random walks with levels of variation defined by rate - specific discount factor sequences , so the evolutions for the induced dgm parameters will be more elaborate but still basically of random walk form . define for each at each time then at each time compute the following in the order given : * the baseline level where ; * for each the origin node main effect where ; * for each the destination node main effect where ; * for each and the affinity where in our data analysis below , we apply this to all simulated from the full posterior analysis under the bdfm to map to posteriors for the dgm parameter processes .a technical problem with this mapping arises in cases of sparse flows , i.e. , when multiple counts are zero or very small for multiple node pairs .in such cases the posterior for favors very small values and the log transforms are large and negative , which unduly impacts the resulting overall mean and/or origin or destination means . while one can imagine model extensions to address this , at a practical level it suffices to adjust the mapping as is typically done in related problems of log - linear models of contingency tables with structural zeros .this is implemented by simply restricting the summations in identifiability constraints to node pairs for which , for some small and adjusting divisors to count the numbers of terms in each summation . for our study, we use . with this adjustment ,very small appropriately lead to very small affinities , i.e. 
, small rates underlying very sparse flows .we first apply the gravity model decomposition to the morning data on february 23rd . following bdfm analysis as in section [ sec : bdfmdata ], posterior simulations ( 5,000 monte carlo samples ) of flow rates are mapped to posterior samples from the corresponding dynamic gravity model ..,width=624 ] .,width=624 ] inference on outflow ( origin ) parameters and inflow ( destination ) parameters for six chosen nodes are shown in figures [ fig : alphat ] and [ fig : betat ] .the posteriors for origin effects show that large - scale domains , such homepage , have higher values of , while domains with low or zero flows , such as science , naturally have lower values . across all domains ,subsets show similar patterns but there are also major differences apparent .in particular , the posterior analysis shows that several domains , such as homepage and entertainment , are substantially higher than the average as both origin and destination nodes .several nodes , such as opinion , have above ( or below ) average destination effects but origin effects about the norm .these distinctions between the two effects show the roles of and as representing common factors across the origin and destination of the flows node - by - node .they are also naturally related over time in most domains ; this captures the effect of the overall scale , or popularity , of some domains such as homepage and entertainment here , while also showing up in clearly similar patterns over time in less active domains , such as world .further , while some trajectories are relatively stable over time , others show marked changes in the node - specific effects over the morning period .opinion , for example , has a roughly constant and above - average inflow effect for much of the morning , but it decays toward the end of the morning period ; world starts off at a level slightly above the norm in both inflow and outflow effects , which both then increase substantially as the morning progresses ; science , in contrast , has roughly constant effects across the full time period . for the affinity effects , we have parameters ( one for each pair of nodes except the unobserved external external flow ) at each time .the number of effects becomes massive for large . even in this example for illustration , ,the number of for fixed is 528 , so it is impossible to examine all the results in this paper .for this reason , we pick up a few affinity effects that may interest readers in terms of interpretation . for affinity with retrospective posterior c.d.f , we introduce the _ bayesian credible value _ as a simple numerical measure of deviation from the neutral " value of 1 .this highlights the practical relevance of the affinity effect and its changes over time .traffic from homepage to other domains are central to understanding normal patterns of variation as homepage is usually the landing page for visitors . 
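Before looking at where homepage visitors tend to go next, the parameter mapping described earlier in this section can be made concrete. The sketch below applies the two-way decomposition on the log scale to posterior samples of the flow rates, excludes near-zero rates from the identification averages in the spirit of the structural-zero adjustment above (the threshold value is this paper's choice for these data and should be treated as context dependent), and also computes the Bayesian credible value used in the figures.

import numpy as np

def bdfm_to_dgm(phi, epsilon=0.02):
    """Map posterior samples of BDFM flow rates to dynamic gravity model parameters.

    phi : array of shape (S, I, J, T) of sampled rates phi_{ijt}, with S Monte Carlo
          samples, I origin nodes and J destination nodes (external node included)."""
    f = np.log(phi)
    f_use = np.where(phi >= epsilon, f, np.nan)            # pairs contributing to the averages
    fbar_all = np.nanmean(f_use, axis=(1, 2), keepdims=True)
    fbar_i = np.nanmean(f_use, axis=2, keepdims=True)      # per-origin averages
    fbar_j = np.nanmean(f_use, axis=1, keepdims=True)      # per-destination averages
    mu = np.exp(fbar_all)                                  # baseline process samples
    alpha = np.exp(fbar_i - fbar_all)                      # origin (outflow) main effects
    beta = np.exp(fbar_j - fbar_all)                       # destination (inflow) main effects
    gamma_ = np.exp(f - fbar_i - fbar_j + fbar_all)        # affinity / interaction effects
    return mu, alpha, beta, gamma_

def credible_value(gamma_samples):
    """Bayesian credible value: posterior probability that an affinity exceeds 1,
    computed pointwise in time from the Monte Carlo samples (axis 0 is the sample axis)."""
    return (gamma_samples > 1.0).mean(axis=0)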
where users tend to go next , and how the flow patterns begin to evolve from homepage generally , is one key interest from the advertisement and marketing viewpoints .figure [ fig : gmhome_sci ] displays some relevant summaries for flows from homepage to science .first , overall counts and also relative frequencies of transitions tend to increase over this morning period .the bdfm appropriately tracks these slowly evolving trends ( while not , of course , predicting them ) .second , origin / outflow and destination / inflow parameter processes are relatively constant over time , although the former exhibits a slight increase in the later morning period . of more interest and highlighting the flexibility and incisiveness of the bdfm emulationmap the affinity process is clearly time - varying .initially at reduced levels fox news visitors tend to be much less likely to go to science from homepage during the first half or more of the morning period this trends upwards to be basically negligible in impact after about 35 - 40minutes . note that , while the overall outflow and inflow processes are roughly constant over time for this par of nodes , the raw data indicate continued growth in traffic towards the end of the time period , and thus the model responds by inferring the upward drift in the interaction / affinity process .the pattern over time of the affinity effect also relates to dynamic sparsity . "while we do not have models that are explicitly exploring sparsity in main or interaction effects , the emulation approach has enabled the identification of an interaction / affinity process that is relevant for some period of time but then , practically , irrelevant for others .in contrast to our easy and scalable methodology , other more formal bayesian approaches to dynamic sparsity modeling ( e.g. * ? ? ? * ; * ? ? ?* ; * ? ? ?* ; * ? ? ?* ) are difficult or impossible to reliably implement in a sequential context , being reliant on intense mcmc methods for batch data processing .we do note , however , that we are not formally testing consistency of posteriors for affinities against the value of 1 , but simply exploring the trajectories to generate insights .more formal assessment is available , if desired , by considering differences in affinities over time from the full posterior sample .= homepage =science .as in figures [ fig : alphat ] and [ fig : betat ] , the + symbols indicate empirical values ( with cases of 0 occupancy leading to missing values ) ._ upper left : _ posterior trajectory for the latent flow level process with raw counts ( crosses ) . _upper right : _ posterior trajectory for the transition probability process with raw frequencies ( crosses ) ._ center left : _ posterior trajectory for the homepage origin ( outflow ) effect process _ center right : _ posterior trajectory for the science destination ( inflow ) effect process _ lower left : _ posterior trajectory for the homepage : science affinity / interaction process . _ lower right : _ corresponding trajectories of bayesian credible values assessing support for near 1 ., width=624 ] = homepage =entertainment , with details as in figure [ fig : gmhome_sci ] ., width=624 ] a second example , chosen to represent node pairs with high inflow and outflow levels , concerns transitions from homepage to entertainment ; see figure [ fig : gmhome_ent ] . 
hereagain we see that the trivially implemented emulation approach is able to identify a high level of stability over time in the main effects , while indicating subtle changes to reduced levels of affinity in the latter part of the morning period .considering the downward trending patterns in raw data / relative frequencies of flows from homepage to entertainment in the latter period , it would not otherwise be easy to isolate these patterns as idiosyncratic to the node pair .inferences reflected here on the trajectory of the affinity process clearly show significantly reduced levels later on , with falling from around 6.5 to around 5.5 ; relative to the network - wide structure , high affinity is maintained throughout at a practical level , but at a reduced level later on for this node pair .some heat - maps in figure [ fig : gmimagesfeb23am ] show aspects of relationships in some estimated dgm parameters across nodes and across time .these show patterns in the values of the posterior means of over time ; this includes all main effects and the directional affinities / interactions of all network nodes for flows from domain , the homepage . simply for nice " visual display ,the nodes are ordered in terms of correlation over time with the estimated homepage outflow effect in all three images .the values shown are standardized within each image so that the min / max across time are 0/1 .note common patterns that reflect interdependencies in dynamics across subsets of network nodes .the image reflects natural evolution in the morning period of traffic from network nodes , showing the increasing rates of transitions from some of the more popular , core domains ( homepage , politics , world , entertainment and others ) in later morning .the image shows consonant patterns in a subset of these core domains in that their attractiveness increases in later morning but with some clearly different cases . for the homepage affinity processes there are quite a few domains that see increased incremental traffic rates in the first half , or so , of the morning period , which then drop off to low levels later on .@ outflow effects & inflow effects + , and ( 1=homepage ) .shading runs from 0 ( dark grey / dark blue in on - line version ) to 1 ( white / yellow in on - line version ) on these standardized scales .the nodes are ordered based on correlation over time of the main outflow effects with that of homepage ; this is an arbitrary ordering chosen simply for visual presentation ., title="fig:",width=254 ] & , and ( 1=homepage ) .shading runs from 0 ( dark grey / dark blue in on - line version ) to 1 ( white / yellow in on - line version ) on these standardized scales .the nodes are ordered based on correlation over time of the main outflow effects with that of homepage ; this is an arbitrary ordering chosen simply for visual presentation . , title="fig:",width=254 ] + + the study covers morning ( 09:0010:00am ) and afternoon ( 01:0002:00pm ) periods on each of 6 days , as already discussed and explored in section [ sec : bdfmdata ] . moving to the dgm, we now explore additional features concerning time - of - day effects as well as day - to - day variation .this is based on running the coupled bdfm - dgm analysis separately on each time period / day .figure [ fig : mdall ] shows the dgm trajectories for the retrospective baseline parameter process for each of the 12 fifty - minute intervals . trajectories are similar across days but for notable differences on february 24th and march 3rd . 
on february 24th , the afternoon flow is significantly lower than the morning flow , while the morning flow that day is much larger than across other days . one plausible reason is increased morning traffic in response to discussions following the academy awards ceremony , with a resulting lull in the afternoon traffic . the reverse happens on march 3rd where , although the morning traffic seems typical , the afternoon traffic is unusually high . this was the day on which fox news posted an article concerning hillary clinton 's use of her personal email account for all correspondence during her tenure as secretary of state . it is plausible that this led to larger than usual afternoon traffic flows as the controversy unfolded . figure [ fig : mdall ] ( caption ) : trajectories of the retrospective baseline parameter process across days with 95% intervals . dark / full lines ( red in the on - line paper ) are for morning periods , lighter / dashed lines are for afternoons . an advantage of the dgm representation is that it allows investigation of such speculative explanations . for example , examination of destination effects ( not shown ) confirms that the entertainment node was unusually popular on february 24th am , and that the politics and opinion nodes were unusually popular on march 3rd pm , compared to similar flows on other days . in routine use of dynamic models in sequential monitoring of flows , one key interest is that of being sensitive to data patterns that seem outside the norm , i.e. , anomalous , and may reflect events and changes requiring investigation and intervention . we address this here with methodology based on the concepts and theory underlying bayesian model monitoring in conditionally normal dlms . while this theory of sequential bayesian model assessment is well - established , it does not seem to have been adapted to apply to dynamic models of counts ; our contributions in this paper include this extension and required customization of the approach . revert to the generic context of a poisson - gamma model , as in section [ sec : gammabetadm ] and appendix [ app : betagammadm ] , with time count observation and underlying state . the general strategy here applies to all cases : inflows to any node when , as well as transitions from a node to when . the sequential bayesian testing approach in dlms ( * ? ? ? * chapter 11 ) is extended here to apply to the poisson - gamma dynamic model . regard the model as the " standard model " at time , relabeling the one - step predictive density at each time as ; the suffix 0 indicates the standard model , and we now explicitly recognize the dependence on the discount factors defining levels of stochastic change in the underlying state process . the two components of monitoring and adaptation are as follows . _ a. alternative model predictions : _ a purely synthetic _ alternative model _ at time requires only the specification of the alternative predictive p.d.f . , differing from the standard only in the current discount factor . this implies that is more diffuse than but similarly located . in our poisson - gamma case , the implied generalized negative binomial p.d.f.s and have precisely the same mean , but the latter has a larger variance and gives more support to both smaller and larger values of . _ b.
bayes factor comparisons : _ evaluation of bayes factors comparing the standard model predictive p.d.f.s with the alternative defines the monitoring tests . these marginal likelihood ratios are computed based on both the time observation and recent consecutive observations to assess and compare consistency of this local data with predictions from the standard model relative to the more diffuse synthetic alternative . support for the standard model is regarded as a " business as usual " signal . a signal of support for the alternative addresses the potential for : ( i ) the single observation to be discrepant , a possible outlier ; ( ii ) a relatively abrupt change in the process at time , beyond that predicted by the model with current discount rate ; and ( iii ) changes in the process at higher levels than the norm , but that are not so abrupt and may have been developing at subtler levels over a few recent time points . define the following : * the time bayes factor assessing the current observation alone . * the _ lag- local bayes factor _ based on the most recent observations , including . * the _ local cumulative bayes factor _ and corresponding _ run - length _ , such that . bayesian testing theory ( and chapter 11 of ) shows that the local test measures are trivially updated as time evolves . at time , the updated pair is
\[
\begin{cases}
[\, h_t ,\ 1 \,] , & \textrm{if } l_{t-1} \ge 1 , \\
[\, h_t\, l_{t-1} ,\ 1 + l_{t-1} \,] , & \textrm{if } l_{t-1} < 1 .
\end{cases}
\]
past consistency with the standard model means that the entire focus at time is on the single observation . if , however , recent evidence weighs against the standard model , then that evidence continues to accumulate based on the new observation . the pair is carried forward to continue monitoring in case of potential changes . * if _ or _ : apply the reduced discount factor to allow for adaptation to potential changes ; update using the time data as usual but with this increased prior uncertainty ; reset the monitor to and . * if _ and _ : proceed as usual with prior - posterior and monitor updates . * forecast ahead as desired , then proceed to time . this process is displayed in schematic form in figure [ fig : flowchart ] of appendix [ app : monitoring ] , modified from ( * ? ? ? * chapter 11 ) , which also discusses choices of thresholds . we follow the recommendations there for these choices . as discussed in sections [ sec : inflows ] and [ sec : transits ] , the discount factors in the standard models are based on where , in the generic notation here , is the shape parameter of the time posterior gamma distribution for and a baseline discount rate . we therefore select the alternative discount factor for the intervention analysis as where for some smaller baseline . the studies of fox news network data now mentioned are based on , whereas the standard models are based on values of running between 0.9 and 0.99 across the sets of inflow and transition flow models . one example from the fox news study is summarized in figure [ fig : monitorhome_world ] . while a rather extreme case in terms of one series of time points where the departure from the steady random - walk evolution of the bdfm is very apparent , this example nicely highlights the efficacy of the on - line monitoring and automated intervention strategy . the example is transition flows from homepage to world over the february 23rd am period .
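before turning to that example , a minimal python sketch ( ours , purely illustrative and not from the paper ) of how the single - observation bayes factor and the cumulative pair can be updated for one flow series . it assumes the gamma - beta poisson model of the appendix , with the discount applied to both gamma parameters in the evolution step ; all function and variable names are our own labels .

```python
import math

def log_nb_forecast(y, a, b, F):
    # log one-step forecast p.m.f. when lambda ~ gamma(shape=a, rate=b) and
    # y ~ poisson(F * lambda): a (generalised) negative binomial distribution
    p = b / (b + F)
    return (math.lgamma(a + y) - math.lgamma(a) - math.lgamma(y + 1)
            + a * math.log(p) + y * math.log(1.0 - p))

def monitor_step(y, a, b, F, delta_std, delta_alt, L_prev, r_prev):
    """One monitor update: single-observation Bayes factor h, cumulative
    Bayes factor L and run-length r (names are ours, not the paper's)."""
    log_p0 = log_nb_forecast(y, delta_std * a, delta_std * b, F)   # standard model
    log_pA = log_nb_forecast(y, delta_alt * a, delta_alt * b, F)   # diffuse alternative
    h = math.exp(log_p0 - log_pA)
    if L_prev >= 1.0:            # past data consistent with the standard model
        L, r = h, 1
    else:                        # evidence against the standard keeps accumulating
        L, r = h * L_prev, 1 + r_prev
    return h, L, r

# exceptions are signalled when h, L or the run-length cross chosen thresholds,
# triggering the reduced-discount intervention and monitor reset described above
```

the alternative model differs only through the smaller discount factor , exactly as in the construction above , so its forecast has the same mean as the standard forecast but a larger variance .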
figure [ fig : monitorhome_world ] ( caption ) : transitions = homepage = world with bayesian model monitoring and discount - based intervention . data is from the february 23rd am period . _ upper : _ symbol + indicates observations judged consistent with the standard model ; x indicates cases identified as potential outliers by low ; * indicates those flagged as potential change points via low ; o indicates cases with . the vertical arrows indicate times of automatic intervention . the full line and shaded region represent one - step forecast means and 95% intervals . _ center : _ tracks of ( above center ) and ( below center ) over time . _ lower : _ data ( + ) with one - step forecast means and 95% intervals from the standard bdfm analysis in light gray ( red in on - line version ) compared to the analysis with monitoring and intervention in black / dark gray ( gray in on - line version ) . there are several periods in which , but the evidence against the normal model is not so strong as to signal an exception and call for intervention . the period around 23 - 25 minutes saw a substantial upswing in flows that triggered interventions to adapt three times . interventions at about 32.5 and 48 minutes were triggered by a cumulated run - length suggesting gradual drift from the standard model . we also note that this picture is very similar when shown in terms of the flow frequencies and conditional transition probabilities rather than raw counts and rates . this is an example where there were ( at least ) two periods of change in transition characteristics beyond those defined by the bdfm , but that monitoring and intervention is able to adapt to on - line . in a real - life , sequential context , much more can and should be done , of course , at times of intervention . the analysis summary here simply serves to show the potential , recognizing that this is applied in parallel across all inflow and node - node transition models in a wholly automated manner . a second example in figure [ fig : monitorhome_science ] shows a somewhat more typical stable trajectory , with only two interventions that appropriately adapt to the modest and subtle level changes in the latter part of the time period . figure [ fig : monitorhome_science ] ( caption ) : transitions = homepage = science under monitoring and intervention , with details as in figure [ fig : monitorhome_world ] . the bdfm framework is adaptive to time - varying rates of flows within dynamic networks and able to coherently quantify non - stationary changes in within- and into-/out of- network flow rate processes . this extends and customizes non - stationary process models for count data for transition flows in a network . novelties include the use of relevant occupancy factors to appropriately scale poisson rates in sets of decoupled models , and the introduction of discount factor scheduling to appropriately address problems with , in particular , low flow rates . sequential analysis of the resulting bayesian dynamic flow model is fast and efficient . the decoupled analyses yield full posterior distributions for flow rate process parameters across nodes and pairs of nodes in a scalable manner . our analysis of the fox news network time series data sets shows the utility of the bdfm in generating initial inferences on flow rate processes , in highlighting differences across days and in generating potential practical " leads " .
on the latter , for example , it is immediately clear from the bdfm results that most visitors go to just one domain , rather than traversing to multiple domains . this has potential decision implications for computational advertising , and also likely highlights a difference between on - line news consumers and traditional newspaper readers . the " emulation map " from the bdfm to the dgm represents a strategy of increasing interest in various areas , especially with regard to efficient computation and scalability . we fit a flexible , adaptive model in a set of decoupled analyses , and then directly map posterior samples to the more substantively interesting parameter processes in a model that is otherwise challenging to fit . applied to the fox news flow data , we show some ( of the many ) examples of how this isolates dynamics in node - specific and interaction effects . in some such cases , this indicates " time - varying sparsity " in node - node interaction effects over time , highlighted with our use of bayesian credible values over time : some interaction effects ( affinities ) appear significant at some points in time but not in others . a number of the specific node - node inferences mentioned in the application section highlight additional results of substantive interest , some of which are initially unexpected . others include a sustained positive affinity of opinion for homepage , but a similarly sustained but negative affinity of science for homepage . additionally , comparisons across different times of the day and across days identified and quantified patterns related to anomalous flows corresponding to identifiable news events that appear to have driven traffic to specific nodes on the fox news site . computational demands for the full analysis scale as where is the monte carlo sample size . analysis is very fast , based on the core modeling and emulation strategy . a 2016 matlab implementation running on a standard laptop ( 2.3ghz cpu , 16 gb memory ) took less than 5 minutes to run our one - period context with and . one current interest is in developing this in the analysis of a more elaborate , sub - domain network of more than 1,000 nodes , currently under development . additional methodological development concerns the use of formal sequential bayesian model monitoring based on bayes factor tests , and the accompanying automated intervention analysis to allow models to adapt at times of potential change in underlying flow parameter processes beyond normal levels of variation . importantly , our use of decouple / recouple models for within - network transitions allows statistically and computationally efficient development of sequential bayesian testing based on bayes factors related to short - term prediction of each of the individual node - node flows in parallel .
at each time point , the decoupled models are monitored , and any signals of significant departure from predictions may be linked across node pairs to explore for dependencies . an unusually significant decrease in inflow to entertainment at one time , for example , may come via increased transitions from homepage alone , or represent an entertainment effect evident in flows from other domains as well . monitoring and change detection in the decoupled bdfms can lead to intervention to modify one or more of the decoupled posteriors for the , but the mapping to the dgm parameters will then quantify and highlight the potential relevance for dependencies as well as interaction / affinity effects . there are now opportunities to use and develop these models as a basis to characterize the stochastic dynamics of website flows , and hence feed into modeling and decision analysis that addresses the need to respond to changing patterns in computational advertising . an ability to rapidly signal potential anomalies in a small subset of domains in real - time will be of huge interest in this field . more immediately , some of the evident questions arising from the current study concern the overlay of the " unbiased " inferences about changes and structure in network flows with substantive covariate information . in many applications , including computational advertising but also capital and transportation flows , there are useful covariates that could inform the analysis . our perspective here has been mainly exploratory , aiming to define a formal basis for effectively characterizing non - stationary stochastic dynamics in flow data , and adapting models over time . one next step is to overlay any particular application with covariate information as descriptive / explanatory , as we exemplified with some vignettes from the fox news study . one aspect of this is to consider random effects representing otherwise unexplained extra - poisson variation that seems relevant in some cases . a more predictive level would involve extensions to incorporate covariates in the bdfms , so future research in that direction is warranted . finally , we note that the general context of time - varying traffic flow analysis arises in multiple other fields . beyond origin - destination analysis and related goals in studies of transportation networks ( e.g. * ? ? ? * ) and physical traffic ( e.g. * ? ? ? * ; * ? ? ? * ; * ? ? ? * ; * ? ? ? * ) , such data arise in neural spike train experiments , other varieties of internet traffic , and network studies in areas as diverse as biological anthropology ( e.g. , grooming interactions in primate troops ) , human social networks , flows between institutions in finance and economic networks , and others . our work here represents new methodology of bayesian dynamic modeling in an application with at least conceptual intersections with some of these areas , and may well be explored in such applications . this appendix provides additional details and discussion of the gamma - beta " steady " dynamic model for time - varying poisson rates , extending the background details underlying the core model summarized in section [ sec : gammabetadm ] .
using generic notation ,a series of non - negative counts over is modeled via conditionally independently over time , where the underlying / latent poisson rate process follows a gamma - beta stochastic model and each is a scaling constant known at time this is effectively a non - stationary , non - gaussian random walk model , so it has enormous flexibility in adapting to changes over time .the extent of anticipated stochastic change over time is defined by a discount factor parameter , potentially different at each time .we detail the model concept and structure , and the implied machinery for bayesian learning and forecasting that includes the forward filtering , backward sampling ( ffbs ) algorithm for conditionally poisson time series coupled with the gamma - beta steady process model . at time an hypothetical latent state and use as a synthetic notation for all available initial information .specify an initial gamma prior , so where , are known . for each model and forward / sequential analysis are then as follows .[ [ posterior - at - time - t-1 ] ] posterior at time : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + standing at time , the posterior for the current poisson rate given the initial information and all data observed over past times is gamma , where the defining parameters are known , evaluated from past information [ [ evolution - to - time - t ] ] evolution to time : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the poisson rate evolves to time via the gamma - beta evolution where the random shock " , or innovation , is independent of this is a multiplicative random walk model in that hence the use of the steady model " terminology . a lower value of leads to a more diffuse beta innovation distribution and the ability to adapt to changing rates over time , while a value closer to one indicates a steady , stable evolution .the random walk nature of the model allows for changes , but does not anticipate specific directional changes .the model results in a fully bayesian solution to rather simple , flexible smoothing of discrete time series in the context of variation in the underlying latent process .note that the beta innovations distribution for at time depends in the accumulated information content about the time level through the shape parameter the discount factor acts to decrease the information content between times and in a natural way .that is , information loss rates are constant over time , rather than parameters of the innovation distribution .the specific choice of beta distribution ensures that the implied time prior has a conjugate gamma form .[ [ prior - for - time - t ] ] prior for time : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the time gamma posterior of couples with the beta innovation to give the time prior for the next state as here we see the discounting effect of the random walk model : the prior for the evolved rate is more diffuse than the time posterior , reflecting increased uncertainty due to evolution .[ [ one - step - ahead - predictions ] ] one - step ahead predictions : + + + + + + + + + + + + + + + + + + + + + + + + + + + predicting the data at time the one - step ahead forecast distribution is generalized negative binomial with p.d.f . 
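to make the evolution and forecast steps concrete , here is a small python sketch ( ours , not from the paper ) of the implied one - step forecast p.m.f . , assuming the shape - rate parameterisation gamma( a , b ) for the time posterior and a poisson observation with known scaling constant F :

```python
import math

def one_step_forecast_logpmf(y, a, b, F, delta):
    # discount evolution: the time-t prior is gamma(delta * a, delta * b),
    # which keeps the prior mean a / b but inflates the variance by 1 / delta
    a0, b0 = delta * a, delta * b
    # marginalising poisson(F * lambda) over this gamma prior gives the
    # (generalised) negative binomial one-step forecast p.m.f. for the count y
    p = b0 / (b0 + F)
    return (math.lgamma(a0 + y) - math.lgamma(a0) - math.lgamma(y + 1)
            + a0 * math.log(p) + y * math.log(1.0 - p))
```

the discount enters only through the scaled prior parameters ( delta a , delta b ) , which preserve the prior mean while inflating the prior variance , matching the evolution described above .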
[ [ posterior - at - time - t ] ] posterior at time : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + observing , the resulting posterior is , which has the same form as that at time but with updated parameters and . a key ingredient of formal model assessment is the model marginal likelihood that , in this first - order markov model , is computed as the product of one - step forecast p.d.f.s evaluated at the realized data . at time , this product is . the product is most usefully written in its one - step updated form , where the contribution at time derives from the one - step ahead predictive density of evaluated at the datum ; these are trivially computed . one of the most useful roles of the marginal likelihood is in comparing models based on different ( sets of ) discount factor values . as one key special case , suppose is fixed over the time period of interest . then gives the value of the marginal likelihood at any chosen value of . in parallel analyses using a discrete set of values , the log of the marginal likelihood is linearly accumulated as data are sequentially processed . at any time , it can be mapped to a posterior and then normalized over the grid of values for inference on at any time of interest . this can be used to identify / choose a modal value for inference on the conditional on a chosen , or for model averaging . the sequentially computed contributions to the marginal likelihood ( the realized p.d.f. ordinates ) can be monitored sequentially over time to provide an on - line tracking of model performance , with potential uses in flagging anomalous data at one node or any subset of nodes , using standard bayesian model monitoring concepts ; see west and harrison ( 1986 , 1989 , and chapter 11 of 1997 ) , and ( * ? ? ? * section 4.3.8 ) . reaching the end time , we look back over time and revise the summary posterior distributions for the full trajectory of the latent gamma process based on all the observed data . this uses backward sampling based on theory in ( * ? ? ? * section 10.8 ) ; see also ( * ? ? ? * section 4.3.7 and problem 4 of section 4.6 ) . * sample the final rate from the time posterior . * recurse back over time , at each stage sampling from the implied via , with a " backward innovation " drawn from , independently of . repeating the backward sampling generates a monte carlo sample of the trajectory from the full posterior for summary inferences . figure [ fig : flowchart ] ( schematic , appendix [ app : monitoring ] ) : at each time the monitor first checks the bayes factor thresholds ; a signalled exception is treated either as a possible outlier ( the observation is rejected ) or as a possible change point ( intervention : adopt the lower discount factor before the prior - posterior update ) ; otherwise the model is judged ok , the standard prior - posterior update and forecasting proceed , and the analysis moves to the next time .
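the full recursion can be sketched in a few lines of python ; this is our own illustration , with the backward innovation taken as gamma( ( 1 - delta ) a_t , b_t ) , the standard beta - gamma decomposition consistent with the references cited above rather than a formula stated explicitly here :

```python
import numpy as np

def ffbs_sample(counts, exposures, a0, b0, delta, rng=None):
    """One posterior draw of the full rate trajectory lambda_{1:T} under the
    gamma-beta steady model (shape-rate gamma parameterisation)."""
    rng = np.random.default_rng() if rng is None else rng
    # forward filtering: store the time-t posterior gamma(a_t, b_t) parameters
    post, a, b = [], a0, b0
    for y, F in zip(counts, exposures):
        a, b = delta * a + y, delta * b + F     # discount evolution + conjugate update
        post.append((a, b))
    # backward sampling: lambda_T from its posterior, then for t = T-1, ..., 1
    # lambda_t = delta * lambda_{t+1} + eps_t with eps_t ~ gamma((1 - delta) a_t, b_t)
    T = len(post)
    lam = np.empty(T)
    aT, bT = post[-1]
    lam[-1] = rng.gamma(shape=aT, scale=1.0 / bT)
    for t in range(T - 2, -1, -1):
        at, bt = post[t]
        eps = rng.gamma(shape=(1.0 - delta) * at, scale=1.0 / bt)
        lam[t] = delta * lam[t + 1] + eps
    return lam
```

repeated calls give independent draws of the full trajectory , from which pointwise means and credible intervals such as those plotted in the figures can be computed .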
|
traffic flow count data in networks arise in many applications , such as automobile or aviation transportation , certain directed social network contexts , and internet studies . using an example of internet browser traffic flow through domains of an international news website , we present bayesian analyses of two linked classes of models which , in tandem , allow fast , scalable and interpretable bayesian inference . we first develop flexible state - space models for streaming count data , able to adaptively characterize and quantify network dynamics effectively and efficiently in real - time . we then use these efficiently implemented models as _ emulators _ of more structured , time - varying gravity models that allow closer and formal dissection of network dynamics . this yields interpretable inferences on traffic flow characteristics , and on dynamics in interactions among network nodes . bayesian model monitoring theory defines a strategy for sequential model assessment and adaptation in cases of signaled departures of network flow data from model - based predictions . exploratory and sequential monitoring analyses of evolving traffic on a defined network of web domains in e - commerce demonstrate the utility of this coupled bayesian emulation approach to analysis of streaming network count data . _ key words : _ bayesian model emulation , decouple / recouple , dynamic network flow model , dynamic gravity model , monitoring and anomaly detection 0
|
for continuously varying one - parameter families of iterated maps in , discontinuous changes in the size of an attractor are the most easily observed type of global bifurcations , including changes in the basin boundary ( metamorphasis ) .these changes can occur as the result of a change in stability of the recurrent set .a more general situation occurs when discontinuous changes in attractors occur as the result of a discontinuous change in the size of the recurrent set itself .such a global bifurcation is called an * explosion*. for the last several years , we have been studying explosions and their properties , including a classification of explosions at heteroclinic tangencies for planar diffeomorphisms , and more recently a numerical study of the statistical properties of a certain kind of explosions that occur in dimension three and higher . our research , as well as that of many others , has been guided by a 1976 conjecture of newhouse and palis .( see also the restatements in ) . for over thirty years , this conjecture has managed to elude proof .the conjecture says that for for generic planar diffeomorphisms , all explosions occur through the following two local bifurcations : saddle - node bifurcations and homoclinic bifurcations . in this paper, we prove that the conjecture is true for smooth interval maps .in addition , we are able to give a full classification of which types of tangencies give rise to explosions .these are important and new result in themselves .they are for example in contrast with a recent result of horita , muniz , and sabini , showing that in a probabilistic sense , the newhouse palis conjecture is not true for circle maps . in a broader sense, we are hopeful that the insight gained from the one - dimensional case will give rise to insights leading to the proof of the planar case of the newhouse palis conjecture .it is clear that an isolated saddle node bifurcation for either a fixed point or periodic orbit gives rise to a local explosion , since new periodic points appear .however , in many cases in both one and higher dimensions , a saddle node bifurcation also gives rise to a global bifurcation .the set of recurrent points changes discontinuously as the parameter is varied at points not contained in the saddle node periodic orbit .for example , the period three window in the chaotic attractor for the logistic map corresponds to an explosion : for parameter values less than a bifurcation value , there is a global attractor which comprises the full recurrent set .it consists of an interval . after the bifurcation parameter ,the global attractor consists only of a period three orbit , and the full recurrent set is a nowhere dense cantor set within the attractor interval .an explosion occurs at the bifurcation parameter , and it is due to a saddle - node bifurcation at a point inside the global attractor .saddle node bifurcations an invariant circles are another well - studied examples of this phenomenon . in all dimensions , explosions due toa homoclinic or heteroclinic bifurcation essentially occur due to the creation of a homoclinic tangle when stable and unstable manifolds interesect transversally . in one and two dimensions ,the transition between no intersection and transverse intersections involve tangencies between the stable and unstable manifolds of fixed or periodic points ( _ cf ._ ) .for example , the hnon map and ikeda map contain well studied examples of explosions which are a result of homoclinic bifurcations . 
in higher dimensions, such bifurcations can occur without tangencies .such a bifurcation leads to unstable dimension variability . in our previous work for planar maps, we gave a precise classification for which types of tangencies for heteroclinic cycles will result in explosions .we called this class of cycles * crossing cycles * , because the different stable and unstable manifolds involved in the cycle lie _ across _ the tangency from each other .we show here that the same results hold for interval maps .our main results are as follows : [ t : explosions ] for generic one - parameter families of smooth maps of the interval with homoclinic orbits or heteroclinic cycles ( hypotheses h1 - 6 ) , explosions occur in if and only if there is an isolated crossing orbit .[ t : general ] explosions within generic one - parameter families of smooth maps of the interval ( hypotheses h1 - 3 ) are the result of either a tangency between stable and unstable manifolds of fixed or periodic points or a saddle node bifurcation of a fixed or periodic point .the paper proceeds as follows : in section [ s : def ] , we give basic definitions of explosions and homoclinic tangencies .we are considering the particular recurrence class of chain recurrent points , which are defined in this section as well . in section [s : homo ] , we prove the explosions at tangencies theorem . in section [ s :mainproof ] , we prove the general explosion classification theorem .our results rely on a very sophisticated and well - developed theory for the dynamics of interval maps .we have briefly stated the necessary results in the course of the proof .we now give some formal definitions of concepts described in the introduction .let be a smooth one - parameter family of maps .we exchangeably write two notations : . for the definition of an explosion it is more natural to use the concept of chain recurrence rather than recurrence .the relationship between chain recurrence and other types of recurrence is discussed in . for an iterated function , there is an * -chain * from to when there is a finite sequence such that , , and for all .if there is an -chain from to itself for every ( where ) , then is said to be * chain recurrent * .the * chain recurrent set * is the set of all chain recurrent points . for a one - parameter family , we say is chain recurrent if is chain recurrent for .if for every , there is an -chain from to and an -chain from to , then and are said to be in the same * chain component * of the chain recurrent set .the chain recurrent set and the chain components are invariant under forward iteration .the following definition is for an explosion bifurcation in the chain recurrent set .such a definition can be formulated for the non - wandering set as well . a * chain explosion point * is a point such that is chain recurrent for , but there is a neighborhood of such that on one side of ( i.e. either for all or for all ) , no point in is chain recurrent for .( all explosion points in this paper are chain explosions , so we sometimes drop the qualifier chain . ) in order the show that is _ not _ chain recurrent , it is sufficient to show that is in the closure of the hyperbolic periodic orbits for . 
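since chain recurrence is defined through finite eps - chains , it can be explored numerically as a reachability question on a grid ; the following python sketch ( ours , purely illustrative and playing no role in the proofs ) fixes a single eps and so yields only an outer approximation of the chain recurrent set of an interval map :

```python
import numpy as np

def approx_chain_recurrent(f, eps=0.01, n=400, a=0.0, b=1.0):
    """Grid outer-approximation of the chain recurrent set of an interval map f
    on [a, b]: keep x_i if there is an eps-chain from x_i back to itself."""
    xs = np.linspace(a, b, n)
    fx = np.clip(f(xs), a, b)
    # adjacency: from x_i an eps-jump taken at f(x_i) can land on the grid point x_j
    adj = np.abs(fx[:, None] - xs[None, :]) < eps
    out = []
    for i in range(n):
        reached = adj[i].copy()           # grid points reachable in one step
        frontier = reached.copy()
        while frontier.any() and not reached[i]:
            new = adj[frontier].any(axis=0) & ~reached
            reached |= new
            frontier = new
        if reached[i]:
            out.append(xs[i])
    return np.array(out)

# e.g. the logistic map just inside the period three window mentioned above:
# approx_chain_recurrent(lambda x: 3.83 * x * (1.0 - x))
```

decreasing eps shrinks this approximation toward the chain recurrent set , up to the error introduced by the grid .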
in the above definition , at , is not necessarily an isolated point of the chain recurrent set .for example , at a saddle node bifurcation on an invariant circle , the chain recurrent set consists of two fixed points prior to bifurcation and the whole circle at and in many cases after bifurcation .the chain recurrent set is not invariant under backwards iteration of a noninvertible map .thus explosion points are not preserved under iteration , forward or backward .the following lemma states what is guaranteed by the fact that chain recurrence is preserved under forward iteration .[ l : preimage ] let be a chain explosion point for .specifically , there exists such that there is no chain recurrent point in for all , but is chain recurrent at .then is chain recurrent at , but may also be chain recurrent for .in contrast , if is a preimage of , then there is a such that no point in is chain recurrent for all .note that may not be chain recurrent at .we now give definitions of homoclinic and heteroclinic points .note that for a diffeomorphism , homoclinic and heteroclinic orbits require the existence of saddle points with stable and unstable manifolds of dimension at least one .thus they can only occur in dimension two or greater . however , for noninvertible maps , it is possible to have fixed or periodic points with one - dimensional unstable manifolds and a zero - dimensional stable manifolds .marotto terms such points snap - back repellers .it is not possible to reverse these stable and unstable manifold dimensions ; the existence of a homoclinic orbit to an attracting fixed point requires a multivalued map .the following definition of homoclinic points for interval maps is depicted in figure [ f : fig1 ] .where .the point is a homoclinic point , since ( for ) , and there exists a sequence of successive preimages of converging to .,scaledwidth=30.0% ] let be a smooth function with a repelling fixed point .let be a point in the unstable manifold of and an integer such that .then is a homoclinic point to .if is periodic with least period , then the same definition applies by replacing with .if is a hyperbolic fixed point , and the forward limit set of a point in the unstable manifold of includes , then there are two possibilities : ( 1 ) a finite iterate of is equal to , ( 2 ) the limit set of contains points other than . in this second case ,it is not possible for the limit set of to consist of alone .therefore is not referred to as a homoclinic point , since its limit set is larger than just .this case is considered in later sections of this paper . for diffeomorphisms , all orbits through homoclinic pointsare homoclinic orbits . for one - dimensional maps, there may be many non - homoclinic orbits through a homoclinic point .let and be as in the above definition of a homoclinic point .an orbit is a homoclinic orbit through if the following conditions are satisfied : , for some , and for all , , and .since the stable manifold of a homoclinic point is zero - dimensional , a homoclinic tangency is a tangency of the graph of the map at a homoclinic point .homoclinic tangencies are depicted in figures [ f : fig2 ] and [ f : fig3 ] . with .the point is a homoclinic tangency point . for the particular tangency depicted here , is contained in a _ non - crossing orbit _( definition [ d : crossing ] ) , and thus by theorem [ t : ncb ] is not an explosion point.,scaledwidth=80.0% ] , the tangency point does not need to be the immediate preimage of the fixed point . 
here, is the second preimage of .,scaledwidth=40.0% ] let , and be as in the above definition of a homoclinic orbit .the point is a homoclinic tangency point if the graph of is tangent to the horizontal line at .this section classifies explosions occuring via homoclinic tangencies .the results are stated for fixed points , but the same results hold for homoclinic orbits for periodic points with least period if is replaced by . throughout this section, we make the following hypotheses .the first hypothesis is a smoothness assumption .the second and third hypotheses are generic assumptions for one - parameter families .the fourth is a notational convention for the existence of a homoclinic orbit .the fifth hypothesis is generic for one - parameter families containing a homoclinic orbit . 1 . is a smooth family of interval maps . for a fixed parameter ,denote .2 . assume h1 .assume that there are no intervals on which is constant .3 . assume h1 .assume that for , there is at most one of the following : 1 . a non - hyperbolic fixed point or periodic orbit .we assume generic behavior as a parameter is varied .that is , non - hyperbolic point is a codimension one period doubling or saddle - node bifurcation .2 . one critical point which comprises a tangency between stable and unstable manifolds of fixed points or periodic orbits .we assume generic behaviour as a parameter is varied .that is , with variation of parameter , the critical point moves from above to below ( or below to above ) the placement of the periodic point .4 . assume h1 , and that for each , is a repelling fixed point for .denote .assume that for , is homoclinic to .assume h1 and h4 .at , the homoclinic orbit containing contains only one critical point .the following theorem says that if at a repelling periodic point a map has negative derivative then no homoclinic points are explosion points .[ t : negative ] assume that is a family of maps satisfying h1 - 5 , and , then is not an explosion point .we show that is in contained in the closure of the hyperbolic periodic points of .if is contained in a homoclinic orbit without a tangency , then the homoclinic orbit is preserved under perturbation , which automatically implies that is not an explosion point .thus we assume that is contained in a homoclinic orbit containing a homoclinic tangency point .as before , denote this orbit by , where , , and and are such that and .fix a neighborhood of .by h5 , there exists a sequence of neighborhoods , such that : _ ( i ) _ , _ ( ii ) _ for , and the map is injective , and _ ( iii ) _ for , .let .by h2 , is an interval on one side of . by assumption , so if the are sufficiently small , is an interval on the other side of . that is , is a neighborhood of . for any sufficiently large , , where .in addition , , where or .thus for , contains a periodic point .thus there is a periodic point in .since was arbitrary , there are periodic points with period such that and .by h3 , for sufficiently large , the periodic points are hyperbolic .therefore is not an explosion point .now consider the positive derivative case .let be a homoclinic tangency point contained in homoclinic orbit . 
since the eigenvalue of is positive, there is a sufficiently large such that lies entirely on one side of .that is , the homoclinic orbit converges to the fixed point along one branch of the unstable manifold .we denote this by saying that the homoclinic orbit is contained in the local right or left branch of , as formalized in the following definition .let be a repelling fixed point for , with .locally , the left and right branches of the unstable manifold of are disjoint .define and to be the respective unions of images of local left and right manifold branches .the union of and is the entire unstable manifold of . by the intermediate value theorem and intervals .if is not empty , then the intersection must contain either or .for example , may contain points both to the left and to the right of .see figure [ f : fig4 ] and [ f : fig5 ] .is contained in , and is thus not an explosion point . however , the preimages and of are explosion points.,scaledwidth=40.0% ] from the proof of theorem [ t : negative ] , it is clear that to study chain explosions in homoclinic orbits , it is sufficient to consider homoclinic orbits containing tangency points .we formalize the notation in the following hypothesis : 1 . for a family satisfying h1 and h4 , at ,point is a homoclinic tangency point to contained in ( at least one ) homoclinic orbit .let be such that .[ t : branch ] assume h1 - 6 .assume that and that .if for any neighborhood , the sets and contain points in common , then is not an explosion point .the details of this proof are similar to the negative derivative case .preimages of any small neighborhood of are contracting and in the local righthand manifold branch .thus the image of includes a shrunk preimage of .therefore contains a periodic point , which by h3 and h5 is hyperbolic .since is arbitrary , is not an explosion point .the theorem above is also true when is replaced by .[ t : intersection ] assume h1 - 6 , and that is contained in .then is not an explosion point .the image of a neighborhood of under contains either points of or . but is contained in both and , so by theorem [ t : branch ] , is not an explosion point . as mentioned in the introduction , in our previous work we gave a useful geometric method of approaching chain explosions in homoclinic orbits in two and three dimensions , termed crossing cycles . in two dimensions, we showed that a crossing cycle is necessary and sufficient for a chain explosion to occur .the analoguous statements are true in one dimension , as in the theorem below . crossing andnon - crossing orbits are shown in figures [ f : fig5 ] and [ f : fig2 ] respectively .is a homoclinic tangency point contained in a crossing orbit .thus lies in , but not . by theorem [ t :crossing ] , is an explosion point .as depicted here , the preimage of is also an explosion point.,scaledwidth=40.0% ] [ d : crossing ] assume h1 - 6 .note that for , is a point of tangency .if for sufficiently large , _ , and the graph of is locally _ above _ the horizontal line at , or _( ii ) _ , andthe graph of is locally _ below _ the horizontal line at , then we call the homoclinic orbit a * crossing orbit*. a homoclinic orbit that is not crossing is called a * non - crossing orbit*. [ t : ncb ] if h1 - 6 hold and is an explosion point , then is a crossing orbit .if the graph of is locally above the horizontal line at , then of any neighborhood of is contained in . 
likewise ,if the graph is locally below the line , of a neighborhood of is contained in .the result now follows from theorem [ t : branch ] .the results stated so far give necessary conditions for an explosion point .we now give a converse to these .we first make a generic hypothesis . by theorem [ t : intersection ] , to classify explosion points , it suffices to only consider the case of a tangency point contained only in one manifold branch of , which we state in the following hypothesis . 1 .assume h1 - 6 .assume that at the tangency point is contained exclusively in one manifold branch of .assume h1 - 7 .then at , either tangency point , and contains no tangencies to fixed points or periodic orbits , or tangency point , and contains no tangencies to fixed points or periodic orbits .this follows from the fact that is only contained in one of and , so by h3 there is no tangency in the other manifold branch .the following theorem gives sufficient conditions for an explosion .[ t : crossing ] assume h1 - 7 .if every homoclinic orbit containing is a crossing orbit , then is an explosion point . for specificity ,let .since is only contained in crossing orbits , is not contained in , and the image of any small neighborhood of is contained only in .this implies that , but that the subset is strict . as a result ,prior to tangency , is contained in but not .thus the right endpoint of is .the left endpoint of can not map to , since that would imply a homoclinic tangency in .in fact , no points in map to , since there are assumed to be no tangencies and no intersections with . if maps into , then there is a neighborhood of which also maps into . if is a fixed point , then by h3 and h7 , it is hyperbolic .it has no critical points to mapping to it from the interior of .thus is an attracting fixed point .prior to tangency , there is an such that no -chains carry points from to , and thus from to itself .thus is an explosion point .we are interested not only in explosion points which are themselves tangency points , but also in explosion points far from tangencies which are caused by tangencies .theorem [ t : negative ] contained such a result , but the other results were specifically about the tangency points .it is now straightforward to combine the previous results with lemma [ l : preimage ] to get results for general homoclinic points .since the chain recurrent set is invariant under forwards iteration , the image of non - crossing tangency point is a non - explosion point .however , there may be explosion points with iterates that are non - explosion points .for example , figure [ f : fig4 ] shows points of explosion which are not tangency points but are preimages of tangency points .assume h1 - 6 .if for all , is an explosion point , then the orbit is a crossing orbit .[ t : noncrossing ] if the orbit is a non - crossing orbit , then there are hyperbolic periodic orbits limiting on every point in the orbit .[ t : preimageinv ] if h1 - 6 hold , and is an explosion point , then any pre - image in the homoclinic orbit of is such that is an explosion point . if is an explosion point , then by lemma [ l : preimage ] , is not chain recurrent for . 
at , contained in a homoclinic orbit , and thus chain recurrent .this theorem only mentions preimages of which are contained in a homoclinic orbit at .there may be other preimages of which are not contained in any homoclinic orbit , and are thus not chain recurrent at .assume h1 - 6 .assume that for , is a crossing orbit , but is also contained in some other homoclinic orbit which is a non - crossing orbit .thus is not an explosion point .then the following statements hold : 1 . for all , is not an explosion point .2 . assume h6 and h7 .if there exists such that is contained neither in nor in any other non - crossing orbit , then for all , is an explosion point .the first statement holds since at , any image of is contained in , which is a non - crossing orbit .thus by theorem [ t : noncrossing ] , this image is not an explosion point .let be as in part _ ii _ of the theorem . for specificity ,assume that .since is contained in a crossing orbit , it is not contained in . a neighborhood of maps into under .if the tangency point is in , the rest of this proof is exactly the same as the proof of theorem [ t : crossing ] , and we conclude that is an explosion point .even if , if the forward orbit of is in the interior of , the proof follows as before .furthermore the forward orbit of can not contain , the left endpoint of : namely , since is the endpoint of an invariant interval , this either implies a double homoclinic tangency in the forward orbit of , which is ruled out by h3 ; or it implies both a tangency at and a transverse homoclinic intersection at in the orbit of , which would mean that is contained in a non - crossing orbit .finally , using the same proof as given for theorem [ t : preimageinv ] , for all , is an explosion point .this completes the classification of when homoclinic points are explosion points . unlike the planar case , in one dimension the heteroclinic case reduces to the homoclinic casethat is , if there is a transverse heteroclinic cycle including an orbit from to , which are hyperbolic periodic orbits , then both periodic orbits must be repellers .further , the unstable manifold of contains the unstable manifold of .since we assume only one tangency , a heteroclinic tangency point is also a homoclinic tangency point .there are many previous results on the structure of -limit sets and chain recurrent sets for interval maps .block and coppel showed that the chain recurrent set for maps of the interval can be classified as the set of points , where is the intersection of all asymptotically stable sets containing the limit set .they showed that is either an asymptotically stable periodic orbit , a set of asymptotically stable iterated intervals , or special type of set known as a solenoidal set .however , this classification is not as useful as it would appear , since is not the set of points in the chain component containing . block also proved that an interval map has a homoclinic point if and only if it has a periodic point with period not a power of two . 
block and hart improved on this result to show the existence of a homoclinic point to a given power of two implies a cascade of homoclinic bifurcations .further , if a family of maps changes from zero to positive entropy , then there is a cascade of homoclinic bifurcations .of most relevance to the topic of this paper are the works of sarkovskii , ma , blokh , and blokh , bruckner , humke , and smital , where the detailed structure of all possible -limit sets is studied .the -limit set can be a cantor set known as a basic set ( definition [ d : basic ] ) .another interesting case occurs when the -limit set of a point is a limit of a period doubling cascade , known as a solenoid ( definition [ d : solenoid ] ) .see for a detailed characterization of solenoids .blokh showed that for -smooth maps , -limit sets are either periodic orbits , periodic transitive intervals , subsets of basic sets , or solenoids .we use this result to systematically show that for all possible explosions , there are saddle - node or tangency bifurcations .the following definition describes points at which jumps in an -chain are required .we call these points barricades , as they serve to obstruct orbits . for example , at the bifurcation parameter , a saddle node point blocks the points on one side from reaching the other side .[ d : barricade ] assume h1 and h2 .let be any point .let .a point is called a * barricade * for if it is blocking the orbit of .that is , let .then contains points not contained in .we consider a point such that the -limit set is a fixed point or periodic orbit .[ t : toper ] assume h1 - 3 .assume that for , is a point such that is equal to a fixed point or periodic orbit , and is a barricade .then is either non - hyperbolic or is the image of a critical point .assume that is a fixed point , since otherwise we can let . since is a barricade , , and ( using the notation from definition [ d : barricade ] ) .this implies that can not be an attracting fixed point , since for an attracting fixed point . thus if is hyperbolic , it must be repelling , meaning that is a preimage of .define by . since is a barricade , for small , is an interval on one side of , implying that is in the orbit of a critical point .the previous theorem only indicates that a critical point exists .is still remains to be shown that the critical point is actually a homoclinic or heteroclinic point .we use the fact that all -limit sets for interval maps have been classified .the following theorem is useful in the proof of several subsequent theorems .[ t : useful ] assume h1 - 3 , and that is an invariant interval under , such that for all , there is an -chain from a point to a point .then there is an -chain from an endpoint of to , where is fixed or period two .furthermore , if is not an endpoint of , then either is non - hyperbolic ; or is repelling , and there is an orbit of a critical point in mapping onto .since , the only way for an -chain to exit is through an -jump across one of the endpoints .call the endpoint .thus there is a chain from to .assume does not map to itself or to the other endpoint of .then is contained in the interior of .but this means that no small -jump at exits , which is a contradiction .thus can be chosen to be either a fixed point or a period two point .the orbit of can not be attracting , since then there would not be -chains from to any other point . 
if the orbit if is hyperbolic , then it is repelling , and there is a chain from to .if the orbit of includes , and is not an endpoint of , then the orbit of contains a critical point , since is invariant . if the orbit of does not include , then the limit set of contains more than just a repelling periodic orbit .the following result shows what happens if the backward limit set of a point contains a periodic orbit .[ t : fromper ] assume h1 - 3 .let be a fixed point or periodic orbit for which is hyperbolic .let be a point such that for all there is an -chain from to , but is not in the unstable manifold of .then there is a homoclinic tangency to a periodic orbit .assume without loss of generality that is a fixed point ( since otherwise , we can use the same proof for an iterate of ) .since is hyperbolic , it is repelling .the unstable manifold of is an invariant interval , denoted .since , there is a barricade point for . by theorem [ t : useful ] , there must be a barricade point which is a periodic endpoint of , with a critical point in mapping to the endpoint .that is , there is a homoclinic tangency point for the periodic endpoint . combining theorems [ t : toper ] and [ t : fromper ] ,we conclude that if an explosion occurs at a point such that is equal to a periodic orbit , then there is either a saddle - node bifurcation point or a tangency between stable and unstable manifolds .we now consider more general -limit sets .first consider the case where the -limit set of a point is an interval .assume h1 - 3 , for , is such that is an interval , and that there exists such that is a barricade for . then is an endpoint of the interval , is fixed or period two , and if is hyperbolic , then there is a homoclinic tangency to .since , is transitive on .therefore the unstable manifold of any repelling periodic orbit in contains all of .a barricade must not be in the interior of .thus is an endpoint of .the result now follows from theorem [ t : useful ] .we now consider the case of an -limit set that is contained in an invariant interval but is nowhere dense .[ d : basic ] assume h1 .let be an -periodic cycle of intervals for a function .define for every open interval of in .if is infinite , then it is called a basic set .assume h1 , h2 , and h3a .assume that is an explosion point , and is nowhere dense and is contained in a basic set .then is a periodic orbit .assume that is an explosion point , and that .if is contained in an interval complementary to , then by blokh , is a periodic orbit .assume is non - periodic , meaning is contained in the basic set . by , is contained in the closure of the periodic orbits for .using h3a , there is a sequence of hyperbolic periodic points converging to . thus is not an explosion point . 
by the above theorem combined with theorems [ t : toper ] and [ t : fromper ] ,if an explosion point has an -limit set which is a basic set , then there is either a saddle - node point or a tangency .the last possibility for an -limit set is a solenoid , as in the following definition .[ d : solenoid ] assume h1 .let be a nested sequence of cycles of intervals for a function with least period , .thus , is increasing , and for each , .if the set is nowhere dense , then is called a solenoid or feigenbaum - like set .jimnez lpez has shown that solenoids are the boundary of chaos and order .blokh demonstrated that solenoids and basic sets are disjoint .we prove the following result .assume h1 , h2 , and h3a .assume that is an explosion point , and is a solenoid .then there is an infinite sequence periodic orbits which are barricades with associated tangencies .since solenoids and basic sets are closed , invariant , and disjoint , there is a neighborhood of solenoid containing no basic sets . by definition, there is an infinite nested sequence of invariant cycles of intervals containing .blokh proved that the periodic orbits are dense in a neighborhood of , meaning that is not contained in .thus for all sufficiently large , for all there is an -chain from points in to , but is not contained in . by theorem [ t : useful ] , there exists , an endpoint of an interval the cycle of which is periodic . by hypothesis h3a , for sufficiently large , is hyperbolic . since for all there is an -chain from to , is a repeller for large .there is an orbit of a critical point in mapping onto .furthermore , is not contained in the unstable manifold of , since there is a nested sequence of invariant not containing .theorem [ t : fromper ] implies that the critical point to is a point of homoclinic tangency .if has a finite number of homoclinic and heteroclinic tangencies , as assumed in h3b , then the above theorem shows that there are no forward chains from solenoid to a point outside of .[ r : sn ] we have shown that there is either a tangency or a non - hyperbolic critical point contained in the same chain component as . under a generic hypothesis ( h3a ) ,a non - hyperbolic periodic orbit is either codimension - one saddle - node or period doubling bifurcation .in fact , such an orbit is not a period doubling point , since the periodic orbit at a period - doubling bifurcation point is attracting .we now combine the results of this section to give a proof of the general explosion classification theorem .( of the general explosion classification theorem ) assume that is an explosion point .the only possibilities for are a periodic orbit , a cycle of intervals , a nowhere dense basic set , and a solenoid .above , we have shown that in any of these cases , there is a periodic barricade point for which is either non - hyperbolic , or there is a homoclinic or heteroclinic tangency .in fact , the case of a solenoid is ruled out by h3b . by remark [ r : sn ], a non - hyperbolic periodic orbit must necessarily be a saddle - node point .this completes the proof the theorem .e.s . would like to thank alexander blokh and victor jimnez lpez for their helpful suggestions .was partially supported by nsf grant dms0616585 . *ams subject classification : 37 . *+ e. sander department of mathematical sciences , george mason university , 4400 university dr . ,fairfax , va , 22030 , usa . + e - mail : ` sander.gmu.edu ` + j.a .yorke , ipst , university of maryland , college park , md 20742 , usa . + e - mail : ` yorke.umd.edu ` +
|
a discontinuous change in the size of an attractor is the most easily observed type of global bifurcation . more generally , an _ explosion _ is a discontinuous change in the set of recurrent points . an explosion often results from heteroclinic and homoclinic tangency bifurcations . newhouse and palis conjectured in 1976 that planar explosions are generically the result of either tangency or saddle node bifurcations . in this paper , we prove this conjecture for one - dimensional maps . furthermore , we give a full classification for all possible tangency bifurcations and whether they lead to explosions .
|
given two isometric poses of the same non - rigid object as triangular meshes and with known point - to - point correspondences , we aim to find a smooth isometric deformation between the poses .interpolating smoothly between two given poses is called _morphing_. we achieve this by finding shortest paths in an appropriate shape space similar to the approach by kilian et al .we propose a novel shape space .a deformation of a shape represented by a triangular mesh is isometric if and only if all edge lengths are preserved during the deformation .this property holds because each face of the mesh is a triangle .a deformation of a shape is called _ most isometric _ if the sum of the squared differences between the corresponding edge lengths of the two shapes is minimized . in this paper, we examine isometric deformations of general _ triangular meshes _ in and of _ triangulated polygons _ , which are triangular meshes with no interior vertices .we introduce a new shape space for triangulated polygons that has the property that interpolating linearly in shape space corresponds to the most isometric morph in .we then extend this shape space to arbitrary triangulations in using a heuristic approach .furthermore , we discuss a modification of the shape space that is useful for isometric skeleton morphing .computing a smooth morph from one pose of a shape in two or three dimensions to another pose of the same shape has numerous applications .for example in computer graphics and computer animation this problem has received considerable attention . a recent survey on this topicwas written by alexa .we only review the work most relevant to this paper . before considering morphing three - dimensional mesh models ,the two - dimensional version of the problem has received considerable attention . in the case where the input is sampled over a regular domain , this problemis called _ image morphing_. image - morphing is widely studied and surveyed by wolberg . in the case where the input is sampled over an irregular domain ,the problem is to interpolate between two simple polygons in the plane .we only review work that makes use of intrinsic representations of the polygons .sederberg et al . propose to interpolate an intrinsic representation of two - dimensional polygons , namely the edge lengths and interior angles of the polygon .surazhsky and gotsman morph by computing mean value barycentric coordinates based on an intrinsic representation of triangulated polygons .this method is guaranteed to be intersection free .iben et al . morph planar polygons while guaranteeing that no self - intersections occur using an approach based on energy minimization .this approach can be constrained to be as isometric as possible .sun et al . morph between three - dimensional manifold meshes .they extend the approach by sederberg et al . to three dimensions by extending the intrinsic representation to polyhedra . however , the developed methods are computationally expensive . 
we propose a different intrinsic representation of triangular meshes that offers the advantage of producing the most isometric morph and of being efficient . sorkine and alexa propose an algorithm to deform a surface based on a given triangular surface and updated positions of a few feature points . the surface is modeled as a covering by overlapping cells . the deformation aims to deform each cell as rigidly as possible . the overlap is necessary to avoid stretching along cell boundaries . the deformation is based on minimizing a global non - linear energy function that is simple to implement . the energy is guaranteed to converge . however , since the energy function may have multiple minima , the algorithm is not guaranteed to find the global minimum . the approach tends to preserve the edge lengths of the triangular mesh . this property depends upon finding a global minimum of the energy function . one cannot guarantee finding this global minimum . recently , zhou et al . proposed a new method to deform triangular meshes based on the laplacian of a graph representing the volume of the triangular mesh . the method is shown to prevent volumetric details from changing unnaturally . recently , kilian et al . used shape space representations to guide morphs and other more general deformations between shapes represented as triangular meshes . each shape is represented by a point in a high - dimensional shape space and deformations are modeled as geodesics in shape space . the geodesic paths in shape space are found using an energy - minimization approach . before kilian et al . presented the use of a shape space for shape deformation and exploration of triangular meshes , shape space representations were developed to deform shapes in different representations . cheng et al . proposed an approach that deforms shapes given in skin representation , which is a union of spheres that are connected via blending patches of hyperboloids , with the help of a suitable shape space . furthermore , algorithms for deforming curves with the help of shape space representations were proposed by younes and klassen et al . eckstein et al . propose a generalized gradient descent method similar to the approach by kilian et al . that can be applied to deform triangular meshes . all of these approaches depend on solving a highly non - linear optimization problem with many unknown variables using numerical solvers . it is therefore not guaranteed that the globally optimal solution is found . in this paper , we propose a novel shape space with the property that interpolating linearly in shape space approximates the most isometric morph in . for triangulated polygons , we prove that the linear interpolation in shape space corresponds exactly to the most isometric morph in . for arbitrary triangulated manifolds in , we provide a heuristic approach to find the morph . this heuristic is an extension of the approach developed for polygons . the proposed methods do not require solving a minimization problem . this section introduces a novel shape space for triangulated polygons with the property that interpolating linearly in shape space corresponds to the most isometric morph in . the dimensionality of the shape space is linear in the number of vertices of the deformed polygon . we start with two triangulated polygons and corresponding to two near - isometric poses of the same non - rigid object . we assume that the point - to - point correspondence of the vertices and is known . furthermore , we assume that both and share the same underlying mesh structure .
hence , we know the mesh structure with two sets of ordered vertex coordinates and in , where is an outer - planar graph .we will show that we can represent and as points and in a shape space , such that each point that is a linear interpolation between and corresponds to a triangular mesh isometric to and in . as we know the point - to - point correspondence of the vertices and , we can find the best rigid alignment of the two shapes by solving an overdetermined linear system of equations and by modifying the solution to ensure a valid rotation matrix using an approach similar to the one used for camera calibration .let consist of vertices .as is a triangulation of a polygon with vertices , has edges and triangles .we assign an arbitrary but fixed order on the vertices , edges , and faces of .the shape space is defined as follows .the first coordinates of a point correspond to the coordinates of the first vertex in .coordinates and of correspond to the direction of the first edge of incident to in spherical coordinates .the next coordinates of are the lengths of the edges in in order .the final coordinates of describe the outer normal directions of the triangles in in spherical coordinates , in order .hence , the shape space has dimension . in the following ,we prove that interpolating linearly between and in shape space yields the most isometric morph .to interpolate linearly in shape space , we interpolate the edge lengths by a simple linear interpolation .that is , , where is the coordinate of .the normal vectors are interpolated using geometric spherical linear interpolation ( slerp ) .that is , , where is the angle between the two directions that are interpolated .note that the relative rigid alignment of and in has an influence on the linear interpolation .that is , the interpolating shape space point varies as the relative rigid alignment of and in changes .the change occurs because the angles between the normal vectors of change as a result of the rigid transformation .this is the reason we choose to find the best rigid alignment of and before transforming the polygons into .to study interpolation in shape space , we make use of the _ dual graph _ of .the dual graph has a node for each triangle of .we denote the dual node corresponding to face of by .two nodes of are joined by an arc if the two corresponding triangles in share an edge .we denote the dual arc corresponding to an edge of by . note that because meshes a polygon , it is an outer - planar triangular graph andso the dual graph of is a binary tree .an example of a mesh with its dual graph is shown in figure [ dual_graph ] . with its dual graph .__,width=188 ] let be the underlying mesh structure of the triangulated polygons and .the linear interpolation between and in shape space for has the following properties : 1 .the mesh that corresponds to is uniquely defined and has the underlying mesh structure .we can compute this mesh using a traversal of the binary tree in time .2 . if and are isometric , then is isometric to and .if and are not perfectly isometric , then each edge length of linearly interpolates between the corresponding edge lengths of and .3 . 
the coordinates of the vertices of are a continuous function of .[ theorem_polygon ] * part 1 : * to prove uniqueness , we start by noting that the first vertex of is uniquely determined by the first three coordinates of .the direction of the first edge of incident to is uniquely determined by coordinates and of , because each point on the unit sphere determines a unique direction in .the length of each edge of is uniquely determined by the following coordinates .furthermore , the outer normal of each triangle is uniquely determined by the following coordinates , because each point on the unit sphere determines a unique direction in .hence , the edge is uniquely determined . for a triangle containing , we now know the position of two vertices of , the plane containing , and the three lengths of the edges of . assuming that the normal vectors in shape space represent right - hand rule counterclockwise traversals of each triangle , this uniquely determines the position of the last vertex of . we can now determine the coordinate of each vertex of uniquely by traversing .we start the traversal of at .recall that the coordinates of the vertices of triangle are known .hence , when traversing an arc incident to , we know the vertex coordinates of the shared edge between the two triangles corresponding to endpoints of . denote the endpoint of not corresponding to by . for , we now know the position of two vertices of , the plane containing , and the three lengths of the edges of .hence , we can compute the position of the last vertex of . because we now know the coordinates of all the vertices of , we can traverse all of the arcs in incident to . in this fashion, we can set all of the vertex coordinates of by traversing . because is a tree , it is cycle - free . hence , the coordinates of each vertex of are set exactly once . because the complexity of is , the algorithm terminates after steps .it remains to prove that is a valid triangular mesh , that is , that the three edge lengths of each triangle of satisfy the triangle inequality .we assume that both input meshes were valid triangular meshes .hence , for any triangle with edge lengths in and in , the following inequalities hold : and in , due to the linear interpolation of the end positions . hence , similarly, we can show that and .hence , is a valid triangular mesh . *part 2 : * the edge lengths of are linear interpolations between the edge lengths of and . hence , the claim follows .* part 3 : * when varying continuously , the point varies continuously .hence , the coordinate of the lengths of all the edges vary continuously . because a direction ^t $ ] varies continuously if and vary continuously , the normal directions vary continuously . because all the vertex positions of the mesh are uniquely determined by continuous functions of those quantities ,all vertex positions of vary continuously .note that we do not need to solve minimization problems to find the shortest path in shape space as in kilian et al . . 
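as an illustrative aside , the following is a minimal sketch ( not code from the original paper ) of the linear interpolation in shape space described above : edge lengths are interpolated linearly and per - triangle unit normals via slerp . the anchor vertex and the direction of the first edge would be interpolated analogously , and an intermediate mesh is then reconstructed from these quantities by the dual - tree traversal used in the proof . all function and variable names are illustrative assumptions , not notation from the paper .

```python
import numpy as np

def slerp(n0, n1, t):
    # spherical linear interpolation between two unit vectors
    n0 = n0 / np.linalg.norm(n0)
    n1 = n1 / np.linalg.norm(n1)
    dot = np.clip(np.dot(n0, n1), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < 1e-12:              # (nearly) parallel: plain interpolation suffices
        v = (1.0 - t) * n0 + t * n1
        return v / np.linalg.norm(v)
    return (np.sin((1.0 - t) * theta) * n0 + np.sin(t * theta) * n1) / np.sin(theta)

def interpolate_shape_point(edge_lengths_p, edge_lengths_q, normals_p, normals_q, t):
    # edge lengths: linear interpolation ; triangle normals: slerp
    lengths = [(1.0 - t) * lp + t * lq
               for lp, lq in zip(edge_lengths_p, edge_lengths_q)]
    normals = [slerp(np.asarray(a, float), np.asarray(b, float), t)
               for a, b in zip(normals_p, normals_q)]
    return lengths, normals
```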
the only computation required to find an intermediate deformation pose is a graph traversal of . because has complexity , we can traverse in time . hence , we can compute intermediate deformation poses in time each . we refer to this as the _ polygon algorithm _ in the following . a very similar shape space to the one presented in section [ polygon ] can be used to isometrically morph between two topologically equivalent skeletons . let a _ skeleton _ in be a set of joints connected by links arranged in a tree - structure . that is , we can consider a skeleton to be a tree in consisting of vertices and edges . the shape space presented in section [ polygon ] can be simplified to a shape space for skeletons in with the property that interpolating linearly in shape space corresponds to the most isometric morph in . the dimensionality of the shape space is linear in the number of links of the skeleton . we start with two skeletons and corresponding to two near - isometric poses . we assume that the point - to - point correspondence of and is known . hence , we know the tree structure with two sets of ordered vertex coordinates and in . as before , we first find the best rigid alignment of the two skeletons . the skeleton shape space is defined in a similar way as . we assign an arbitrary but fixed root to and traverse the edges of in a depth - first order . we assign an arbitrary order to the edges incident on each vertex of . the first coordinates of correspond to the coordinates of the root in . the next coordinates of are the lengths of the edges in in depth - first order . the final coordinates of describe the unit directions of the edges in spherical coordinates in depth - first order . all edges are oriented such that they point away from the root . note that the shape space has dimension . interpolating linearly between points in is performed the same way as interpolating linearly between points in . namely , edge lengths are interpolated linearly and unit directions are interpolated via slerp . with a technique similar to the proof of theorem [ theorem_polygon ] , we can prove the following theorem . in the proof , we do not need to consider a dual graph , but we can simply traverse the tree in depth - first order to propagate the information . let be the underlying tree structure of the skeletons and . the linear interpolation between and in shape space for has the following properties : 1 . the skeleton that corresponds to is uniquely defined and has the underlying tree structure . we can compute this tree using a depth - first traversal of the tree in time . 2 . if and are isometric , then is isometric to and . if and are not perfectly isometric , then each edge length of linearly interpolates between the corresponding edge lengths of and . 3 . the coordinates of the vertices of are a continuous function of . [ theorem_skeleton ] this theorem allows us to morph isometrically between the skeletons of two shapes corresponding to two postures of the same articulated object in time . figure [ skeleton ] shows an example of such a morph . we presented an approach to morph efficiently between isometric poses of triangular meshes in a novel shape space . the main advantage of this morphing method is that the most isometric morph is always found in linear time when triangulated polygons are considered .
for general triangular meshes , the approach can not be proven to find the optimal solution .however , this paper presents an efficient heuristic approach to find a morph for general triangular meshes that does not depend on solving a non - linear optimization problem .the presented experimental results demonstrate that the heuristic approach yields visually pleasing results .the approach is not invariant with respect to the order of the vertices of the mesh , but can be modified to have this property at the cost of a higher running time .an interesting direction for future work is to find an efficient way of morphing triangular meshes while guaranteeing that no self - intersections occur . for polygons in two dimensions , this problem was solved using an approach based on energy minimization .we thank martin kilian for sharing his insight on the topic of shape spaces with us .we thank pengcheng xi for providing us the data for the head experiment .thomas w. sederberg , peisheng gao , guojin wang , and hong mu .2-d shape blending : an intrinsic solution to the vertex path problem . in _siggraph 93 : proceedings of the 20th annual conference on computer graphics and interactive techniques _ , pages 1518 , 1993 .
|
we present a novel approach to morph between two isometric poses of the same non - rigid object given as triangular meshes . we model the morphs as linear interpolations in a suitable shape space . for triangulated polygons , we prove that interpolating linearly in this shape space corresponds to the most isometric morph in . we then extend this shape space to arbitrary triangulations in using a heuristic approach and show the practical use of the approach using experiments . furthermore , we discuss a modified shape space that is useful for isometric skeleton morphing . all of the newly presented approaches solve the morphing problem without the need to solve a minimization problem .
|
nonuniform codes are known to be good in terms of approaching the capacity of a channel affected by additive white gaussian noise ( awgn ) .they have been used already in early - state signal transmission , _e.g. _ , in the so - called _ codec _ transmission , and are present more recently in the digital video broadcasting next generation handheld ( dvb - ngh ) standard . unfortunately , nonuniform codes are in general subject to brute - force maximum - likelihood ( ml ) decoding methods , resulting in a linear decoding complexity in the codebook size .recently , in and , a family of nonlinear and nonuniform _ fuchsian codes _ were constructed based on fuchsian groups of the first kind defined from quaternion algebras over .the decoding procedure of these codes is based on a modified point reduction algorithm having logarithmic complexity in the codebook size , which was shown to imply logarithmic decoding complexity for the fuchsian codes , . in this paper, we will study one of the key features of a _ code _ , namely its ability to carry information .in other words , how many bits per channel use we will be able to send when using a given code . here, we will do this in conjunction with fuchsian codes . to this end , let us start with the following intuitive definition , which will be formalized later ( cf .[ defncoderate ] ) .let denote a codebook of size , and let denote the number of independent integers embedded in each codeword .the _ code rate _ is ( independent integer ) symbols per channel use ( spcu ) .the _ data rate _ is bits per channel use ( bpcu ) .the code rate can be thought of as the ability of the code to compress information , and the data rate as the transmission speed enabled by the code . when encoding over multiple channel uses , as is the case for lattice codes and space time codes , the suitable definition for code rate is , where is the number of channel uses . with ,this coincides with the definition above .apart from the sub - linear decoding complexity , another advantage of the fuchsian codes in is that they allow , in the sense of the above definition , to compress information .namely , they enable us to embed three independent integers in one complex number to be transmitted , having thus code rate . in comparison to the usual way of transmitting a complex signal , _e.g. _ , by using the quadrature amplitude modulation ( qam ) consisting of a finite subset of gaussian integers =\{c = a+bi\ , |\ , a , b\in\mathbb{z}\}\ ] ] embedding only two independent integers , the rate of a fuchsian code is 50% higher .the algebraic reason for the higher rate will become evident in section 3 . in what follows , we will consider quaternion algebras defined over totally real field extensions of of degree bigger than . as we will see, this implies that we can increase the code rate even further .since the point reduction algorithm works in general for arithmetic fuchsian groups , we have adapted it to some explicit groups derived from the arithmetic fuchsian groups of signature .these groups allow us to construct fuchsian codes with higher rates .we will show explicit examples of a rate six code , and our method can indeed produce fully explicit codes of rate up to .the motivation for increasing the code rate initially came for lattice coding . for lattice codes ,higher code rate typically implies higher data rates ( or equivalently , bigger codebook , cf .def . 1.1 . ) without having to increase the transmission power or to compromise the minimum distance . 
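as a small , hedged numeric aside ( not a computation from any of the cited papers ) , the snippet below illustrates definition 1.1 and the lattice - point intuition discussed in the next paragraph : the data rate grows with the codebook size , and for a fixed power budget ( euclidean radius ) a higher - rank integer lattice contains far more points , i.e. , it admits a larger codebook at the same power .

```python
import math
from itertools import product

def data_rate_bpcu(codebook_size, channel_uses=1):
    # data rate in bits per channel use for a codebook of the given size
    return math.log2(codebook_size) / channel_uses

def integer_points_in_ball(dim, radius):
    # count points of the integer lattice of rank `dim` with euclidean norm <= radius
    r = int(radius)
    rng = range(-r, r + 1)
    return sum(1 for p in product(rng, repeat=dim)
               if sum(x * x for x in p) <= radius * radius)

print(data_rate_bpcu(16))                    # e.g. 16 codewords -> 4.0 bpcu
for d in (1, 2, 3):                          # same radius, more points as the rank grows
    print(d, integer_points_in_ball(d, 10))  # 21, 317, ... for d = 1, 2, 3
```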
as the transmission power is determined by the euclidean norm of the transmitted codeword , this results from the fact that a higher rank lattice with a unit volume has more points within a euclidean hypercube of a given edge length than a lower rank lattice with a unit volume . to get an intuition, one can think of how many integer points are there in the real line between , say , 0 and 10 , versus how many integral points are there in a cube in .the same is naturally valid for euclidean hyperspheres .therefore , it is desirable to maximize the code rate .for the proposed higher rate fuchsian codes there is a caveat : due to the nonlinear and nonuniform structure , there is no _ a priori _ reason why higher rate should imply a bigger codebook .it seems difficult to give a rigorous proof for this , so we have settled with numerical experiments to see how the rate affects the codebook size . in our example cases ,higher rate seems to indeed imply a bigger codebook , given the minimum distance and the hypersphere radius .our interest in fuchsian groups as a basis for code construction stems from a series of recent papers by palazzo __ in , among others , various interesting connections between fuchsian groups and signal constellation design are presented . in ,the authors construct fuchsian groups suitable for signal constellation design . in ,the authors consider the unit disk model of the hyperbolic plane as the signal space , and the noise is modeled as a hyperbolic gaussian random variable . by using some results of hyperbolic geometry they construct a hyperbolic equivalent to qam and psk constellations and point out that , when the channel model is hyperbolic ( this is the case _e.g. _ in power transmission line communications ) , the proposed hyperbolic constellations provide higher coding gains than the classical euclidean variants . building on this work , in authors construct dense tessellations and count dirichlet domains attached to certain families of these tessellations . in authors use units of quaternion orders to construct space - time matrices with the potential use case being wireless multi - antenna ( mimo ) communications .we refer the reader to as the early references to the use of division algebras and maximal orders in mimo .although codes related to fuchsian groups had been considered before , our approach in was original in that it described a complete construction and decoding process , whereas earlier work had largely concentrated on the constellation design while giving little attention to the decoding and performance aspects .another key difference to the aforementioned works was , as is the case of the present paper , that we are studying codes on the _ complex plane _ arising from quaternion algebras and fuchsian groups , and our aim is to apply the codes to the classical ( euclidean ) channel models such as the aforementioned awgn channel , with possible future extension to fading channels .we do not use hyperbolic metric as our design metric , but use the fuchsian group as a starting point to the code generation .nevertheless , our decoder will rely on hyperbolic geometry as opposed to the classical decoders based on euclidean geometry .the paper is organized as follows : in section 2 we give some background and notation on fuchsian groups acting on the complex upper half - plane , specially those coming from quaternion algebras over a number field , in a more general setting than and . 
in section 3 , we generalize the construction of fuchsian codes in order to obtain codes of arbitrarily high rates . in particular , by using quaternion algebras over totally real extensions , we prove that the code rate is at least , where ] .we will assume that is a division algebra , that is , and that ramifies precisely at out of the completions of with respect to the galois embeddings of into , the archimedean places .namely , satisfies the following condition in the case of quaternion -algebras , this means that is an indefinite quaternion algebra .such algebras were used to consider fuchsian groups and to define associated fuchsian codes in .[ reg - rep ] the following map is a monomorphism of -algebras , giving a left regular representation of the quaternion algebra in a matrix algebra : notice that for any , , and . if , then . in particular ,if we restrict to quaternion elements in with reduced norm equal to 1 , then the image under is contained in .an -order in is a finitely generated -submodule , _ i.e. _ , a subring such that .the elements in are integral , namely for any , its characteristic polynomial has coefficients in . for an order , a fuchsian group is obtained by using the map : in the case of the base field being , it is interesting to consider eichler orders .eichler orders are intersections of two maximal orders . when is indefinite , the discriminant of the quaternion algebra is defined as the product of all finite primes where is ramified .the discriminant is an invariant of the quaternion algebra .the group is denoted by , where is the level of the eichler order , and for a maximal order .the group is well - defined up to conjugation . after weil s results on the classification of classical groups ,the list of all arithmetic subgroups of is exhausted up to commensurability by fuchsian groups coming from quaternion algebras over totally real number fields , cf . , where two groups and are said to be commensurable if has finite index both in and in .thus the fuchsian groups derived from quaternion algebras are in the main focus when studying tessellations .fuchsian groups derived from quaternion algebras and their quotients also led to the theory of shimura curves in the sixties .the case of matrix algebras corresponds to classical modular curves .since we assumed that is a division algebra , the quotient is already compact and there are no cusps . thus the classical problems of finding fundamental domains and reducing points to a given fundamental domain call for algorithmic solutions different from those available for the modular case . for more results on fundamental domains ,see , , .as already mentioned , a general algorithm for the point reduction problem was recently proposed in .some specific examples can be found in . as the decoding of fuchsian codesis based on the point reduction algorithm , let us summarize here its main features .see for validity and complexity proofs .first , let us recall that the construction of a fundamental domain for following fords s method ( cf . ) is based on the use of _ isometric circles _ , _i.e. _ , hyperbolic lines in the upper half - plane associated to the matrices : [ mdef ] [ rem - g ] for any fuchsian group and any fixed fundamental domain , let us denote by the minimal subset of such that the edges of are included in the set of isometric circles defined by the elements of . 
as a presentation of the group arises from the pairing of the edges , we can assume that the generators of are included in . the set splits into two subsets denoted by and according to whether the fixed fundamental domain is located in the interior or in the exterior of each isometric circle , respectively . hence , if and denote the exterior and the interior of the isometric circle , the fundamental domain is the closure of . now we are ready to introduce the point reduction algorithm ( pra ) . it gives a solution to the point reduction problem ; namely , it reduces a given point to a point , and yields a transformation such that . [ algorithm ] * pra ( point reduction algorithm ) * + * * step 1 * initialize : and . * * step 2 * check if . if , return and . if , return such that : , if , if . * * step 3 * compute and . go to step 2 . the pra will be used for the decoding of fuchsian codes in the sequel . in order to study the complexity of the algorithm when applied to fuchsian codes , we state the following remark and definition . consider , , . by construction , running the pra with input or will output different points or , respectively , but the same matrix , in the same number of steps . [ depth ] given a matrix , the _ depth _ of , denoted by , is the minimal number of iterations of the pra to reduce to the fundamental domain for any . in , we described in detail how to construct and decode fuchsian codes in an awgn channel . in order to make this paper self - contained , we briefly restate the process in this more general setting , focusing on some essential properties . let be a fuchsian group as in section 2.1 . the first step in the construction of the code is to fix a fundamental domain , which determines a tessellation of the complex upper half - plane , and a set whose corresponding isometry circles are the edges of the fundamental domain , following the notation in section 2 . the main step in the process is to choose a set of different elements in , , which is equivalent to choosing different tiles in the tessellation . moreover , we choose to be an interior point of ; this condition ensures that for all .
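as a brief aside , here is a minimal , hedged sketch of the reduction loop behind the pra described above . it assumes the generators are given as real matrices acting on the upper half - plane by möbius transformations , uses the standard fact that the isometric circle of a matrix with lower row ( c , d ) , c nonzero , is the locus | cz + d | = 1 ( applying the matrix when | cz + d | < 1 strictly increases the imaginary part ) , and keeps the membership test for the fundamental domain abstract . the selection rule below is the classical ford - style one and glosses over the distinction the paper makes between generators whose isometric circles have the fundamental domain in their interior or exterior ; names and conventions ( e.g. , whether one records the accumulated matrix or its inverse ) are illustrative assumptions .

```python
import numpy as np

def mobius(g, z):
    # action of a 2x2 real matrix g on a point z of the upper half - plane
    a, b, c, d = g[0, 0], g[0, 1], g[1, 0], g[1, 1]
    return (a * z + b) / (c * z + d)

def point_reduction(z, generators, in_fundamental_domain, max_iter=100_000):
    # reduce z to the fixed fundamental domain and accumulate the applied matrices .
    # `generators` : finite list of 2x2 numpy arrays whose isometric circles bound the domain .
    # `in_fundamental_domain` : user - supplied membership test for the chosen domain .
    m = np.eye(2)
    for _ in range(max_iter):
        if in_fundamental_domain(z):
            return z, m
        # pick a generator whose isometric circle contains z strictly in its interior ;
        # applying it increases im(z) , which is the usual termination argument .
        g = next((h for h in generators if abs(h[1, 0] * z + h[1, 1]) < 1.0), None)
        if g is None:
            raise RuntimeError("no applicable generator ; the generator set may be incomplete")
        z = mobius(g, z)
        m = g @ m
    raise RuntimeError("reduction did not terminate within the iteration budget")
```

we now return to the construction of the codebook .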
finally ,considering the action of the group in the complex upper half - plane , we obtain the codewords in .the condition on ensures .we can double the number of points by expanding to the lower half - plane in a natural way by including the opposites .thus , the codebook consists of the complex points constructed by using and , and the symmetry with respect to the origin .based on the outlined process , we give a formal definition of a fuchsian code below , and summarize the construction process in table 1 .let be a fuchsian group .given a fundamental domain , a set , and a point in the interior of , we define the associated _ fuchsian code _ as .the set of codewords is also referred as a -_nonuniform fuchsian constellation _, -nuf in short , where is the size of the code .the point will be called the _ center _ of the code .[ construction - figure ] this construction was stated in in the case of groups derived from quaternion algebras over .we refer the interested reader there for explicit examples , including details about representations of the groups , fundamental domains , centers of the codes , lists of codewords , as well as some experimental results .the general construction stated above now allows us to construct more general codes , as long as we are able to determine the respective fundamental domains .in particular , it can be applied to groups derived from quaternion algebras over a totally real field . the behavior and performance of fuchsian codeswill essentially depend on the choices in the intermediate step .an algebraic and geometric study of fuchsian groups and their fundamental domains will therefore be useful for developing a general understanding of various code parameters , such as the minimum distance and average transmission power .we refer again to for a more detailed exposition .let be a -nuf constellation with center , associated to a fixed fuchsian group with a fixed fundamental domain .it is clear that given , , the pra described in section 2.3 computes such that , which is equivalent to finding the tile containing in the tessellation of induced by the fundamental domain . if , it is enough to consider and then apply the pra . in the context of awgn channels ,let be the transmitted codeword and the received signal , , where is the gaussian noise .the basic idea underlying our decoding technique is that , provided that the channel is of sufficiently good quality , the received signal will belong to the tile determined by . in other words , when we apply the pra to , it will return and the transmitted codeword can be recovered . in order to measure the decoding complexitywhen employing the pra , we define ( also cf .[ depth ] ) : the _ depth _ of the code is .next , we describe the encoding and decoding process of fuchsian codes in detail . in order to remain in the upper half - plane whilst decoding , we initialize the algorithm with if , and with if .since has measure zero in , the case occurs with probability zero .[ algorithm ] * encoding and decoding of fuchsian codes * + * * step 1 * assign a matrix .* * step 2 * compute the codeword . ** step 3 * transmit using the awgn channel .the receiver obtains , where is the gaussian noise . ** step 4 * decode the signal : if , apply pra to y , obtain . 
if , save the sign information , , apply pra to , obtain .* * step 5 * .the following theorem proves the existence of fuchsian codes with logarithmic decoding complexity .let be a fuchsian group containing a non - abelian free subgroup .there exist fuchsian codes associated to such that the decoding algorithm for has logarithmic complexity in , namely , the number of arithmetic operations satisfies the proof is analogous to the corresponding results in , so we only provide a sketch of the proof .the first part of the proof is to count the maximal number of arithmetic operations when running the pra . in each iteration , we got that the number of operations only depends on the fundamental domain , and not on the code size .as the number of iteration is bounded by the depth , . secondly , by using the technical condition given by the existence of a non - abelian free subgroup , a fuchsian code can be constructed in such a way that .the key point is to choose as large as possible while controlling the depth of their elements .combining these two parts , we deduce the existence of fuchsian codes with logarithmic complexity .actually , for a fixed fuchsian group the complexity of the decoding algorithm depends only on the selection of the subset .the choice of the center of the code will influence the performance of the code , being related to the minimum border distance , as stated in ( see the code design criterion therein ) .let us reduce to the case of fuchsian groups derived from quaternion algebras over , as in section [ sec - alg ] . to this end , let be a totally real number field with ring of integers and =n ] , a complex number to be transmitted is identified with the matrix ( recall that we take an interior point of the fundamental domain ) .writing with , we can identify with the -tuple , which is subject to the normic equation thus , the -tuple consists of algebraically independent integers .this is equivalent to say that the set of -tuples satisfying the normic equation has algebraic degrees of freedom .notice that this is precisely the dimension of the algebraic set defined by the normic equation ( which is not empty , since it contains all the infinite -tuples attached to the fuchsian group ) .the concept of algebraic code rate , denoted , will be hence defined so that for this code we have symbols per channel use ( spcu ) .we can easily generalize this notion of code rate for fuchsian codes over totally real number fields .let be a totally real number field of degree with ring of integers , let be a quaternion -algebra satisfying condition [ cond - quat - alg ] and , with a maximal -order .each matrix has the form with .fixing a -basis of , we can identify with an -tuple and analogously with , , and .hence , we can identify the matrix with a -tuple .the fact that the -tuple satisfies the normic equation ( defined over ) is equivalent to the fact that the corresponding -tuple satisfies a certain system of polynomial equations ( defined over ) .this system of polynomial equations defines an algebraic set which we denote by .notice that this algebraic set has infinitely many elements , hence , its algebraic dimension is well defined .the algebraic code rate in symbols per channel use ( spcu ) , or code rate from now on , of the fuchsian code attached to a fuchsian group satisfying condition [ cond - quat - alg ] is the algebraic dimension of the algebraic set .[ defncoderate ] the code rate defined this way , measures how many degrees of freedoms are there in the set of -tuples 
attached to the fuchsian group .we can think of the code rate , hence , as the number of algebraically independent ( non - redundant ) symbols in each -tuple , or as we said in the introduction , the maximal number of independent symbols embedded in each codeword , with .notice that if an undetermined system of polynomial equations in variables has solutions , then the set of all complex solutions is an algebraic set of dimension at least . in particular , the code rate of a fuchsian code will be at least , being the number of equations defined over which are equivalent to the normic equation , which is defined over .the main result on the existence of fuchsian codes of arbitrarily high rates is the following theorem .the proof consists on proving the two propositions stated after the theorem .[ thm - rate ] let be a totally real number field of degree . there exist infinitely many fuchsian codes with rate at least attached to . in order to prove our main theorem we first prove that , for a fixed quaternion algebra satisfying [ cond - quat - alg ] , there are fuchsian codes of rate at least .this is proposition [ prop - rate-3n ] .second , we prove the existence of such quaternion algebras in proposition [ ex - quat - alg ] .[ prop - rate-3n ] let be a totally real number field of degree , with ring of integers , and a quaternion -algebra satisfying condition [ cond - quat - alg ] . then, a fuchsian code associated to the natural order ] of the quaternion algebra .then is determined by -tuples satisfying .let be a -basis of with .writing , , , , each of these algebraic integers can be identified with its coordinates in the integral basis .thus any 4-tuple can be identified as a -tuple of rational integers .let us expand the normic equation such that it corresponds to a system of polynomial equations defined over .we set , with ] quadratic homogeneous polynomials .the condition now becomes equivalent to this system defines the algebraic set . 
since a -tuple corresponding to an element of the fuchsian group bears restrictions , and the algebraic set contains infinitely many solutions , we see that the algebraic dimension of , or equivalently , the code rate of the fuchsian code attached to is at least spcu , proving the proposition .the following proposition addresses the question whether there exist quaternion algebras to which the above proposition can be applied .[ ex - quat - alg ] let be a totally real number field of degree .there exist infinitely many quaternion algebras over satisfying the condition [ cond - quat - alg ] .if is odd , define to be the set of all but one archimedean absolute values of .otherwise , define as the set of all but one archimedean absolute places and add a non - archimedean absolute value attached to a prime ideal of .thus , is of even cardinality and , by the well - known classification theorem of quaternion algebras , there exists a unique quaternion algebra up to isomorphism such that ramifies exactly for each .this is equivalent to say for all .therefore splits only at one archimedean absolute value satisfying the condition [ cond - quat - alg ] .the corresponding result holds for constructed from by adding an even number of non - archimedean absolute values as before .therefore , there exists infinitely many quaternion algebras satisfying the desired condition .let be an odd prime , a primitive -th root of unity , and consider the -th cyclotomic field , =p-1 ] .this will allow us to construct infinitely many fuchsian codes of rate at least provided that we can find quaternion algebras over satisfying the required condition , namely .let us denote by the set of all possible absolute values attached to , archimedean or not .the following technical lemma will be used in order to make easier the construction of quaternion algebras over the field satisfying the condition [ cond - quat - alg ] and to be more explicit in the description of such quaternion -algebras over which we can construct fuchsian codes .let be a number field , and a finite subset of even cardinality .suppose that is a non - square element in for any .then , there exists an element such that the quaternion algebra splits precisely at the absolute values .[ wiese ] for every prime , there exists such that [ infnite ] let us define a set of absolute values of places in the following way : * if , take to be the set of all archimedean absolute values minus one . *if , take to be the set of all archimedean absolute values minus one , adjoining a finite place , , attached to a prime ideal of over a rational prime such that * * , * * .+ notice that such a prime always exists , due to the chinese remainder theorem .indeed there are infinitely many , due to the theorem by dirichlet on primes in arithmetic progressions . in the first case, contains only archimedean absolute values . since for any , , it turns out that is not a square in , and by applying lemma [ wiese ] , we have that there exists such that . in the second case , likewise , for all the archimedean places , is not a square in . the same hold for the the non - archimedean absolute value : first , by a well known result on cyclotomic fields ( cf . 
2.13 ) , factors in distinct prime ideals in .hence , will factor in distinct prime ideals in , i.e , is totally split .hence , denoting by the ring of integers of , and ( abusing notation ) , by the unique maximal ideal of , we have that .hence , if were a square in ( hence in ) , by reducing modulo , we would have that , a contradiction with the fact that . this way , is a non - square for each absolute value , and applying lemma [ wiese ] again , the result holds .applying theorem [ thm - rate ] and this proposition , we deduce the following corollary . for any odd prime ,such that , there exist infinitely many fuchsian codes with rate at least , related to .the corresponding quaternion algebras can be taken of the form , for an infinite family of elements . in the proof of proposition [ infnite ], we have freedom to choose among infinitely many primes satisfying the two conditions * , * . for each of these primes , we choose a prime ideal above , yielding a quaternion algebra , ramifying at and at some archimedean primes . since, due to the classification theorem , quaternion algebras ramifying at the same places are isomorphic , the elements have to be distinct all of them .now , since the ring of integers of is ] , where . for a fixed ,the previous proposition provides a method for choosing quaternion algebras over in order to construct fuchsian codes with rate .however , there are also other means to construct fuchsian codes . for instance , for , by using magma , we see that the quaternion algebra over given by also satisfies the conditions and leads to fuchsian codes of rate at least .in this section , we give an explicit construction for a particular family of fuchsian groups derived from the so called arithmetic fuchsian groups of signature . some of these fuchsian groups were also considered in . here, we will deal with some fuchsian groups derived from the arithmetic fuchsian groups of signature defined over totally real fields . in the first subsection, we explicitly construct the fuchsian codes attached to the arithmetic fuchsian codes of signature , while in the second subsection we provide numerical data to demonstrate that , at least in some example cases , higher code rate allows us to increase the codebook size ( equivalently , data rate ) for a fixed minimum distance and maximum transmission power ( cf . section 1 ) .future work consists of giving a rigorous proof for this fact .arithmetic fuchsian groups were characterized in by takeuchi , who moreover classified and gave a complete list of them in the case of signature in , determining the associated quaternion algebra up to isomorphism .we summarize below the main properties useful for this paper .we refer to the same references for algebraic details and proofs .let be an arithmetic fuchsian group of signature associated to a division quaternion algebra .we can assume .then , a. the genus of the compact riemmann surface is 1 .b. there exist satisfying and , such that the group admits a presentation of the form c. a fundamental triple of generators of is uniquely determined by , up to -conjugation .[ prop - t2 ] consider a group as above , determined by the generators and as above .denote by the subgroup of generated by .then , a. is a normal subgroup of , and =4 $ ] .b. .c. is a fuchsian group derived from a quaternion algebra , where , and . 
in particular , is contained in the image by the regular representation of the group of units of reduced norm of a maximal order of . the generators for the groups will be made explicit by using the following result on the groups , proved by sijsling , cf . . [ sijsling ] [ generatorstak ] let be an arithmetic fuchsian group of signature generated by and . then , after a change of variables , we can suppose that with an algebraic integer and . we are interested in the explicit construction of codes attached to for several groups in the list given by takeuchi . table [ table1 ] gives the parameters of the sample groups we will consider , using the same notation as in the previous propositions . for each group , the code construction process can be made explicit . first , we can easily find the generators of , by applying proposition [ sijsling ] to compute the explicit matrices and . namely , given the trace triple , the equation will determine ; then , we obtain by solving ; and finally , will give , determining . thus , we have an explicit presentation of each group . the next step to construct our codes is to determine a fundamental domain for in each case . fundamental domains for several arithmetic fuchsian groups over can be found in . for the general totally real case , it is more complicated . in the present work , we have effectively computed them with the aid of _ mathematica _ by using the explicit generators of the groups computed above . let us consider . the generators of the group , given in proposition [ prop - t2 ] , are obtained from : a fundamental domain for is displayed in figure [ fdtal ] . its edges are given by the isometric circles of the following transformations : now we turn our attention to the design of the codes . we fix as the center of the code , and for each code size , we have chosen a set of matrices with the aim of minimizing the average energy . we display in tables [ table4cons ] and [ table16cons ] the choices for the set of elements for the sample groups , giving 4-nuf and 16-nuf constellations . ( table : choices for the -nuf codes . )
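as a hedged illustration ( with placeholder matrices , not the entries from the tables above ) , the snippet below shows how a nuf constellation and its average energy could be tabulated from a chosen set of matrices and a center : each codeword is the image of the center under the corresponding möbius transformation , together with its opposite .

```python
import numpy as np

def constellation(matrices, tau):
    # codewords gamma(tau) for the chosen matrices , plus their opposites -gamma(tau)
    points = []
    for g in matrices:
        a, b, c, d = g[0][0], g[0][1], g[1][0], g[1][1]
        x = (a * tau + b) / (c * tau + d)
        points.extend([x, -x])
    return points

def average_energy(points):
    return float(np.mean([abs(x) ** 2 for x in points]))

tau = 1j                                      # a center in the upper half - plane
gammas = [np.eye(2), np.array([[2.0, 1.0],    # placeholder matrices only ,
                               [1.0, 1.0]])]  # not taken from the paper's tables
pts = constellation(gammas, tau)
print(len(pts), average_energy(pts))
```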
finally , the relevance of the code rate in terms of information compression and data rate has been numerically demonstrated .further research will consist of rigorously finding and proving a relation between the code rate and the data rate , and of the construction of an error - correcting outer system for our codes .possible enabler of this is the excess of code rate that could be alternatively utilized for error correction , e.g. , by using some kind of an analogy of a parity - check method .this would make our codes more suitable for the low - moderate snr regime ( snr stands for the signal - to - noise ratio describing the channel quality ) , while at the moment the relevant application is the high - snr regime .one instance of this is an optic - fiber channel .blanco - chacn , i. ; hollanti , c. ; remn , d. : fuchsian codes for awgn channels . preproceedings . the international workshop on coding and cryptography , wcc 2013 .p. 496507 .bergen ( 2013 ) .isbn : 978 - 82 - 308 - 2269 - 2 .digital video broadcasting consortium , _dvb.org_. de souza , m. , faria , m.b . , palazzo , r. , firer , m. : edge - pairing isometries and counting dirichlet domains on the densest tessellation ( 12g-6,3 ) for signal set design .journal of the franklin institute * 349 * , 11391152 ( 2012 ) vieira , v.l ., palazzo , r. , faria , m.b . : on the arithmetic fuchsian groups derived from quaternion orders .proceedings of the international telecommunications symposium ( its 2006 ) , fortaleza - ce ( brazil ) ( 2006 )
|
recently , so - called fuchsian codes have been proposed in [ i. blanco - chacn et al . , `` nonuniform fuchsian codes for noisy channels '' , j. of the franklin institute 2014 ] for communication over channels subject to additive white gaussian noise ( awgn ) . the two main advantages of fuchsian codes are their ability to compress information , i.e. , high code rate , and their logarithmic decoding complexity . in this paper , we improve the first property further by constructing fuchsian codes with arbitrarily high code rates while maintaining logarithmic decoding complexity . namely , in the case of fuchsian groups derived from quaternion algebras over totally real fields we obtain a code rate that is proportional to the degree of the base field . in particular , we consider arithmetic fuchsian groups of signature to construct explicit codes having code rate six , meaning that we can transmit six independent integers during one channel use . = eufm10 scaled 0 = msbm10 scaled 0 = cmr10 scaled 0 = cmb10 scaled 0
|
for many random constraint satisfaction problems ( csp ) , such as random graph coloring , random -sat , random max -sat , and hypergraph 2-coloring , by now , we have asymptotically tight estimates for the largest constraint density for which typical instances have solutions ( see ) . at the same time , all known efficient algorithms for each problem fare very poorly , i.e. , they stop finding solutions at constraint densities _ much _ lower than those for which we can prove that solutions exist . adding insult to injury , the best known algorithm for each problem asymptotically fares no better than certain extremely naive algorithms for the problem . for example , it has been known for nearly twenty years that the following very simple algorithm will find a satisfying assignment of a random -cnf formula with clauses for : if there is a unit clause , satisfy it ; otherwise assign a random value to a random unassigned variable . while it is known that random -cnf remain satisfiable for , no polynomial - time algorithm is known to find satisfying assignments for for some function . similarly , for all , the following algorithm will -color a random graph with average degree : select a random vertex with the fewest available colors left and assign it a random available color . while it is known that random graphs remain -colorable for , no polynomial - time algorithm is known that can -color a random graph of average degree for some fixed and arbitrarily large . equivalently , while it is trivial to color a random graph using twice as many colors as its chromatic number , no polynomial - time algorithm is known that can get by with colors , for some fixed . random -sat and random graph coloring are not alone . in fact , for nearly every random csp of interest , the known results establish a completely analogous state of the art : 1 . [ u ] there is a trivial upper bound on the largest constraint density for which solutions exist . 2 . [ l ] there is a non - constructive proof , usually via the second moment method , that the bound from ( [ u ] ) is essentially tight , i.e. , that solutions do exist for densities nearly as high as the trivial upper bound . 3 . [ s ] some simple algorithm finds solutions up to a constraint density much below the one from ( [ l ] ) . 4 . [ a ] no polynomial - time algorithm is known to succeed for a density asymptotically greater than that in ( [ s ] ) . in this paper we prove that this is not a coincidence . namely , for random graph coloring , random -sat , and random hypergraph 2-coloring , we prove that the point where all known algorithms stop is precisely the point where the geometry of the space of solutions undergoes a dramatic change . this is known as a `` dynamical '' phase transition in statistical physics and our results establish rigorously for random csps a large part of the `` 1-step replica symmetry breaking '' hypothesis . roughly speaking , this hypothesis asserts that while the set of solutions for low densities looks like a giant ball , at some critical point this ball shatters into exponentially many pieces that are far apart from one another and separated by huge `` energy barriers '' . algorithms ( even extremely simple ones ) have no problem finding solutions in the `` ball '' regime , but no algorithm is known that can find solutions in the `` error - correcting code '' regime . we believe that the presence of dynamical phase transitions in random csps is a very general phenomenon , whose qualitative characteristics should be problem - independent , i.e.
, _ universal_. the fact that we can establish the exact same qualitative picture for a problem with binary constraints over -ary variables ( random graph -coloring ) and a problem with -ary constraints over binary variables ( hypergraph 2-colorability ) certainly lends support to this notion .that said , we wish to emphasize that determining for each random csp the location of its dynamical phase transition ( as we do in this paper for the three problems mentioned , in order to show that the transition coincides with the demise of all known algorithms ) requires non - trivial , problem - specific ideas and computations .perhaps the following is an intuitive model of how a dynamical phase transition comes about . in random graph coloring , rather than thinking of the number of available colors as fixed and the constraint density ( number of edges ) as increasing ,imagine that we keep the constraint density fixed , but we keep decreasing the number of available colors .if we start with available colors where , it is reasonable to imagine that the set of valid -colorings , viewed as a subset of , has a nice `` round '' shape , the rounder the greater is relative to . by the same token ,when we restrict our attention to the set of those -colorings that only use colors , we are taking a slice of the set of -colorings . with each slicing the connectivity of the set at handsdeteriorates , until at some point the set shatters .for example , slicing the 2-dimensional unit sphere through the origin yields a circle , but slicing the circle , yields a pair of points .we conclude the introduction with a few words about the technical foundation for our work . to prove the existence ( and determine the location ) of a dynamical phase transition oneneeds access to statistical properties of the uniform measure over solutions .a geometric way of thinking about this is as follows .given a csp instance , say a -cnf formula with clauses chosen uniformly at random , consider the function on that assigns to each truth assignment the number of clauses it violates .in this manner , defines a `` landscape '' in which satisfying assignments correspond to valleys at sea - level .understanding statistical properties of the uniform measure over solutions amounts to understanding `` the view '' one enjoys from such a valley , a probabilistically formidable task .as we discuss in section [ departure ] , we can establish the following : the number of solutions of a random csp is sufficiently concentrated around its exponentially large expectation for the view from a random sea - level valley to be `` the same '' as the view from an `` artificial '' valley .that is , from the valley that results by first selecting a random and then forming a random formula , also with clauses , but now chosen uniformly among the clauses satisfied by , i.e. , the view from the _ planted _ satisfying assignment in the planted model .this is a _ much _ easier view to understand and we believe that the `` transfer '' theorems we establish in this paper will significantly aid in the analysis of random csps .to present our results in a uniform manner we need to introduce some common notions .let be a set of variables , all with the same domain , and let be an arbitrary set of constraints over the variables in .a csp instance is a subset of .we let denote the hamming distance between and we turn into a graph by saying that are adjacent if . 
for a given instance , we let be the function counting the number of constraints of violated by each .the * height * of a path is .we say that is a solution of an instance , if .we will denote by the set of all solutions of an instance .the * clusters * of an instance are the connected components of .a * region * is a non - empty union of clusters .the term cluster comes from physics .requiring to say that are adjacent is somewhat arbitrary ( but conceptually simplest ) and a number of our results hold if one replaces 1 with .we will be interested in distributions of csp instances as the number of variables grows .the set will typically consist of all possible constraints of a certain type , e.g. , the set of all possible hyperedges in the problem of 2-coloring random -uniform hypegraphs .we let denote the set of all csp instances with precisely distinct constraints from and we let denote the uniform distribution on the set of all instances .we will say that a sequence of events holds _ with high probability _ ( w.h.p . ) if = 1 ] . as per standardpractice in the study of random structures , we will take the liberty of writing to denote the underlying random variable and , thus , write things like the probability that ... " [ def_icy ] we say that the set of solutions of * shatters * if there exist constants such that w.h.p . can be partitioned into regions so that : 1 .the number of regions is at least .2 . each region contains at most an fraction of all solutions .the hamming distance between any two regions is at least .every path between vertices in distinct regions has height at least .our first main result asserts that the space of solutions for random graph coloring , random -sat , and random hypergraph 2-colorability shatters and that this shattering occurs just above the largest density for which any polynomial - time algorithm is known to find solutions for the corresponding problem .moreover , we prove that the space remains shattered until , essentially , the csp s satisfiability threshold . more precisely : a random graph with average degree , i.e. , , is w.h.p .-colorable for , where .the best poly - time -coloring algorithm w.h.p . fails for , where .[ thm_icy_col ] there exists a sequence , such that the space of -colorings of a random graph with average degree shatters for all a random -cnf formula with variables and clauses is w.h.p .satisfiable for . the best poly - time satisfiability algorithm w.h.p. fails for . in , non - rigorous , but mathematically sophisticated evidenceis given that a different algorithm succeeds for , but not higher .[ thm_icy_sat ] there exists a sequence such that the space of satisfying assignments of a random -cnf formula with clauses shatters for all a random -uniform hypergraph with variables and edges is w.h.p .2-colorable for .the best poly - time 2-coloring algorithm w.h.p. fails for . 
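For reference, here is a hedged sketch of the kind of naive algorithm these theorems are compared against: the unit-clause heuristic for random k-SAT quoted in the introduction (satisfy a unit clause if one exists, otherwise set a random unassigned variable to a random value). The instance sizes are illustrative only and the routine does no backtracking, so it may simply get stuck.

```python
import random

def random_kcnf(n, m, k=3, rng=random):
    """m clauses, each over k distinct variables from 1..n with random signs."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), k)) for _ in range(m)]

def unit_clause_heuristic(clauses, n, rng=random):
    """Satisfy a unit clause if one exists, otherwise set a random unassigned
    variable to a random value; no backtracking (returns None if it gets stuck)."""
    assign, live = {}, [list(c) for c in clauses]
    while live or len(assign) < n:
        nxt = []
        for c in live:
            if any(abs(l) in assign and assign[abs(l)] == (l > 0) for l in c):
                continue                       # clause already satisfied
            c = [l for l in c if abs(l) not in assign]
            if not c:
                return None                    # clause falsified: give up
            nxt.append(c)
        live = nxt
        if len(assign) == n:
            break
        units = [c[0] for c in live if len(c) == 1]
        if units:
            l = units[0]
            assign[abs(l)] = (l > 0)
        else:
            v = rng.choice([u for u in range(1, n + 1) if u not in assign])
            assign[v] = rng.random() < 0.5
    return assign

n, m = 200, 400                                # density 2.0, illustrative only
result = unit_clause_heuristic(random_kcnf(n, m), n)
print("satisfying assignment found" if result else "got stuck on this run")
```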
in , non - rigorous , but mathematically sophisticated evidenceis given that a different algorithm succeeds for , but not higher .[ thm_icy_hyper ] there exists a sequence such that the space of 2-colorings of a random -uniform hypergraph with edges shatters for all as the notation in theorems [ thm_icy_col],[thm_icy_sat],[thm_icy_hyper ] is asymptotic in , the stated intervals may be empty for small values of .in this extended abstract we have not optimized the proofs to deliver the smallest values of for which the intervals are non - empty .quick calculations suggest for hypergraph 2-colorability , for -sat , and for -coloring .the regions mentioned in theorems [ thm_icy_col ] , [ thm_icy_sat ] and [ thm_icy_hyper ] can be thought of as forming an error - correcting code in the solution - space of each problem . to make this precise we need to introduce the following definition and formalize the notion of `` a random solution of a random instance '' .given an instance , a solution and a variable , we say that in : * is -*rigid * , if every such that has . * is -*loose * , if for every , there exists such that and .we will prove that while before the phase transition , in a typical solution , every variable is loose , after the phase transition nearly every variable is rigid . to formalize the notion of a random / typical solution , recall that denotes the set of all instances with constraints over variables and let denote the set of all instance solution pairs , i.e. , .we let be the probability distribution induced on by the following : choose an instance uniformly at random .+ if , select uniformly at random .we will refer to instance - solution pairs generated according to as * uniform * instance - solution pairs .we note that although the definition of uniform pairs allows for to be typically empty , i.e. , to be in the typically unsatisfiable regime , we will only employ the definition for constraint densities such that w.h.p . contains exponentially many solutions .hence , our liberty in also using the term a `` typical '' solution .[ thm : froz ] let be a uniform instance - solution pair where : * is a graph with edges , where is as in , and is a -coloring of , or , * is a -cnf formula with clauses , where is as in , and is a satisfying assignment of , or , * is a -uniform hypergraph with edges , where is as in , and is a 2-coloring of . w.h.p .the number of rigid variables in is at least , for some sequence .theorem [ thm : froz ] is tight since for every finite constraint density , a random instance w.h.p . has variables that are not bound by any constraint .the picture drawn by theorem [ thm : froz ] , whereby nearly all variables are rigid in typical solutions above the dynamical phase transition , is in sharp contrast with our results for densities below the transition for graph coloring and hypergraph 2-colorability . while we believe that an analogous picture holds for -sat ,see conjecture [ conj : sat ] , for technical reasons we can not establish this presently .( we discuss the additional difficulties imposed by random -sat in section [ departure ] . )[ thm : loose ] let be a uniform instance - solution pair where : * is a graph with edges , where , and is a -coloring of , or , * is a -uniform hypergraph with edges , where , and is there exists a sequence such that w.h.p .every variable in is -loose .we note that in fact , for all and as in theorem [ thm : loose ] , w.u.p.p . 
is such that changing the color of any vertex to any color only requires changing the color of other vertices .[ conj : sat ] let be a uniform instance - solution pair where is a -cnf formula with clauses , where , and is a satisfying assignment of .there exists a sequence such that w.h.p .every variable in is -loose .attempts for a `` quick improvement '' upon either of the naive algorithms mentioned in the introduction for satisfiability / graph coloring , stumble upon the following general fact . given a csp instance , consider the bipartite graph in which every variable is adjacent to precisely those constraints in which it appears , known as the factor graph of the instance . for random formulas / graphs , factor graphs are locally tree - like , i.e. , for any arbitrarily large constant , the depth- neighborhood of a random vertex is a tree w.h.p . in other words , locally , random csps are trivial , e.g. , random graphs of any finite average degree are locally 2-colorable . moreover , as the constraint density is increased , the factor graphs of random csps get closer and closer to being biregular , so that degree information is not useful either . combined , these two facts render all known algorithms impotent , i.e. , as the density is increased , their asymptotic performance matches that of trivial algorithms .in , mzard , parisi , and zecchina proposed a new satisfiability algorithm called survey propagation ( sp ) which performs extremely well experimentally on instances of random 3-sat .this was very surprising at the time and allowed for optimism that , perhaps , random -sat instances might not be so hard .moreover , sp was generalized to other problems , e.g. , -coloring and max -sat .an experimental evaluation of sp for values of even as small as 5 or 6 is already somewhat problematic , but to the extent it is reliable it strongly suggests that sp does not find solutions for densities as high as those for which solutions are known to exist .perhaps more importantly , it can be shown that for densities at least as high as , if sp can succeed at its main task ( approximating the marginal probability distribution of the variables with respect to the uniform measure over satisfying assignments ) , so can a much simpler algorithm , namely belief propagation ( bp ) , i.e. , dynamic programming on trees .the trouble is that to use either bp or sp to find satisfying assignments one sets variables iteratively .so , even if it is possible to compute approximately correct marginals at the beginning of the execution ( for the entire formula ) , this can stop being the case after some variables are set . concretely , in , montanari et al. showed that ( even within the relatively generous assumptions of statistical physics computations ) the following gibbs - sampling algorithm fails above the barrier , i.e. , step [ bp_comp ] below fails to converge after only a small fraction of all variables have been assigned a value : 1 .[ begin ] select a variable at random .[ bp_comp ] compute the marginal distribution of using belief propagation .3 . set to according to the computed marginal distribution ; simplify the formula ; go to step [ begin ] .the idea of deterministically embedding a property inside a random structure is very old and , in general , the process of doing this is referred to as `` planting '' the property . in our case , we plant a solution in a random csp , by only including constraints compatible with . 
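A small sketch of the planting operation just described, for random k-SAT: fix an assignment sigma and then draw clauses uniformly at random among those compatible with (i.e., satisfied by) sigma, here by rejection sampling. The values of n, m and k are illustrative only.

```python
import random

def planted_ksat(n, m, k=3, rng=random):
    """Return (sigma, clauses): a planted assignment and m distinct k-clauses,
    each drawn uniformly among the clauses satisfied by sigma (rejection sampling)."""
    sigma = [rng.random() < 0.5 for _ in range(n)]        # sigma[v-1] = value of variable v
    clauses = set()
    while len(clauses) < m:
        variables = rng.sample(range(1, n + 1), k)
        clause = tuple(sorted(v if rng.random() < 0.5 else -v for v in variables))
        if any((l > 0) == sigma[abs(l) - 1] for l in clause):   # compatible with sigma?
            clauses.add(clause)
    return sigma, list(clauses)

sigma, formula = planted_ksat(1000, 3000)
assert all(any((l > 0) == sigma[abs(l) - 1] for l in c) for c in formula)
print(len(formula), "clauses, all satisfied by the planted assignment")
```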
juels and peinado were perhaps the first to explore the relationship between the planted and the uniform model and they did so for the clique problem in dense random graphs ,i.e. , where each edge appears independently with probability 1/2 .they showed the distribution resulting from first choosing and then planting a clique of size is very close to and suggested this as a scheme to obtain a one - way - function .since the planted clique has size only , the basic argument in is closely related to subgraph counting .in contrast , the objects under consideration in our work ( -colorings , satisfying assignments , etc . ) have an immediate impact on the _ global _ structure of the combinatorial object being considered , rather than just being local features , such as a clique on vertices .coja - oghlan , krivelevich , and vilenchik proved that for constraint densities well above the threshold for the existence of solutions , the planted model for -coloring and -sat is equivalent to the uniform distribution _ conditional _ on the ( exponentially unlikely ) existence of at least one solution . in this conditional distribution as well as in the high - density planted model ,the geometry of the solution space is very simple , as there is precisely one cluster of solutions . in first steps were made towards understanding the solution - space geometry of random -cnf formulas by proving the existence of shattering and the presence of rigid variables for .this was a far cry from the true threshold for the onset of both phenomena , as we establish here .besides the quantitative aspect , there is also a fundamentally important difference in the methods employed in vs. those employed here . in those works ,properties were established by taking a union bound over all satisfying assignments .it is not hard to show that the derived results are best possible using those methods and , in fact , there is good reason to believe that the results are genuinely tight , i.e. , that for densities the derived properties simply do not hold for _ all _ satisfying assignments . here, we instead establish a systematic connection between the planted model and the process of sampling a random solution of a random instance .this argument allows us to analyze `` typical '' solutions while allowing for the possibility that a ( relatively small , though exponential ) number of `` atypical '' solutions exist .therefore , we are for the first time in a position to analyze the extremely complex energy landscape of below - threshold instances of random csps , and to estimate quantities that appeared completely out of reach prior to this work .as mentioned , the results in this paper are enabled by a set of technical lemmas that allow one to reduce the study of `` random solutions of random csp instances '' to the study of `` planted csp solutions '' .the conceptual origin of these lemmas can be traced to the following humble observation .let be an arbitrary - matrix with the property that all its rows have the same number of 1s and all its columns have the same the number of 1s . a moments reflection makes it clear that for such a matrix , both of the following methods select a uniformly random 1 from the entire matrix : 1 .select a uniformly random column and then a uniformly random 1 in that column .2 . 
select a uniformly random row and then a uniformly random 1 in that row .an example of how we employ this fact for random csps is as follows .let be the set of all -cnf formulas with variables and distinct clauses ( chosen among all possible -clauses ) .say that nae - satisfies a formula if under , every clause of has at least one satisfied and at least one falsified literal .let be the matrix where iff nae - satisfies . by the symmetry of ,it is clear that all rows of have the same number of 1s .imagine , for a moment , that the same was true for all columns .then , a uniformly random solution of a uniformly random instance would be distributed _ exactly _ as a `` planted '' instance - solution pair : first select uniformly at random ; then select distinct clauses uniformly at random among all clauses nae - satisfied by .our contribution begins with the realization that exact row- and column - balance is not necessary .rather , it is enough for the 1s in to be well - spread " .more precisely , it is enough that the marginal distributions induced on the rows and columns of by selecting a uniformly random 1 from the entire matrix are both `` reasonably close to '' uniform .for example , assume we can prove that columns of have 1s , where is the average number of 1s per column .indeed , this is precisely the kind of property implied by the success of the second moment method for random nae--sat . under this assumption ,proving that a property holds w.u.p.p . for a uniformly random solution of a uniformly random instance ,reduces to proving that it holds w.h.p . for the planted solution of a planted instance , a dramatically simpler task .there is a geometric intuition behind our transfer theorems which is more conveniently described when every constraint is included independently with the same probability , i.e. , we take . for all and ,it was shown in that the resulting instances w.u.p.p .have exponentially many solutions for .consider now the following way of generating _ planted _ nae -sat instances .first , select a formula by including each clause with probability , exactly as above . then , select uniformly at random and remove from all constraints violated by .call the resulting instance .our results say that as long as , the instance is nearly indistinguishable " from a _ uniform _ instance created by including each clause with probability .( we will make this statement precise shortly . ) to see how this happens , recall the function counting the number of violated constraints under each assignment . clearly , selecting specifies such a function , while selecting and removing all constraints violated by amounts to modifying so that .one can imagine that such a modification creates a gradient in the vicinity of , a crater " with at its bottom .what we prove is that as long as already had an exponential number of craters and the number of craters is concentrated , adding one more crater does not make a big difference .of course , if the density is increased further , the opened crater becomes increasingly obvious , as it takes a larger and larger cone to get from the typical values of down to 0 .hence the ease with which algorithms solve planted instances of high density . 
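The humble observation above is easy to check numerically. The sketch below builds a 0/1 matrix in which every row and every column has the same number of 1s (a circulant pattern) and verifies by Monte Carlo that "uniform column, then uniform 1 inside it" reproduces the uniform distribution on all 1s of the matrix; the matrix size and trial count are arbitrary.

```python
import random
from collections import Counter

n, ones_per_row = 8, 3
# circulant 0/1 matrix: row i has 1s in columns i, i+1, ..., i+ones_per_row-1 (mod n),
# so every row sum and every column sum equals ones_per_row
M = [[1 if (j - i) % n < ones_per_row else 0 for j in range(n)] for i in range(n)]

ones = [(i, j) for i in range(n) for j in range(n) if M[i][j]]
trials, counts = 200_000, Counter()
for _ in range(trials):
    j = random.randrange(n)                                   # uniform column
    i = random.choice([r for r in range(n) if M[r][j]])       # uniform 1 inside it
    counts[(i, j)] += 1

target = 1 / len(ones)
worst = max(abs(counts[e] / trials - target) for e in ones)
print(f"uniform target {target:.4f}, worst empirical deviation {worst:.4f}")
```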
to prove our transfer theorems we instantiate this idea for random graph -coloring , random -uniform hypergraph -coloring , and random -sat .for this , a crucial step is deriving a lower bound on the number of solutions of a random instance .for example , in the case of random graph -coloring , we prove that the number of -colorings , , for a random graph with vertices and edges is `` concentrated '' around its expectation in the sense that w.h.p . to prove this , we use the upper bound on the second moment from to show that w.u.p.p .then , we perform a sharp threshold analysis , using theorems of friedgut , to prove that ( [ eqcollower ] ) holds , in fact , with _ high _ probability .a similar approach applies to hypegraph -coloring .the situation for random -sat is more involved .indeed , we can prove that the number of satisfying assignments is _ not _ concentrated around its expectation in the sense of ( [ eqcollower ] ) .this problem is mirrored by the fact that the second moment of the number of satisfying assignments exceeds the square of the first moment by an exponential factor ( for any constraint density ) .nonetheless , letting denote a uniformly random -cnf formula with variables and clauses , combining techniques from with a sharp threshold analysis , we can derive a lower bound on the number of satisfying assignments that holds w.h.p ., namely , where exponentially with .this estimates allows us to approximate the uniform model by the planted model sufficiently well in order to establish theorems [ thm_icy_sat ] and [ thm : froz ] .due to the space constraints , in the remaining pages we give proof sketches of our results for -coloring , to offer a feel of the transfer theorems and of the style of the arguments one has to employ given those theorems ( actual proofs appear in the appendix ) .the proofs for hypergraph 2-coloring are relatively similar , as it is also a `` symmetric '' csp and the second moment methods works on its number of solutions . for -sat , though , a significant amount of additional work is needed , as properties must be established with exponentially small error probability to overcome the large deviations in the number of satisfying assignments ( proofs appear in the appendix ) .we consider a fixed number and assume that for some sufficiently large .we denote as .we are interested in the probability distribution on resulting from first choosing a random graph and then a random -coloring of ( if one exists ) . to analyze this distribution, we consider the distribution on induced by following expermient .: : generate a uniformly random -partition . p2 .: : generate a graph with edges chosen uniformly at random among the edges bicolored under . p3 .: : output the pair .the distribution is known as the _ planted model_. [ thm_colorexchangemain ] suppose that .there exists a function such that the following is true .let be any graph property such that has with probability , and let be any property of pairs .if for all sufficiently large then suppose that .recall that a graph with vertex set is said to be -choosable if for any assignments of color lists of length at least to the elements of , there is a proper coloring in which every vertex receives a color from its list . 
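For concreteness, here is a sketch of the planted experiment P1-P3 just described: draw a random (near-)balanced k-partition of the vertices, then m edges uniformly at random among the pairs that are bichromatic under it. The parameters are illustrative, and the quadratic enumeration of bichromatic pairs is only meant for small n.

```python
import random
from itertools import combinations

def planted_coloring(n, m, k=3, rng=random):
    """Return (sigma, edges): a random near-balanced k-partition of {0,...,n-1}
    and m distinct edges chosen uniformly among the bichromatic pairs."""
    colors = [v % k for v in range(n)]
    rng.shuffle(colors)                                   # P1: random balanced partition
    sigma = dict(enumerate(colors))
    bichromatic = [e for e in combinations(range(n), 2) if sigma[e[0]] != sigma[e[1]]]
    edges = rng.sample(bichromatic, m)                    # P2: m random bichromatic edges
    return sigma, edges                                   # P3: output the pair

sigma, edges = planted_coloring(30, 60)
assert all(sigma[u] != sigma[v] for u, v in edges)
```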
to prove theorem [ thm : loose ] , we consider the property that all vertices are -loose and the following condition :

for any set of size the subgraph induced on is -choosable .

here is some function such that , where is the function from theorem [ thm_colorexchangemain ] . a standard argument shows that a random graph , where , satisfies w.h.p . by theorem [ thm_colorexchangemain ] , we are thus left to establish ( [ eqcolorexchangemain ] ) . let be a uniformly random -partition , and let be a random graph with edges such that is a -coloring of . since is uniformly random , we may assume that the color classes satisfy . let be any vertex , and let be the `` target color '' for . our goal is to find a coloring such that and . if has no neighbor in , then we can just assign this color to . otherwise , we run the following process . in the course of the process , every vertex is either _ awake _ , _ dead _ , or _ asleep_ . initially , all the neighbors of in are awake , is dead , and all other vertices are asleep . in each step of the process , pick an awake vertex arbitrarily and declare it dead ( if there is no awake vertex , terminate the process ) . if there are at least five colors available such that has no neighbor in , then we do nothing . otherwise , we pick five colors randomly and declare all asleep neighbors of in awake for . [ lemma_fewdeadmain ] with probability at least there are at most dead vertices when the process terminates . the proof of lemma [ lemma_fewdeadmain ] is based on relating our process to a subcritical branching process . the basic insight here is that when it is very likely that a vertex has five immediately available colors . more precisely , for any the number of neighbors in any class with is asymptotically poisson with mean . hence , the probability that does _ not _ have a neighbor in is about . as there are colors in total , we expect about colors available for , i.e. , a lot . to obtain a new coloring in which takes color we consider the set of all dead vertices . we let for all . moreover , conditioning on the event , we can assign to each a color from the list . thus , the new coloring differs from on at most vertices . suppose that . to prove theorem [ thm : froz ] for coloring we apply theorem [ thm_colorexchange ] as follows . we let be sufficiently small numbers and denote by the following property of a pair : also , we let be the property that the maximum degree is at most . we shall prove that for a pair chosen from a subgraph as in ( [ eqg*main ] ) exists w.h.p . if that is so , then every vertex in has at least one neighbor in every color class other than its own . therefore , it is impossible to just assign a different color to any vertex in .
in fact , since all vertices in have _ a lot _( namely , at least ) of neighbors with every other color , the expansion properties of the random graph imply that recoloring any vertex in necessitates the recoloring of at least further vertices .loosely speaking , the conflicts resulting from recoloring spread so rapidly that we necessarily end up recoloring a huge number of vertices .thus , all vertices in are -rigid .note that we can not hope for much better , as we can always recolor by swapping two color classes , i.e. , vertices . to prove the existence of the subgraph , we establish the following . [ lemma_coremain ] condition ( [ eqcolorexchangemain ] ) holds for and as above . to obtain lemma [ lemma_coremain ] ,let be a random pair chosen from the distribution .we may assume that for all . to obtain the graph , we perform a `` stripping process '' . as a first step ,we obtain a subgraph by removing from all vertices that have fewer than neighbors in any color class other than their own . if is sufficiently small , then the expected number of vertices removed in this way is less than for a , because for each vertex the expected number of neighbors in another color class is bigger than .then , we keep removing vertices from that have `` a lot '' of neighbors outside of .given the event , we then show that with probabiltiy the final result of this process is a subgraph that satisfies ( [ eqg*main ] ) .theorem [ thm_icy_col ] concerns the `` view '' from a random coloring of .basically , our goal is to show that only a tiny fraction of all possible colorings are `` visible '' from , i.e. , lives in a small , isolated valley . to establish the theorem , we need a way to measure how `` close '' two colorings are .the hamming distance is inappropriate here because two colorings can be at hamming distance , although simply results from permuting the color classes of , i.e. , although and are essentially identical .instead , we shall use the following concept . given two coloring ,we let be the matrix with entries to measure how close is to we let be the squared frobenius norm of . observe that this quantity reflects the probability that a single random edge is monochromatic under both and , i.e. , the correlation of and , precisely as desired .hence , is a map from the set of -partitions to the interval , where .thus , the _ larger _ , the more resembles .furthermore , for a fixed and a number we let ^n : f_\sigma(\tau)=x\wedge h(\tau)\leq\lambda n\}|.\ ] ] in order to show that with decomposes into exponentially many regions , we employ the following lemma . [ lemma_cluster_colmain ] suppose that . there are numbers and such that with high probability a pair chosen from the distributoin has the following two properties . 1 . for all we have .2 . the number of colorings such that is at most . let be a random graph and call _ good _ if both ( 1 ) and ( 2 ) hold .then lemma [ lemma_cluster_colmain ] states that w.h.p.a -fraction of all are good .hence , to decompose into regions , we proceed as follows . for each let furthermore , each such region is separated by a linear hamming distance from the set , because is `` continuous '' with respect to distance .thus , theorem [ thm_icy_col ] follows from lemma [ lemma_cluster_colmain ] . 
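A sketch of the similarity measure used above, under the assumption that the matrix entry for a color pair (i, j) is the fraction of vertices colored i under sigma and j under tau; the similarity is then the squared Frobenius norm of that matrix, which is invariant under permuting the colors of either coloring.

```python
def overlap_matrix(sigma, tau, k):
    """k x k matrix whose (i, j) entry is the fraction of vertices colored i by
    sigma and j by tau (this normalization is our assumption)."""
    n = len(sigma)
    M = [[0.0] * k for _ in range(k)]
    for s, t in zip(sigma, tau):
        M[s][t] += 1.0 / n
    return M

def similarity(sigma, tau, k):
    """Squared Frobenius norm of the overlap matrix."""
    return sum(x * x for row in overlap_matrix(sigma, tau, k) for x in row)

sigma = [0, 0, 1, 1, 2, 2]
print(similarity(sigma, sigma, 3))                 # 1/3 for a balanced coloring against itself
print(similarity(sigma, [2, 2, 0, 0, 1, 1], 3))    # unchanged under permuting the colors
print(similarity(sigma, [0, 1, 2, 0, 1, 2], 3))    # smaller for a decorrelated coloring (1/6)
```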
finally , by theorem [ thm_colorexchangemain ] , to prove lemma [ lemma_cluster_colmain ] it is sufficient to show the following .[ lemma_cluster_col_plmain ] suppose that .there are and such that with probability at least a pair chosen from the distributoin has the two properties stated in lemma [ lemma_cluster_colmain ] .the proof of lemma [ lemma_cluster_col_plmain ] is based on the `` first moment method '' .that is , for any we compute the _ expected _ number of assignments such that and .this computation is feasible in the planted model and yields similar expressions as encountered in in the course of computing the second moment of the number of -colorings .therefore , we can show that the expected number of such assignments is exponentially small for a regime , whence lemma [ lemma_cluster_col_pl ] follows from markov s inequality .99 d. achlioptas , e. friedgut . _ a sharp threshold for -colorability _ , random struct .algorithms * 14 * , 6370 , 1999 .d. achlioptas and m. molloy , _ the analysis of a list - coloring algorithm on a random graph _ , in proc .of focs 1997 , 204212 .d. achlioptas and c. moore , _ random -sat : two moments suffice to cross a sharp threshold _ , siam journal on computing , * 36 * ( 2006 ) , 740762 .d. achlioptas and a. naor , _ the two possible values of the chromatic number of a random graph _ , annals of mathematics , * 162 * ( 2005 ) , 13331349 .d. achlioptas , a. naor , and y. peres , _ rigorous location of phase transitions in hard optimization problems _ , nature , * 435 * ( 2005 ) , 759764 .d. achlioptas and y. peres , _ the threshold for random -sat is _ , journal of the american mathematical society * 17 * ( 2004 ) , 947973 .d. achlioptas , f. ricci - tersenghi , _ on the solution space geometry of random constraint satisfaction problems _ , in proc .38th acm symp . on theory of computing ( 2006 ) , 130139 .d. battaglia , m. kolar , r. zecchina , _ minimizing energy below the glass thresholds _ , phys .e. * 70 * ( 2004 ) , 036107 .a. braunstein , r. mulet , a. pagnani , m. weigt , r. zecchina , _ polynomial iterative algorithms for coloring and analyzing random graphs _e. * 68 * ( 2004 ) , 036702 .chao and j. franco , _ probabilistic analysis of two heuristics for the 3-satisfiability problem _ , siam j. comput . * 15 * ( 1986 ) , 11061118 .v. chvtal and b. reed , _ mick gets some ( the odds are on his side ) _ , in proc .33th annual symposium on foundations of computer science ( 1992 ) , 620627 .a. coja - oghlan , m. krivelevich , d. vilenchik , _ why almost all -colorable graphs are easy _ , in proc .24th stacs ( 2007 ) 121132 .a. coja - oghlan , m. krivelevich , d. vilenchik , _ why almost all k - cnf formulas are easy _ , in proc .13th international conference on analysis of algorithms .e. friedgut , _ sharp thresholds of graph proprties , and the -sat problem ._ j. amer .* 12 * ( 1999 ) , 10171054 .e. friedgut , _ hunting for sharp thresholds ._ random struct .algorithms * 26 * ( 2005 ) 3751 a. m. frieze and s. suen , _ analysis of two simple heuristics on a random instance of -sat _ , journal of algorithms * 20 * ( 1996 ) , 312355 .a. gerschenfeld , a. montanari ._ reconstruction for models on random graphs ._ in proc .focs 2007 , 194204 .grimmett , c.j.h .mcdiarmid , _ on colouring random graphs _ , math .cambridge philos .soc . , * 77 * ( 1975 ) , 313324 .a. juels , m. peinado : hiding cliques for cryptographic security . in proc .soda 1998 , 678684 .f. krzakala , a. montanari , f. ricci - tersenghi , g. semerjianc , l. 
zdeborova , _ gibbs states and the set of solutions of random constraint satisfaction problems .. national academy of sciences * 104 * ( 2007 ) 1031810323 .m. mzard , t. mora , and r. zecchina , _ clustering of solutions in the random satisfiability problem _ , phys .* 94 * ( 2005 ) , 197205 .m. mzard , g. parisi , and r. zecchina , _ analytic and algorithmic solution of random satisfiability problems _ , science * 297 * ( 2002 ) , 812815 . a. montanari , f. ricci - tersenghi , g. semerjian . _solving constraint satisfaction problems through belief propagation - guided decimation . _ in proc .45th allerton ( 2007 ) . in this appendix we present the proofs of our results for graph coloring and random -sat , thereby presenting the most important techniques .we omit the proofs for hypergraph 2-coloring , as these are similar to but simpler than the -sat proofs , due to the fact that the transfer theorem for hypergraph 2-coloring is as strong as that for -coloring .we generally assume that is sufficiently large .in this section we consider a fixed number and assume that for some sufficiently large .we are interested in the probability distribution on . to analyze this distribution, we consider the distribution on induced by following expermient ( `` planted model '' ) .: : generate a uniformly random -partition . p2 .: : generate a graph with edges chosen uniformly at random among the edges bicolored under . p3 .: : output the pair .[ thm_colorexchange ] suppose that .there exists a function such that the following is true .let be any graph property such that has with probability , and let be any property of pairs .if for all sufficiently large we have then for a given assignment we let be the set of all graphs with edges for which is a proper coloring .then it is immediate that hence , there is a constant such that the following is true .let be chosen uniformly at random .then let be sufficiently small .moreover , let , , and . then .therefore , since for a random the numbers are multinomially distributed , with probability we have .hence , letting , we conclude that there is a constant such that thus , by stirling s formula with probability at least we have since and , in the case we have hence , choosing sufficiently small , we can ensure that [ cor_lambda ] we have .[ lemma_plantedbound ] suppose that there is a number such that then we have because for all .hence , corollary [ cor_lambda ] yields as desired .let be the expected number of -colorings of . combining the second moment argument from with arguments from , we obtain the following result ( see appendix [ sec_friedgutcolor ] ) .[ lemma_friedgutcolor ] there is a function such that with high probability ._ proof of theorem [ thm_colorexchange ] ._ assume that a random pair chosen according to the uniform model has with probability at least , while for an arbitrarily small . since has the property w.h.p ., we conclude that therefore , lemma [ lemma_friedgutcolor ] entails that since , this contradicts lemma [ lemma_plantedbound ] . to prove lemma [ lemma_friedgutcolor ] , we combine the second moment argument from with a sharp threshold argument .let be a random graph and let be the number of _ balanced _ colorings of , i.e. , colorings such that for all . recall that denotes the set of all -colorings of .a direct computation involving stirling s formula shows that in addition , ( * ? ? 
?* section 3 ) shows that there is a constant such that applying the paley - zigmund inequality , we thus conclude that there is a number such that whence in addition , is easily computed : we have where .thus , we obtain the following .[ lemma_friedgutcolor2ndmoment ] let . then . to complete the proof of lemma [ lemma_friedgutcolor ] , we combine lemma [ lemma_friedgutcolor2ndmoment ] with a sharp threshold result .let be the property that a graph on vertices has less than -colorings .[ lemma_sharp ] for any fixed the property has a sharp threshold .that is , there is a sequence such that for any we have we shall prove lemma [ lemma_sharp ] in appendix [ apx_sharp ] .lemma [ lemma_friedgutcolor ] is an immediate consequence of lemmas [ lemma_friedgutcolor2ndmoment ] and [ lemma_sharp ] .the property is monotone under the addition of edges .therefore , it is sufficient to prove that has a sharp threshold in the random graph , in which edges are added with probability independently .let .the argument builds upon .we denote the set of -colorings of a graph by .[ lemma_constraints ] let be some number ( that may depend on ) , and let be fixed .further , let be an integer .suppose that .moreover , assume that for a list of colors the following is true : if we pick vertices indenpendently and uniformly at random , then with probability the random graph has at most -colorings in which receives a color different from for all .then with probability the random graph has at most -colorings in which receives a color different from for all .let be a ( very ) slow growing function of .moreover , let be the event that the first constraints : `` must not receive color '' for , cause the number of -colorings to be at most . then given that has more than -colorings ( i.e. , coditional on the event ) , the probability of is at least .hence , conditional on , we have we consider two cases : if , then we are done .hence , assume that .then .note that is the fraction of vertices such that forbidding color at reduces the number of colorings to at most ( after the addition of the first constraints ) .we call such vertices _ good _ , and denote the set of colorings that spoils by and its size by .let us consider two cases .let be the number of colorings of that respect the first constraints .we consider two cases . in each of these two caseswe shall prove that adding a small number of random edges to reduces the number of colorings that respect the first constraints to at most with probability at least .case 1 : .: : since for each of the colorings the probability that a new random edge spoils this coloring is , we can reduce the number of colorings to at most by adding random edges ( use markov s inequality ) .case 2 : .: : if are good , then . therefore , adding an edge between two good vertices causes the number of colorings to drop to at most . 
furthermore , the probability that a random edge joins two good vertices is .therefore , after adding edges , we have reduced the number of proper colorings to at most with probability .finally , adding an additional edges reduces the number of colorings to at most by the same argument as in case 1 .now , note that instead of _ first _ imposing the constraints and _ then _ adding the random edges as in cases 1 and 2 we could _ first _add a set of random edges to .as is of smaller order than the standard deviation of the number of edges of , the resulting distribution is within from the originial distribution in total variation distance .therefore , we conclude that actually just imposing the constraints suffices to increase the probability of having -colorings to . [ cor_constraints ]let be some number ( may depend on ) , and let be fixed .further , let be an integer .suppose that .then there is no list of colors such that the following is true : if we pick vertices indenpendently and uniformly at random , then with probability the random graph has at most -colorings in which receives a color different from for all .applying the lemma times , we can reduce the number of constraints that is necessary to reduce the number of colorings to to . to prove that has a sharp threshold , we assume for contratiction that this is not so .hence , there exists an edge probability such that the probability that has is exactly equal to for a small .further , by ( * ? ? ?* theorem 2.4 ) there exists a fixed graph on vertices such that with probability the following is true .if we first pick and then insert a random copy of into , then the resulting graph has .furthermore , this graph is -colorable .in fact , by monotonicity we may assume that is uniquely -colorable .the experiment of inserting a random copy of into is actually equivalent to the following ( because is symmetric with respect to vertex permutations ) .we let denote a random graph obtained by first inserting a copy of into the first vertices , and then adding edges with probability independently ( among all vertices ) .then the probability that has is at least .hence , while by the choice of let signify the subraph of induced on the vertices .then , and ( [ eqafwid2 ] ) implies that furthermore , we can relate the -colorings of and the -colorings of as follows .let be the set of edges from the set to . then w.h.p . and no vertex in is incident to more than one edge in .furthermore , since admits a unique -coloring , each edge in forbids its endpoint in exactly one color .hence , the edges in impose constraints on randomly chosen vertices as in lemma [ lemma_constraints ] .therefore , ( [ eqafwid1 ] ) implies that but then corollary [ cor_constraints ] implies that furthermore , as we may add another random edges to without shifting the distribution by more than in total variation distance , and since each of these edges reduces the expected number of colorings by , markov s inequality entails that which contradicts ( [ eqafwid2a ] ) .suppose that , and that for a sufficiently large .let and recall that a graph is -choosable if for any assignments of color lists of length at least to the vertices of the graph there is a proper coloring such that each vertex receives a color from its list . 
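Choosability quantifies over all possible list assignments, but the inner check (given one assignment of color lists, is there a proper coloring in which every vertex uses a color from its own list?) is a small backtracking search. A sketch under that reading, with the example graph and lists purely illustrative:

```python
def list_colorable(adj, lists):
    """Backtracking test: is there a proper coloring assigning every vertex a color
    from its own list?  adj: dict vertex -> neighbors, lists: dict vertex -> colors."""
    order = sorted(adj, key=lambda v: len(lists[v]))       # most constrained vertices first
    coloring = {}

    def extend(i):
        if i == len(order):
            return True
        v = order[i]
        for c in lists[v]:
            if all(coloring.get(u) != c for u in adj[v]):
                coloring[v] = c
                if extend(i + 1):
                    return True
                del coloring[v]
        return False

    return coloring if extend(0) else None

# a triangle with lists of size 2; this particular list assignment is colorable
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(list_colorable(adj, {0: [0, 1], 1: [1, 2], 2: [0, 2]}))
```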
to prove theorem [ thm : loose ] , we consider the property that all vertices are loose and the following condition :

for any set of size the subgraph induced on is -choosable .

here is a constant and , where is the function from theorem [ thm_colorexchange ] . [ lemma_choosable ] with high probability the random graph satisfies . since , this follows from a standard first moment argument . by theorem [ thm_colorexchange ] , we just need to establish ( [ eqcolorexchange ] ) . thus , let be a coloring such that the color classes satisfy , and let be a random graph with edges such that is a -coloring of . let be any vertex ; without loss of generality we may assume that . in addition , let be the `` target color '' for . if has no neighbor in , then we can just assign this color to . otherwise , we run the following process . in the course of the process , every vertex is either _ awake _ , _ dead _ , or _ asleep_ . initially , all the neighbors of in are awake , is dead , and all other vertices are asleep . in each step of the process , pick an awake vertex arbitrarily and declare it dead ( if there is no awake vertex , terminate the process ) . if there are at least colors such that has no neighbor in , then we do nothing . otherwise , we pick colors randomly and declare all asleep neighbors of in awake for . [ lemma_fewdead ] with probability at least there are at most dead vertices when the process terminates . we show that the aforementioned process is dominated by a branching process in which the expected number of successors is less than one . then the assertion follows from chernoff bounds . to set up the analogy , note that the expected number of neighbors of any in is asymptotically . hence , the probability that has no neighbor in is at least . therefore , the expected number of classes in which has no neighbor is at least . furthermore , the number of such classes is asymptotically binomially distributed . therefore , assuming that is sufficiently large , we conclude that the probability that there are less than classes in which has no neighbor is less than . given that this is so , the number of neighbors of in each of the chosen classes has mean at most . therefore , the expected number of newly awake vertices resulting from is at most . thus , the above process is dominated by a branching process with successor rate . therefore , the assertion follows from stochastic dominance and chernoff bounds . _ proof of theorem [ thm : loose ] . _ let be the set of dead vertices left by the aforementioned process . by lemma [ lemma_fewdead ] we may assume that . hence , conditioning on , we may assume that is -choosable . now , we assign lists of colors to the vertices in as follows . the list of just consists of its target color .
to any other assign the list .now , we can color the subgraph by assigning color to and a color from to any other .we extend this to a coloring of by assigning color to any .let signify the resulting coloring .we claim that is a proper coloring of . for boththe subgraph induced on and the subgraph induced on are properly colored . moreover , by construction no is adjacent to a vertex of color in .finally , and are at hamming distance at most .hence , the assertion follows from theorem [ thm_colorexchange ] .let , and assume that for a large enough .suppose that . to prove theorem [ thm : froz ] for coloring , we apply theorem [ thm_colorexchange ] as follows .we let be a sufficiently small number and denote by the following property of a pair . also , we let be the property that the maximum degree is at most .[ lemma_core ] condition ( [ eqcolorexchange ] ) is satisfied with and as above ._ proof of theorem [ thm : froz ] for coloring . _ given a random coloring of a random graph , lemma [ lemma_core ] and theorem [ thm_colorexchange ] imply that w.h.p .there is a subgraph satisfying ( [ eqg * ] ) .in addition , we assume that has the following property . standard 1st moment argument shows that ( [ eqgsparse ] ) holds in w.h.p .assume for contradiction that there is another coloring such that the set has size .let and .then every has at least neighbors in .hence , if , then all of these neighbors lie inside of . we claim that this implies that . for assume that and set .then , and spans at least edges , in contradiction to ( [ eqgsparse ] ) .thus , we conclude that for _ all _ , in contradiction to ( [ eqrigid1 ] ) .hence , all the vertices in are -rigid .let be a random pair chosen from the distribution .we may assume that for all and let . to simplify the analysis , we shall replace the random graph , which has a _ fixed _ number of edges , by a random graph in which is obtained by including each edge with with probability independently . here is chosen so that the _ expected _ number of edges of equals . [ lemma_independentedges ] for any property we have .we defer the proof to section [ sec_independentedges ] .thus , in the sequel we will work with rather than .let be a sufficiently small number , and let . moreover , for a vertex and a set let signify the number of --edges in .we construct a subgraph of as follows . 1 .let , , and .2 . let and .3 . let .while there is a vertex that has at least neighbors in , add to .4 . let . for each vertex and each color expected number of neighbors of with color is .hence , the sets contain those vertices form that have a lot fewer neighbors with color than expected .[ lemma_wijbound ] there is a number such that with probability we have for any .hence , , and . in the random graph for each number is binomially distributed .hence , the probability that is at most , where depends only on and .furthermore , as in edges occur independently , is binomially distributed as well ( with mean ) .therefore , the assertion follows from chernoff bounds .each of the vertices in has _ a lot _ of neighbors in the small set .therefore , since the random graph is a good expander , we expect to be much smaller than .[ lemma_ubound ] given that occurs , with probability at least the set contains at most vertices .we postpone the proof to section [ sec_ubound ] .[ lemma_zbound ] with probability the set contains at most vertices .assume that this is not the case .let contain all vertices of and the first vertices added to by step 3 of the construction of . 
then and .however , a simple first moment argument shows that the probability that a set with these two properties is present in is at most . combining lemma [ lemma_wijbound ] , [ lemma_ubound ] , and [ lemma_zbound ] ,we conclude that contains at least vertices ( provided that is sufficiently larger ) .moreover , the construction of ensures that this graph satisfies ( [ eqg * ] ) .given that has exactly edges , is just a uniformly random graph with planted coloring .that is , given that the number of edges is , is identically distributed to .therefore , furthermore , since , with probability the maximum degree of as well as of is at most .therefore , have with probability . hence , ( [ eqbinomex1 ] ) yields as claimed . to analyze the sets from the second step of the construction of ,consider with probability we have .the definition of the set depends solely on the --edges .therefore , the --edges are indepenent of the random set , which with probability has size by the lemma [ lemma_wijbound ] . assuming that this is indeed the case , we conclude that for any vertex the number is binomially distributed with mean .hence , the probability that has neighbors inside is at most thus , .finally , as is binomially distributed , the assertion follows from chernoff bounds .conditional on the event , with probability we have .we just need to analyze the bipartite subgraph .the set consists of all vertices that have degree in this subgraph . to investigate , we condition on the degree sequence of this bipartite graph .since we also condition on the event , the maximum degree is .hence , we can generate the random bipartite graph with degree sequence via the configuration model , and the probability that the resulting multigraph happens to be a simple graph is .thus , we just need to study a random configuration .now , in a random configuration the probability that a vertex has neighbors in the set is , because the total number of edges of is concentrated about .therefore , the ( conditional ) expected size of is .consequently , azuma s inequality yields that with probability the size of is .finally , lemma [ lemma_ubound ] follows immedately from the fact that . to prove the coloring part of theorem [ thm_icy_col ], we need to come up with an appropriate way to measure how `` similar '' two -colorings of a given graph are .a first idea may be to just use the hamming distance .however , if we construct a coloring simply by permuting the color classes of another coloring , then and can have hamming distance , although they are essentially identical . therefore , instead of the hamming distance we shall use the following concept . given two coloring ,we let be the matrix with entries then to measure how close is to we let be the squared frobenius norm of . hence, is a map from the set of -colorings to the interval , where .( thus , the _ larger _ , the more resembles . ) furthermore , for a fixed and a number we let ^n : f_\sigma(\tau)=x\wedge h(\tau)\leq\lambda n\}|.\ ] ] in order to show that with decomposes into exponentially many regions , we employ the following lemma . [ lemma_cluster_col ] suppose that .there are numbers and such that with high probability a pair chosen from the distributoin has the following two properties . 1 . for all we have .2 . the number of colorings such that is at most . let be a random graph and call _ good _ if 1 . and 2 .. then lemma [ lemma_cluster_col ] states that with high probability a -fraction of all is good . 
hence , to decompose into regions , we proceed as follows . for each we let then starting with the set and removing iteratively some for a good from yields an exponential number of regions .furthermore , each such region is separated by a linear hamming distance from the set , because is continuous with respect to distance ( that is , for any there is such that for all satisfying ) .thus , the property stated in theorem [ thm_icy_col ] follows from lemma [ lemma_cluster_col ] . to establish lemma [ lemma_cluster_col ], we employ the planted model .[ lemma_cluster_col_pl ] suppose that .there are and such that with probability at least a pair chosen from the distributoin the two properties stated in lemma [ lemma_cluster_col ] .thus , lemma [ lemma_cluster_col ] follows from lemma [ lemma_cluster_col_pl ] and theorem [ thm_colorexchange ] ._ proof of lemma [ lemma_cluster_col_pl ] . _the proof is based on the first moment method .let be a fixed assignment of colors to the vertices .we may assume that for all , because all but an exponentially small fraction of all assignments in have this property .further , let be a graph with edges such that is a -coloring of chosen uniformly at random from the set of all such graphs .a direct computation shows that for an assignment the probability that is where . to prove the lemma, we shall compute the _ expected _ number of assignments such that and for a suitable . to this end, we have to take into account the number of possible colorings .we parameterize the set of all possible by a matrix , where . then by ( [ eqcolorfirst1 ] ) the contribution of a matrix to the first moment is at most ( the accounts for the fact that we consider the coloring fixed ) . taking logarithms , we obtain for a given number we let be the set of all matrices such that , , and .since there are at most possible matrices , for any given the expected number of colorings such that is at most hence , by continuity it suffices to show that for some the expression is strictly negative for a small enough .let and .then theorem 9 from shows that the maximum is attained for a matrix with entries asymptotically as grows .an explicit computation shows that for this matrix the value is strictly negative , provided that is sufficiently small .therefore , we can apply markov s inequality to complete the proof .consider the distribution on the set of pairs , where is a -sat formula with variables and with clauses , and is a satisfying assignment of .: : generate a random assignment . p2 .: : generate a random -cnf formula with clauses chosen uniformly among those satisfied by . p3 .: : output the pair .[ thm_satexchange ] there is a sequence such that the following holds .let for some , and let be any function that such that .let be any property such that has with probability , and let be any property of pairs .if for all sufficiently large we have then assume for contradiction that there is a fixed such that for infinitely many .then lemma [ lemma_numberksat ] implies that the set has size on the other hand , as is just the uniform distribution on the set , ( [ eqsatexchange ] ) implies that as , this contradicts ( [ eqsatex1 ] ) for sufficiently large .recall that denotes a random -sat formula on variables . 
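As a point of reference for the discussion of the number of satisfying assignments, here is the standard first-moment count for a random k-CNF with m clauses drawn independently with replacement, E[Z] = 2^n (1 - 2^{-k})^m. This is only the expectation; the lower bounds used in the paper are more delicate, and the densities in the sketch are illustrative.

```python
import math

def log10_expected_sat_count(n, m, k):
    """log10 of E[Z] = 2^n (1 - 2^-k)^m, the expected number of satisfying
    assignments when the m clauses are drawn independently with replacement."""
    return n * math.log10(2) + m * math.log10(1 - 2.0 ** (-k))

k, n = 4, 10_000
alpha_first_moment = math.log(2) / -math.log(1 - 2.0 ** (-k))   # density where E[Z] reaches 1
for alpha in (5.0, 8.0, alpha_first_moment):
    print(f"alpha = {alpha:6.3f}:  log10 E[Z] = {log10_expected_sat_count(n, int(alpha * n), k):9.1f}")
```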
for a fixed number we let denote the property that a -sat formula on the variables has less than satisfying assignments .the following lemma shows that has a sharp threshold .let .equations ( [ eq : b ] ) and ( [ epdef ] ) show that is a continuous function .therefore , for any there is a such that satisfies let .setting , the second moment argument from shows in combination with the paley - zigmund inequality that >0.\ ] ] therefore , lemma [ lemma_sharpthr ] implies that for sufficiently large .consequently , for large we have hence , lemma [ lemma_sharpthr ] yields =1.\ ] ] thus , with probability the number of satisfying assignments of satisfies since this is true for any , the assertion follows .as shown in , the solution to satisfies plugging these bounds into ( [ eq : b ] ) and performing a tedious but straightforward computation , we obtain that since , the assertion thus follows from lemma [ thm_amin ] . to prove lemma [ lemma_sharpthr ] , we need a bit of notation .if is a formula on a set of variables disjoint from , then we let denote the set of all formulas that can be obtained from by substituting distinct variables among for .moreover , for a formula on we let , where is chosen uniformly at random from .note that is a _ monotone _ property , i.e. , if has the property and is another formula on the variables , then has the property as well .therefore , we can use the following theorem from friedgut to prove by contradiction that has a sharp threshold .let for concreteness .[ thm_ehud ] suppose that does _ not _ have a sharp threshold .then there exist a number , a formula , and for any numbers , and a formula with variables such that the following is true .in the sequel we assume the existence of , , , , and satisfying conditions * t1**t3*. to conclude that has a sharp threshold , we shall show that then condition * t4 * can not hold .clearly , we may assume that is sufficiently large ( by choosing appropriately ) .let .any -sat formula that contains at most as many clauses as variables is satisfiable .hence , to establish the lemma , we will show that the probability that contains a subformula on variables with at least clauses is smaller than ; then the assertion follows from * t3*. to prove this statement , we employ the union bound .there are ways to choose a set of variables , and ways to choose slots for the clauses of the subformula .furthermore , the probability that the random clauses in these slots contain only the chosen variables is at most .hence , the probability that has variables that span a subformula with at least clauses is at most further , * t2 * implies that , because for the expected number of satisfying assignments of is less than .thus , assuming that is sufficiently large , we see that ( [ eqphisat ] ) implies , as claimed .thus , fix a satisfying assignment of .then we say that a satisfying assignment of is _ compatible _ with a tuple if for all .furthermore , we call a tuple _ bad _ if has less than satisfying assignments that are compatible with . the formula is obtained by substituting randomly chosen variables for the variables of and adding the resulting clauses to .since by * t1 * with probability at least the resulting formula has at most satisfying assignments , a uniformly chosen tuple is bad with probability at least . thus , there are at least bad tuples . b1 . : : for each there is a -tuple of variables such that if , and if . b2 . : : for any function \rightarrow[k] ] of size the following is true . 
if -tuples ^k ] the tuple belongs to . assuming that is sufficiently large , we apply theorem [ thm_es ] to , , and the set ^l ] such that .moreover , by * b2 * the tuple is bad .hence , the map ^l ] is at most .consequently , has at most satisfying assignments in total . the formula is obtained from by attaching random clauses .let be the formula resulting by attaching the first random clauses .then by corollary [ cor_esbad1 ] with probability at least the formula has at most satisfyng assignments .conditioning on this event , we form by attaching another random clauses to . since for any satisfying assignment of the probability that these additional clauses are satisfied as well is , the expected number of satisfying assignments of is at most provided that is sufficiently large .therefore , markov s inequality entails that thus , we obtain as desired . let be a random -sat instance .to each assignment we assign the set due to lemma [ lemma_numberksat ] , a similar argument as in the proof of theorem [ thm_icy_col ] in section [ sec_icy_col ] yields theorem [ thm_icy_sat ] .let be small but fixed .let be a random -sat formula with clauses .then for any we have because of the independence of all clauses .furthermore , if is a second assignment at hamming distance from , then indeed , there is a function such that and let signify the number of assignments at hamming distance from such that .there are ways to choose an assignment at hamming distance from . therefore , due to the formulas derived above , we have setting and simplifying , we obtain the assertion .[ cor_xalpha ] there are numbers and such that with probability at least in a pair chosen from the distribution there is no assignment such that such that and .if is chosen from , then is distributed as a random formula given that is a satisfying assignment .therefore , the corollary follows from lemma [ lemma_xalpha ] and markov s inequality , where we use the fact that is a continuos function .if is a -sat formula and an assignment , then we say that a variable _ supports _ a clause if changing the value of would render unsatisfied .suppose that is sufficiently large and .let be sufficiently small numbers .let signify a sufficiently small constant .let be chosen from the distribution .we may assume that the random pair satisfies ( [ equ ] ) .moreover , a 1st moment computation shows that the random formula has the following property w.h.p. now , assume for contradiction that there is a satisfying assignment of such that the set has size .each supports in at least clauses that contain no variable from . since these clauses are satisfied in , although , each such contains another variable from .hence , contains at least clauses containing at least two variables from , in contradiction to ( [ eqksatdisc ] ) .we may assume that for a fixed .moreover , without loss of generality , we may assume that the assignment sets all variables to true .let denote a random formlua with clauses satisfied by , and let signify the set of all uniquely satisfied clauses of . consider the following process . 1 .let be the set of all variables that support fewer than clauses .2 . 
let .while there is a variable that supports at least clauses from that contain a variable from , add to .the _ expected _ number of uniquely satisfied clauses is at least .hence , each variable is expected to support at least clauses .therefore , if is sufficiently small , then there is a constant such that the probability that a variable supports fewer than clauses is at most .hence , by chernoff bounds we have with probability at least .thus , assume that .we claim that then the final set resulting from step 2 has size at most . for assume that .then step 2 removed at least variables , whence there are at least clauses that contain two variables from .however , a standard 1st moment argument shows that the probability that there exists a set with this property is .hence , with probability at least we have .setting concludes the proof .
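for readers who want to experiment with the quantities appearing in these proofs , the following is a minimal sketch ( not part of the original argument ) that draws a random -sat formula at a given clause density and counts its satisfying assignments by exhaustive enumeration ; all parameter values are illustrative and the enumeration is only feasible for very small numbers of variables .

    import itertools
    import random

    def random_ksat(n, m, k, rng=random.Random(0)):
        # a clause is a list of k literals (variable index, sign); sign True means the
        # positive literal, and each clause uses k distinct variables
        return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)]
                for _ in range(m)]

    def count_satisfying(n, formula):
        # exhaustive count of satisfying assignments; only feasible for small n
        total = 0
        for bits in itertools.product((False, True), repeat=n):
            if all(any(bits[v] == sign for v, sign in clause) for clause in formula):
                total += 1
        return total

    n, k, density = 15, 3, 4.0          # illustrative values only
    formula = random_ksat(n, int(density * n), k)
    print(count_satisfying(n, formula))

attaching further random clauses and re - counting mimics the coupling step used above , where a fixed satisfying assignment survives each added clause with probability 1 - 2^{-k} .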
|
for many random constraint satisfaction problems , by now , we have asymptotically tight estimates of the largest constraint density for which they have solutions . at the same time , all known polynomial - time algorithms for many of these problems already completely fail to find solutions at much smaller densities . for example , it is well - known that it is easy to color a random graph using twice as many colors as its chromatic number . indeed , some of the simplest possible coloring algorithms already achieve this goal . given the simplicity of those algorithms , one would expect there is a lot of room for improvement . yet , to date , no algorithm is known that uses colors , in spite of efforts by numerous researchers over the years . in view of the remarkable resilience of this factor of 2 against every algorithm hurled at it , we believe it is natural to inquire into its origin . we do so by analyzing the evolution of the set of -colorings of a random graph , viewed as a subset of , as edges are added . we prove that the factor of 2 corresponds in a precise mathematical sense to a phase transition in the geometry of this set . roughly , the set of -colorings looks like a giant ball for , but like an error - correcting code for . we prove that a completely analogous phase transition also occurs both in random -sat and in random hypergraph 2-coloring , and that for each problem , its location corresponds precisely with the point where all known polynomial - time algorithms fail . to prove our results we develop a general technique that allows us to prove rigorously much of the celebrated 1-step replica - symmetry - breaking hypothesis of statistical physics for random csps . _ key words : _ algorithms , random structures , constraint satisfaction problems , phase transitions
|
a classical problem in linear elasticity concerns the computation of the displacement , stress and strain due to force and dislocation sources with suitable boundary conditions imposed on a half - space .the case of zero normal stress is of particular importance in geophysical applications , for which the exact solution was derived by mindlin .we will concentrate here on the question of accelerating the evaluation of the field due to a collection of such force and dislocation vectors .more precisely , we will describe two new analytic representations for the image structure in mindlin s solution that can be incorporated into a fast multipole method ( fmm ) .with sources and sensor / target locations , the fmm reduces the cost of evaluating the fields from to .the fmm can also be used to accelerate integral - equation based methods for elastostatic boundary value problems on surfaces embedded in the half - space , avoiding the ill - conditioning associated with finite element and finite difference discretizations of the underlying partial differential equations .we will begin with a discussion of the mathematical foundations for the new scheme , followed by a brief description of the full fmm implementation . for readers unfamiliar with fast multipole methods, we suggest the papers to gain some familiarity , although the mathematical treatment here is largely self - contained . in section [ sec : numerical ], we present numerical experiments both for collections of singular sources and for the evaluation of layer potentials using the quadrature method of .to fix notation , let us first consider the displacement at an observation point due to a force vector acting at the source point in free - space .the solution is given by the well - known kelvin solution : where , , and are the lam coefficients .( in the preceding expressions , and throughout the paper , we will generally make use of the standard summation convention . on occasionwe will write out the formulas explicitly when it makes the analysis clearer . )formulas for the strain and stress tensors can be obtained from partial derivatives of the preceding formulas for displacement with respect to each component : a number of fast methods for the kelvin solution have been developed , based either on the fft or the fmm . in a half - space ,the solution is more complicated , involving several image sources .we assume that the -axis points up and that sources and targets are in the lower half - space ( ) . with a slight modification of okada s notation ,we let corresponding to the usual cartesian components of the vector from the image source to the target , with the sign flipped in the component .note that .we denote the distance from the image to the target point by mindlin showed that the exact solution to the half - space problem with zero normal stress can be written in the form , where with \right\ } \ , , \label{uijbc } \\{ c_i^j } = & \frac{1}{4 \pi \mu } ( 1- 2\delta_{i3 } ) \left\ { ( 2-\alpha ) \frac{r_i \delta_{j3 } - r_j \delta_{i3}}{r^3 } + \right .\nonumber \\ & \left .\quad \alpha \xi_3 \left [ \frac{\delta_{ij}}{r^3 } - \frac{3r_ir_j}{r^5 } \right ] \right\}.\end{aligned}\ ] ] we will refer to as the single - layer kernel in a half - space . the first contribution to in formula ( [ mindlinfull ] ) is the direct arrival " from the source in a uniform infinite medium , given by the kelvin formula ( [ slpkernel ] ) .the second piece has the same form , but with the roles of and reversed . 
since , this is the arrival at target " from a source at with modified lam coefficients .thus , interactions governed by both the and contributions can be computed using the free - space " single - layer kernel . and are quite different and their analysis is the principal contribution of this paper .a simple algebraic trick permits the computation of the contributions .namely , we set and .it is easy to check that thus , (x_1,x_2,-x_3 ) \, , \ ] ] where $ ] denotes the kelvin formula with the dependence on the lam coefficients made explicit .[ rmk_slp_signflip ] note that the argument has been replaced by , so that some care is required when evaluating terms such as which appear in the stress and strain tensors .the double - layer kernel in a half - space is given by \nu_k .\label{dlpkernel}\ ] ] this kernel describes the displacement field due to a dislocation vector across a surface with orientation vector : ( typically , the orientation vector is normal to the surface . )the dislocation vector is sometimes called a _double - force vector_. to compute , we note first that where \frac { \delta_{3k}r^2 + r_k ( 2r+r_3)}{r^3(r+r_3)^2 } \label{bijk } \\ & \qquad + \left [ \frac{r_i \delta_{jk } + r_j \delta_{ik}}{r(r+r_3)^2 } - \right .\nonumber \\ & \qquad \qquad \left .\left . r_ir_j \frac{2 \delta_{3k}r^2 + r_k ( 3r+r_3)}{r^3(r+r_3)^3 } \right ] ( 1-\delta_{i3})(1-\delta_{j3 } ) \right ] \right\ } , \nonumber \end{aligned}\ ] ] + \alpha \left [ \frac{\delta_{ij}}{r^3 } - \frac{3 r_i r_j}{r^5 } \right ] \delta_{3k } + \label{cijk } \\ &\qquad\qquad \left .3 \alpha \xi_3 \left [ \frac{r_i\delta_{jk } + r_j \delta_{ik } + r_k \delta_{ij}}{r^5 } - \frac{5r_ir_jr_k}{r^7 } \right ] \right\}. \nonumber\end{aligned}\ ] ] as for the single - layer kernel , (x_1,x_2,-x_3 ) \ , .\ ] ] we need to compute the contribution of to the double - layer kernel according to ( [ dlpkernel ] ) : \nu_k d_j \ , .\ ] ] suppose that we invoke the _ free - space _ double - layer kernel with and dislocation vector , so that we actually compute \nu_k d_j \\ & = \left [ - \tlambda \delta_{jk } \frac{\partial { a_i^n}}{\partial \xi_n } + \mu \left ( \frac{\partial { a_i^j}}{\partial \xi_k } + \frac{\partial { a_i^k}}{\partial \xi_j } \right ) \right ] \nu_k d_j \ , .\end{aligned}\ ] ] fortunately , the difference is a simple harmonic function : \nu_k d_j \nonumber \\ & = ( \tlambda+ \lambda ) \frac{\partial { a_i^n}}{\partial \xi_n } ( { \bf \nub \cdot { d } } ) \nonumber \\ & = \frac{1}{8\pi\mu } ( 2\lambda + 4\mu ) ( { \bf \nub \cdot { d } } ) \left\ { \alpha \frac{r_i}{r^3 } - ( 2- \alpha )\frac{3r_i + r_i } { r^3 } + 3 ( 2-\alpha ) \frac{r_i r^2}{r^5 } \right\ } \nonumber \\ & = \frac{1}{8\pi\mu } ( 2\lambda + 4\mu ) ( { \bf \nub \cdot { d } } ) ( 2\alpha-2 ) \frac{r_i}{r^3 } \nonumber \\ & = \frac{1}{2\pi } ( { \bf \nub \cdot { d } } ) \frac{-r_i}{r^3 } = \frac{1}{2\pi } ( { \bf \nub \cdot { d } } ) \frac{\partial}{\partial x_i } \frac{1}{r } \ , . \label{diffcomp}\end{aligned}\ ] ] this difference can be computed using a single call to the fmm for the laplace equation , since the result is simply the gradient of the field due to a point source with strength . for those keeping careful track of indices , note that , using the okada notation , it is indeed the gradient that is required .we have moved to in the free space call .thus , justifying the last equality in ( [ diffcomp ] ) . 
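as a concrete illustration of the free - space building block used above , here is a minimal sketch of the kelvin displacement kernel . since the displayed formula ( [ slpkernel ] ) is stripped in this extraction , the normalisation below follows the standard kelvin solution written with alpha = ( lambda + mu ) / ( lambda + 2 mu ) and should be checked against the paper s conventions ; per the discussion above , the b - type image contribution can reuse the same routine with the source reflected through the free surface and modified lame coefficients ( left as ordinary parameters here ) , while the c - type correction for the double - layer kernel reduces to the gradient of a single 1 / r potential .

    import numpy as np

    def kelvin_displacement(x, xi, f, lam, mu):
        # free-space (kelvin) displacement at target x due to a point force f at xi,
        # written with alpha = (lam + mu) / (lam + 2 mu); normalisation assumed, see text
        alpha = (lam + mu) / (lam + 2.0 * mu)
        d = np.asarray(x, float) - np.asarray(xi, float)
        r = np.linalg.norm(d)
        U = ((2.0 - alpha) * np.eye(3) / r + alpha * np.outer(d, d) / r**3) / (8.0 * np.pi * mu)
        return U @ np.asarray(f, float)

    # illustrative call; for the b-type image term one would reflect the source
    # (xi3 -> -xi3, target z flipped) and pass modified lame coefficients instead
    print(kelvin_displacement([1.0, 0.5, -2.0], [0.0, 0.0, -1.0], [0.0, 0.0, 1.0], lam=1.0, mu=1.0))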
the difficulty in developing a fast algorithm for the mindlin solution , however , lies not in handling the free space kernel or the simple image .rather , it lies in the kernels and .ignoring the scaling factor , the components of displacement induced by the image can be written in the form : without entering into a detailed derivation , mindlin s basic observation was that the image could be derived from a consideration of all second derivatives of a scalar potential .more precisely , we have the following lemma . [ l3.1 ]let denote the scalar potential given by . then where the subscript denotes differentiation with respect to the corresponding variable .it follows that the contribution to the displacement induced by the image in the single - layer kernel is given by \, , \ ] ] where denotes the gradient with respect to the target location and denotes the gradient with respect to the image source location at .[ slprmk ] note that if were the potential due to a simple charge source , then would be the gradient of the potential induced by a dipole with orientation and strength given by .a straightforward but tedious calculation yields [ l3.2 ] let .then the contribution to the displacement induced by the image in the double - layer kernel is given by where \nonumber \\ & \qquad - 2\lambda ( { \bf \nub \cdot { f } } ) \bs_{\xi_3\xi_3}. \label{bdlp}\end{aligned}\ ] ] [ dlprmk ] note that the formula for in lemma [ l3.2 ] is , in essence , a quadrupole field of a -type source with specific second derivative contributions defined in ( [ bdlp ] ) .it is easy to verify that is a scalar harmonic function in the lower half - space .it is also clear , however , that it can not describe the field due to a bounded collection of charges , since is growing as . in this section ,we describe some new far field representations that are somewhat involved , but permit much more efficient computation . we begin by considering the -type sources contained in the boxes and in fig .[ downlist ] .they are separated from the target boxes and by at least one box length , so that far field and/or local expansions should be rapidly convergent . from above , the displacement in the lower half - space due to the image sources in , say , is given by , where the scalar is given by where denotes the number of image sources in and denotes the vector from the image source to the target point . within the box ,however , the field is smooth and harmonic , and can be written in the form of a local expansion : with the spherical coordinates of with respect to the box center of . here, is the usual spherical harmonic of degree and order where the associated legendre functions are defined by the rodrigues formula and is the legendre polynomial of degree .the coefficients of the local expansion can be computed by projection onto the spherical harmonic basis ( integrating over the surface of a sphere enclosing the box and centered at the box center ) .that is , this can be carried out in work , where is the order of the expansion in ( [ local ] ) by using a tensor product grid with gauss - legendre nodes in the variable and equispaced nodes in the variable . in order to develop a more efficient fast algorithm , however , we would like to have outgoing representations from the source box that can make use of the full framework of the fmm .one such representation is based on the plane wave formula ( , p. 
1256 ) for the potential at a target due to a simple charge source at : valid for .the following theorem provides an expression for the displacement induced by single and double - layer sources in terms of plane waves ( that is , complex exponentials of the components ) .[ t3.1 ] let denote the displacement induced by a single - layer force vector located at the image source that lies in a source box centered at .then where and theorem [ t3.1 ] can be proven by fourier analysis and contour deformation , as in the derivation of the representation ( [ exprep ] ) in .[ r3.3 ] alternatively , we recall that . we may write this relation in the form it is straightforward to check that constants of integration can be ignored since they would only permit linear functions of , , and to appear in and these are annihilated by the second derivative operators which arise in computing the displacement , according to lemma [ l3.1 ] .note now that the operator corresponds in ( [ exprep ] ) to division by .this results in a divergent integral , but using lemma [ l3.1 ] again , the displacement clearly corresponds to multiplication by a factor of either , , or ( the signatures of , , and , respectively ) .this argument , of course , is not entirely rigorous , but can be made so . by superposition, we obtain a plane wave expansion for the field due to a set of sources , summarized in the following lemma .[ l3.3 ] let denote the displacement induced by a collection of single - layer force vectors at image source locations lying in a source box centered at .then the components of displacement are given by the plane wave representation of theorem [ t3.1 ] , with a plane wave expansion can be obtained for the double - layer kernel as well .the proof is analogous .[ t3.2 ] let denote the displacement induced by a double - layer force vector with orientation vector , located at the image source that lies in a source box centered at .then , where - 2 \lambda ( { \bf \nub } \cdot { \bf d } ) ) , \end{aligned}\ ] ] and the are defined in ( [ sdef ] ) .[ l3.4 ] let denote the displacement induced by a collection of double - layer force vectors at image source locations with orientation vectors lying in a source box centered at .then the components of displacement are given by the plane wave representation of theorem [ t3.2 ] , with - 2 \lambda ( { \nub^n } \cdot { \bf d}^n ) ) .\end{aligned}\ ] ] quadratures have been developed for these plane wave formulas in , valid so long as the source and target boxes are separated in the -direction by at least one intervening box length . referring to fig .[ downlist ] , and are well separated from but only is well separated from . it is demonstrated in that 3 digits of accuracy can be achieved with about 100 plane waves , 6 digits can be achieved with about 560 plane waves , and 10 digits can be achieved with about 1800 plane waves . more concretely , suppose we wish to enforce a maximum error of .given a well - separated image source and target , we have } , \label{disc18}\ ] ] where , and the weights , nodes and values are given in table [ tab2 ] .( the total number of exponentials required is . 
)the weights and nodes correspond to a discretization of the outer integral in ( [ exprep ] ) .the inner integral in ( [ exprep ] ) is discretized using the trapezoidal rule with nodes .the quadratures are designed under the assumption that and .this corresponds to their usage in the fast multipole method , where by convention , boxes at every level of the fmm hierarchy are rescaled to have unit size ..columns 1 and 2 contain the eighteen weights and nodes for discretization of the outer integral in ( [ exprep ] ) at six digit accuracy .column 3 contains the number of discretization points needed in the inner integral , denoted by ( from ) .[ tab2 ] [ cols=">,>,>",options="header " , ]in this paper , we have presented a fast multipole method for elastostatic interactions using mindlin s solution the green s function that satisfies the condition of zero normal stress in a half - space .we hope that the algorithm will prove useful in geophysical modeling .we thank shidong jiang , michael minion , michael barall , terry tullis , jim dieterich , and keith richards - dinger for useful conversations .this work was supported by the national science foundation under grant dms-0934733 and by the department of energy under contract defg0288er25053 .x. wang , j. kanpka , w. ye , n.r .aluru , j. white , algorithms in fast stokes and its application to micromachined device simulation , ieee trans .comput . aided des .. syst . * 25 * , 248257 ( 2006 ) .
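as a numerical sanity check on the plane - wave representation ( [ exprep ] ) discussed above , the sketch below evaluates 1 / r through the sommerfeld - type identity , using a generic gauss - laguerre rule for the outer integral and the trapezoidal rule for the inner angle ; it is not the optimised quadrature of table [ tab2 ] , and the node counts are illustrative only .

    import numpy as np
    from numpy.polynomial.laguerre import laggauss

    def inv_r_planewave(dx, dy, dz, n_lam=30, n_alpha=40):
        # checks 1/r = (1/2pi) int_0^inf e^{-lam dz} int_0^{2pi} e^{i lam (dx cos a + dy sin a)} da dlam
        # for dz > 0, with a generic gauss-laguerre rule outside and the trapezoidal rule inside
        u, w = laggauss(n_lam)
        lam = u / dz
        alpha = 2.0 * np.pi * np.arange(n_alpha) / n_alpha
        inner = np.cos(np.outer(lam, dx * np.cos(alpha) + dy * np.sin(alpha))).mean(axis=1)
        return np.dot(w, inner) / dz

    dx, dy, dz = 0.3, 0.4, 1.0
    print(inv_r_planewave(dx, dy, dz), 1.0 / np.sqrt(dx**2 + dy**2 + dz**2))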
|
in this paper , we present a fast multipole method ( fmm ) for the half - space green s function in a homogeneous elastic half - space subject to zero normal stress , for which an explicit solution was given by mindlin ( 1936 ) . the image structure of this green s function is unbounded , so that standard outgoing representations are not easily available . we introduce two such representations here , one involving an expansion in plane waves and one involving a modified multipole expansion . both play a role in the fmm implementation . * key words . * fast multipole method ; linear elasticity ; mindlin s solution .
|
recently , we developed a formulation for determining binary elements with astrometric observations .the simplified solution is written in terms of elementary functions , and therefore requires neither iterative nor numerical methods ( asada et al .this solution has been generalized to a binary system in open ( hyperbolic or parabolic ) orbits as well as closed ( elliptic ) ones ( asada 2007 ) .an extension to observational data has been also discussed ( asada et al . 2007 ) .the solution gives an explicit form of binary elements such as the eccentric anomaly and the major axis of elliptic orbits .oyama et al .( 2008 ) made an attempt to use this solution for discussing some uncertainty in binary elements because of large scatter of their data points , when they measured proper motions of maser sources in the galactic center with vera .on the other hand , the remaining parameters of the time of periastron passage and the longitude of ascending node are not discussed in the simplified solution .hence , the solution is rather simplified .however , these parameters are needed to make a comparison between the simplified solution and conventional ones .in addition , the lack of information on the remaining parameters apparently suggests a certain incompleteness of the simplified solution . in this brief article , therefore , we shall derive , in the framework of the simplified solution , both the time of periastron passage and the longitude of ascending node in order to complete the solution .astrometry plays a fundamental role in astronomy through providing useful star catalogs based on precise measurements of the positions and movements of stars and other celestial bodies . for instance , astrometric observations provide an useful method of determining mass of various unseen celestial objects currently such as a massive black hole ( miyoshi et al .1995 ) , an extra - solar planet ( benedict et al . 2002 ) and two new satellites of pluto ( weaver et al .astrometry of sharpless 269 with vera detects a trigonometric parallax corresponding to a distance of 5.28 kpc , which is the smallest parallax ever measured , and puts the strongest constraint on the flatness of outer rotation curve ( honma et al .2007 ) . accordingly, astrometry has attracted renewed interests , since the hipparcos mission successfully provided us the precise catalog at the level of a milliarcsec .in fact , there exist several projects of space - borne astrometry aiming at a accuracy of a few microarcseconds , such as sim ( shao 2004 ) , gaia ( mignard 2004 , perryman 2004 ) and jasmine ( gouda et al .2007 ) . in this paper , we focus on an astrometric binary , for which only one of the component stars can be visually observed but the other cannot , like a black hole or a very dim star . in this case , it is impossible to directly measure the relative vector connecting the two objects , because the secondary is not directly observed .the position of the star is repeatedly measured relative to reference stars or quasars . on the other hand ,the orbit determination of resolved double stars ( visual binaries ) , which are a system of two visible stars , was solved first by savary in 1827 and by many authors including kowalsky , thiele and innes ( binnendijk 1960 , aitken 1964 for a review on earlier works ; for the state - of - the - art techniques , e.g. , eichhorn and xu 1990 , catovic and olevic 1992 , olevic and cvetkovic 2004 ) . 
the relative vector from the primary star to the secondary has an elliptic motion with a focus at the primary .this relative vector is observable only for resolved double stars . in conventional methods of orbit determination ,the time of periastron passage is one of important parameters because it enters the kepler s equation as where , , and denote the time of periastron passage , orbital period , eccentricity and eccentric anomaly , respectively ( e.g. , danby 1988 , roy 1988 , murray and dermott 1999 , beutler 2004 ) .the simplified solution does not use the kepler s equation in order to avoid treating such a transcendental equation .this paper is organized as follows .our notation in the simplified solution will be summarized in 2 .the time of periastron passage in the simplified solution will be derived in 3 . the longitude of ascending node will be obtained in 4 .our notation in the simplified solution is briefly summarized as follows .we neglect motions of the observer and the common center in our galaxy .namely , we take account only of the keplerian motion of a star around the common center of mass of a binary system .let us define as the cartesian coordinates on a celestial sphere , in such a way that the apparent ( observed ) ellipse on the celestial sphere can be expressed in the standard form as where .the eccentricity is .this eccentricity may be different from , the eccentricity of the actual elliptic orbit , because of the inclination of the orbital plane with respect to the line of our sight .the star is located at on the celestial sphere at the time of for .we use a fact that the law of constant - areal velocity still holds , even after a keplerian orbit is projected onto the celestial sphere . here , the area is swept by the line interval between the star and the projected common center of mass but not a focus of the apparent ellipse ( see fig . ) .this fact is expressed as where and denote the area swept during the time interval , for , and the total area of the apparent ellipse , respectively .the swept area is expressed as ( asada et al .2004 , asada 2007 ) .\label{areas}\ ] ] the eccentric anomaly in the apparent ellipse is given by .the orbital elements can be expressed explicitly as elementary functions of the locations of four observed points and their time intervals ( asada et al .let us take four observed points , , and for . the location of the projected common center is given by where , and are elementary functions of , and for , , . the eccentric anomaly in the actual ellipse ( on the orbital plane ) is denoted as ( see eq . ) . given , , and , we can analytically determine the parameters , , and as ( asada et al .2004 ) where order to determine and for an actual ellipse , the simplified solution requires neither the time of periastron passage nor the longitude of ascending node , ( asada et al .if one wishes to know and , however , they can be determined as follows ( see also fig .first , we discuss in this section .the projected position of the periastron on the celestial sphere , , is determined as because the ratio of the semimajor axis to the distance between the center and the focus of the ellipse remains unchanged , even after the projection ( asada et al .the eccentric anomaly of the periastron in the apparent ellipse is introduced as where is given also by eq .( ) .thereby , we can determine ( mod ) . 
by using eq .( ) , we obtain where we can determine because the eccentric anomaly in the apparent ellipse at , denoted as , is nothing but , which has been determined by eqs .( ) and ( ) .therefore , eq . is solved for as where the r.h.s .is obtained from observed quantities .let us consider the projected periastron at on the apparent ellipse . in the simplified solution , expressed as eq .( ) . herewe make a translation of in such a way that the common center of mass can be located at the origin of new coordinates .namely , the axis is taken to lie along the major axis of the apparent ellipse in the celestial sphere , and the axis is perpendicular to the axis in the celestial sphere ( see fig . ) . in the coordinates , the position of the projected periastron becomes on the other hand , by projecting the actual ellipse onto the celestial sphere , we obtain where the coordinates are chosen so that the ascending node can be in the -direction ( see fig . ) .the longitude of ascending node , which is the angle between the and axes , relates the two coordinates of and by rotation .therefore , from eqs . and ,we obtain this relation determines ( mod ) . for instance, we obtain explicitly where , , and in the r.h.s .have been determined by eqs .( ) , ( ) , ( ) and ( ) . in conventional methods , determining tightly coupled with and . on the other hand, it can be done separately from and in the simplified solution .it should be noted that in practical applications a reference direction chosen by observers may be different from the major axis of the apparent ellipse .in such a practical case , is the angle from the reference direction to the direction of the ascending node . to compute the longitude of ascending node ,therefore , the angle from the reference direction to the major axis is added into the angle measured from the major axis . in short ,the longitude of ascending node is generally the sum of and , where is the angle determined by using eq .the expression of is obtained in the straightforward manner , for instance as eq .( 6 ) in asada et al .( 2007 ) , where they denoted as .tables 1 and 2 give an example to show the flow of actual determination of all six orbital elements .this would be helpful to the readers who code computing routines in practical applications to check their programs .table 1 shows some given values for all six orbital elements and the orbital period .based on these elements , we first prepare a virtual observation data set , which is listed in table 2 .the elements are then reproduced from the data by using the proposed method . here , we discuss how to determine in the simplified solution a position of a component star at arbitrary time ( mod ) . 
for , and ,( ) becomes this is a transcendental equation for on the celestial sphere .this situation seems similar to that the kepler s equation is transcendental in on the orbital plane .here we should note that the time of periastron passage is needed in order to treat kepler s equation , whereas it is not for eq .( ) .this is because we employ the time interval , while the kepler s equation needs the time itself instead of the interval .regarding this point , thiele s method for visual binaries is closer to the simplified solution , in the sense that they use the time interval in order to delete the time of periastron passage .a crucial difference is that thiele s method uses kepler s equation on the orbital plane ( thiele 1883 ) , while the simplified one does the constant areal velocity in the apparent ellipse on the celestial sphere . in this sense ,the simplified solution more respects measured quantities on the celestial sphere than conventional ones ( see fig . ) .it is verified numerically that the above procedure enables us to determine in the simplified solution locations of a star at arbitrary time ( see fig . for an example ) .in this paper , we obtain in the simplified solution both the time of periastron passage and the longitude of ascending node in order to complete the solution ( see fig . ) . in conclusion ,the simplified solution requires neither iterative nor numerical methods when we determine all the elements including and .it does only when we wish to determine the star s position at arbitrary time . before closing this paper , it is worthwhile to mention that eqs .( ) , ( ) and ( ) can be applied to a case of open orbits in the straightforward manner . for open orbits ,expressions of , , , , and have been already derived in the framework of the simplified solution ( asada 2007 ) .we would like to thank hiroshi kinoshita for stimulating conversations .we wish to thank toshio fukushima for his useful comments on the earlier version of the manuscript .this work was supported by a japanese grant - in - aid for scientific research from the ministry of education , no .19035002 .aitken , r. g. 1964 _ the binary stars _( ny : dover ) asada , h. , akasaka , t. , & kasai , m. 2004 , pasj , 56 , l35 asada , h. 2007 , celest .astron . , 97 , 151 .asada , h. akasaka , t. , & kudoh , k. 2007 , aj , 133 , 1243 .benedict , g. f. et al .2002 , apj , 581 , l115 .beutler , g. 2004 _ methods of celestial mechanics _ ( berlin : springer ) binnendijk , l. 1960 _ properties of double stars _( philadelphia : university of pennsylvania press ) .catovic , z. , & olevic , d. 1992 in _ iau colloquium 135 , asp conference series vol . 32 _( eds mcalister h. a. , hartkopf w. i. , ) 217 - 219 ( san francisco , astronomical society of the pacific ) .danby , j. m. a. , 1988 _ fundamentals of celestial mechanics _( va : william - bell ) eichhorn , h. k. , & xu , y. 1990 , apj , 358 , 575 gouda , n. et al .2007 , advances in space research , 40 , 664 .honma , m. et al .2007 , pasj , 59 , 889 mignard , f. ` overall science goals of the gaia mission ' , _ proc .the three - dimensional universe with gaia _ , 4 - 7 october 2004 , paris ( netherlands : esa publications ) miyoshi , m. et al .1995 , nature , 373 , 127 murray , c. d. & dermott , s. f. 1999 _ solar system dynamics _( cambridge : cambridge univ . press ) olevic , d. , & cvetkovic , z. 2004 , a&a , 415 , 259 oyama , t. , miyoshi , m. , deguchi , s. , imai , h. , & shen , z. 2008 , pasj , _ in press _ [ arxiv:0710.1393 ] perryman , m. a. c. 
` overview of the gaia mission ' , _ proc .the three - dimensional universe with gaia _ , 4 - 7 october 2004 , paris ( netherlands : esa publications ) roy , a. e. 1988 _ orbital motion _ ( bristol : institute of physics publishing ) shao , m. 2004 , science overview and status of the sim project , spie 5491 - 36 thiele , t. n. 1883 , astron .nachr . , 104 , 245 weaver , h. a. et al .2006 , nature , 439 , 943 .numerical example of orbital elements .the reference direction is taken along the -axis , which is different from the major axis of the apparent ellipse in this example . is the angle measured from the reference direction .
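for readers who , as suggested above , wish to check their computing routines , a minimal newton iteration for the conventional kepler equation is sketched below ; the simplified solution of this paper deliberately avoids this transcendental equation , so the sketch is only useful for generating or verifying conventional test data such as that of tables 1 and 2 . the parameter values are illustrative .

    import math

    def eccentric_anomaly(mean_anomaly, e, tol=1e-12, max_iter=50):
        # newton iteration for kepler's equation  M = E - e sin E
        E = mean_anomaly if e < 0.8 else math.pi
        for _ in range(max_iter):
            dE = (E - e * math.sin(E) - mean_anomaly) / (1.0 - e * math.cos(E))
            E -= dE
            if abs(dE) < tol:
                break
        return E

    # illustrative values: orbital period P, time of periastron passage T, eccentricity e
    P, T, e, t = 10.0, 2.0, 0.35, 5.0
    M = 2.0 * math.pi * (t - T) / P
    print(eccentric_anomaly(M, e))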
|
simplified solutions to determine binary elements by astrometry were obtained in terms of elementary functions ( asada et al . 2004 ) , and therefore require neither iterative nor numerical methods . in the framework of the simplified solution , this paper discusses the remaining two parameters of the time of periastron passage and the longitude of ascending node in order to complete the solution . we thus clarify a difference between the simplified solution and other analytical methods .
|
deep in the heart of northern england , on the banks of a river near a village at the edge of the lancashire pennines , there is a fine brick building dating from the late nineteenth century . here dwell two stout , well - preserved old ladies named victoria and alexandra. they will never invite you in for tea though , for the building is the ellenroad mill engine house and the two ladies are a giant , twin compound steam engine operating in tandem , originally built in 1892 . on weekends willing teams of overalled maids and butlers oil and polish the ladies and fire up the old lancashire boiler that delivers the steam to their cylinders to move the pistons that drive the giant , 80-ton flywheel .the speed of the engines is controlled by a centrifugal governor , and the motions of this device , occurring on time and spatial scales that can be appreciated by the human visual cortex , are fascinating to watch . originally patented by james watt in 1789 , the centrifugal steam engine governor is the most celebrated prototype example of a self - regulating feedback mechanism .the device consists of two steel balls hinged on a rotating shaft which is spun from a belt or gears connected to the flywheel , figure [ governor - sketch ] . in stable operation , as the speed of the engine increases the inertia of the flyballs swings the arms outwards , contracting the aperture of a valve which controls the speed of rotation by restricting the steam supply .if the engine lags due to an additional , imprecisely known , load ( in the mill this might have been another loom connected up to the engine by a belt drive ) the flyballs are lowered and the valve opens , increasing the steam supply to compensate .thus the design of the governor cleverly uses the disturbance itself , or deviation from set - point or desired performance , to actuate the restoring force . in certain operating regimes the motions of the governor may lose stability , becoming oscillatory and spasmodic , amplifying the effect of the disturbance and thwarting control of the engine .nineteenth century engineers called this unstable behaviour _ hunting _ and devoted much effort to improving the design of centrifugal governors .james clerk maxwell was the first to formulate and analyse the stability of the equations of motion of the governor , explaining the onset of hunting behaviour in mathematical terms , followed ( independently ) by vyshnegradskii .we analyse vyshnegradskii s equations for the governor s motion in section [ sec3 ] , as an exemplary three dimensional stability problem .the self - correcting centrifugal governor is a simple feedback control system because the changes in velocity are fed back to the steam valve .its widespread adoption during the 18th and 19th centuries dramatically transformed the steam - driven textile mills , the mining industry , and locomotion .( in 1868 , the year maxwell published `` on governors'' there were an estimated 75,000 watt governors in england alone . ) without this device the incipient industrial revolution could not have progressed , because steam engines lacking self - control would have remained hopelessly inefficient , monstrous contraptions , requiring more than the labour that they replaced to control them .watt s iconic governor also embodies a radical change in the philosophy of science .
for several hundred yearsthe mechanical clock , with its precise gears and necessity for human intervention to rewind it or correct error and its complete absence of closed - loop feedback , had been the dominant motif in scientific culture . in a common metaphor ,the universe was created and ordered by god the clockmaker .isaac newton had no doubt that god had initiated the celestial mechanics of the motions of the planets and intervened when necessary to keep his creation perfectly adjusted and on track .the clockwork view was also deeply satisfying to laplace , one of the most influential mathematicians of the eighteenth and early nineteenth centuries .stability theory was developed some two centuries and more after newton published his _ principia _ ( 1687 ) , so he could not have known that the planetary orbits may be what poincar called poisson stable ( small perturbations are self - correcting ) or they may be chaotic . as concepts of feedback and stability were developed rigorously and applied in the late nineteenth and early twentieth centuries , divine open - loop control began to wane and there came a growing awareness of systems as dynamical entities that can regulate their own destiny and internally convert uncertain inputs into stable outputs .the technological advances in transport , power , and communications made possible by feedback control and applied stability theory are agents of change , the vectors of liberty , liberalism , and literacy in societies , themselves enabling the blossoming and seeding of more sophisticated ideas of feedback and stability in complex environmental , socio - economic , and biological systems ._ now , _ due to stability theory and feedback control , we may contemplate `` the fundamental interconnectedness of all things'' , but back _ then , _ in the clockwork days , people could not .it is surely no coincidence that totalitarian governments favour clockwork metaphors .control theory , then , is a major strand in the development of modern nonlinear dynamics , but it is not the first .the centrifugal governor also transformed the practice of astronomy , in that it enabled fine control of telescope drives and vastly improved quantitative observations , and it is this earlier force ( already alluded to above in mention of newton s and laplace s work ) in the development of dynamical systems and stability theory celestial mechanics on which we now focus attention .the next stage of our nonlinear dynamics odyssey takes us from the post - industrial north of england to the miraculously intact ( given the destructions of wwii ) medieval city of regensburg in germany , to an older , humbler but no less important building than that which houses the ladies , the kepler museum .in addition to celebrating the life and work of johannes kepler ( 15711630 ) the museum houses priceless manuscripts , letters , publications , and astronomical instruments and interpretive exhibits that tell a lively and inspiring story , that of the development of celestial mechanics from galileo to its culmination , in analytic terms , in the work of poincar .an exhibit from the 18th century , an exquisitely engineered brass orrery , or clockwork model of the solar system , in its detail and precision expresses the satisfaction and confidence of the clockwork aficionados of the age of enlightenment .but a nearby exhibit expresses , rather presciently , the need for a new metaphor for scientific endeavour and achievement .it is an early 19th century relief in which kepler 
unveils the face of urania , the muse of astronomy , whereupon she insouciantly hands him a telescope and a scroll inscribed with his own laws , as if to say : `` hmm not a bad job ; now take these back and do some more work then tell me why your elliptical orbits are non - generic '' .( see figure [ kepler ] . ) despite laplace s confidence the problem of the stability of the solar system refused to go away , but instead took on a central role in the preoccupations of mathematicians , physicists , astronomers , and navigators post - newton .it was by no means clear , even to newton , that newton s law was sufficient to describe the motions of three or more celestial bodies under mutual gravitational attraction .the problem also refused to be solved , in the sense of what was accepted as a `` solution '' during the latter 18th century and first half of the 19th century , i.e. , analytically in terms of elementary or previously - known special functions .progress was made in the mid-1800s in improving series approximations but , not surprisingly , the hydra of nonconvergence soon raised one after the other of its ugly ( of course ! ) heads . by 1885 , when it was chosen by weierstrass as one of four problems in the mathematics competition sponsored by king oscar ii of sweden , the -body problem had achieved notoriety for its recalcitrance but in doing so it had also driven many of the seminal advances in mathematics and produced many of the greatest mathematicians of the 19th century .the first problem in king oscar s competition was to show that the solar system as modeled by newton s equations is stable . in his ( corrected ) entry poincar invented or substantially extended integral invariants , characteristic exponents , and * poincar maps * ( obviously ) , invented and proved the * recurrence theorem * , proved the nonexistence of uniform first integrals of the three body problem , other than the known ones , discovered * asymptotic solutions * and * homoclinic points * , and wrote the first ever description of chaotic motion in short , founded and developed the entire subject of geometric and qualitative analysis . then he concluded by saying he regarded his work as only a preliminary survey from which he hoped future progress would result .poincar s `` preliminary survey '' is still inspiring new mathematics and applications , but during the 20th century the collective dynamic of dynamical systems development was highly nonlinear .homoclinic points and homoclinic chaos were partially treated by the american mathematician george birkhoff ( 18841944 ) he obtained rigorous results on the existence of periodic orbits _ near _ a homoclinic orbit and by cartwright and littlewood in their study of van der pol s ( non - hamiltonian ) equation , cartwright and littlewood stated numerous `` bizarre '' properties of solutions of this differential equation , implying the existence of an invariant cantor set , but their very concise paper was not easy to penetrate , and their results remained largely unknown until levinson pointed them out to stephen smale . 
during the 1960s and 1970s smale s representation of homoclinic chaos in terms of symbolic dynamics and the horseshoe map stimulated renewed interest in dynamical systems ( although we have skipped a lot of mathematical history here , most notably kam theory ) .happily , this coincided with the advent of desktop digital computers subject to moore s law .since the 1980s improvements in processor speed have both driven and been driven by the use of computational simulations of dynamical systems as virtual experiments , and inspired advances in fields such as network stability , numerical instabilities , and turbulence .essentially these advances are sophisticated and technologically facilitated applications of poincar s and lyapunov s stability theory , and in the next section we present the basics and some working definitions .it is somewhat ironic that improvements in processor speed have also led to renewed interest in low dimensional dynamical systems , which usually only require small - time computing and are at least partially amenable to rigorous stability analysis . large dynamical systems usually mean turbulent ones , and computation is , in essence , the notorious `` problem of turbulence '' . in a turbulent flow energy is distributed among wavenumbers that range over perhaps seven orders of magnitude ( for , say , a tokamak ) to twelve orders of magnitude ( for a really huge system , say a supernova ) . to simulate a turbulent flow in the computer it is necessary to resolve all relevant scales of motion in three dimensions .it is a fair estimate that such calculations would take 400 years at today s processor speeds , therefore a faster way to do them would be to rely on moore s law and wait only 20 years until computers are speedy enough .many of us in the turbulence business have realized that while we are waiting we can , more expediently , apply reduced dynamical systems methods to the problem , such as karhunen - loeve ( kl ) decomposition , to distill out a much - reduced , but nevertheless sophisticated , approximation to the dynamics and spatial structure of a turbulent flow .to introduce kl decomposition , we imagine a fractional distillation tower for which the feedstock is not crude oil but a high reynolds number flow . then instead of a natural distribution over hydrocarbon molecular weights we have an energy distribution over scales of motion .we know , in principle , how hydrocarbons are separated in the still according to their boiling points ( even if we do not work at an oil refinery ) , but what properties may we exploit to separate and re - form the energy components of a turbulent flow ? our turbulence refinery does not define the skyline of a seamy port city in complicated chiaroscuro , but exists more conveniently in constrained fluid flow experiments or as direct numerical simulations of the navier - stokes equations _ in silico_.
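a minimal numerical sketch of the kl decomposition , anticipating the description that follows : the eigenfunctions are obtained here from the singular value decomposition of a mean - subtracted snapshot matrix , with synthetic random data standing in for real flow snapshots ( all sizes are illustrative ) .

    import numpy as np

    rng = np.random.default_rng(0)
    # columns of X are synthetic flow "snapshots" sampled in time (placeholder data)
    X = rng.standard_normal((200, 50))
    X -= X.mean(axis=1, keepdims=True)            # remove the mean field
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)                  # fraction of energy captured per mode
    kl_modes = U[:, :5]                           # leading kl (pod) eigenfunctions
    print(np.cumsum(energy)[:5])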
the kl transform operates on data to yield eigenfunctions that capture in decreasing order most of the kinetic energy of the system , so it is especially useful for highly self - structured flows .+ very few dynamical systems have known , exact solutions .for the vast majority it can not even be proved that general solutions exist .stability theory is quite indifferent to such issues ; instead it tells us how families of solutions would behave , assuming they _ do _ exist .loosely we understand stability to mean that a solution does not run away , or to refer to the resilience of a solution to changes in initial conditions or to changes to the equation that generates it .stability is a qualitative property of dynamical equations and their solutions . for practical applicationsstability analysis allows us to say whether a given system configuration will exhibit runaway dynamics ( catastrophic failure ) or return to a stable quasi - equilibrium , limit cycle , or other attractor , in response to perturbation .we have indicated in section [ introduction ] above how the issue of stability of the planetary orbits drove the development of celestial mechanics , but stability is equally important in control theory from a design and operational point of view it could be said that control _ is _ applied stability . it is a grave issue because , as we show in section [ sec3 ] , feedback can result in systems that fail due to instabilities , as well as create ones which maintain homeostasis .thermal explosions , ecological `` arms races '' , and economic depressions are all more - or - less disastrous consequences of unstable feedback dynamics . a big stability question that occupies many scientists today concerns the long - term stability of the world s climate in response to the enhanced greenhouse effect ; questions related to stability of other complex systems will be explored in section [ sec6 ] . in this sectionwe give precise mathematical expression to these concepts of stability , for later reference . for more detail and discussion the readeris referred to the article in scholarpedia curated by holmes and shea - brown .consider the general dynamical system in vector form where and the derivatives are defined and continuous on a domain of the space of .let with the initial value .then , the ( forward ) orbit is the set of all values that this trajectory obtains : .* orbital or generalized lyapunov stability*. is orbitally stable if , for any , there is a neighbourhood of so that , for all in , and are -close .these general definitions of lyapunov stability and asymptotic stability are indifferent to the choice of initial values .lyapunov stability is intimated in figure [ orbital - stability ] , which sketches a segment of an orbit and a segment of a neighbouring orbit , in periodic and non - periodic cases .* lyapunov stability of equilibria*. is a stable equilibrium if for every neighborhood of there is a neighborhood of such that every solution starting in remains in for all .notice that need not approach .lyapunov stability means that when all orbits starting from a small neighbourhood of a solution remain forever in a small neighborhood of that solution the motion is stable , otherwise it is unstable .if is not stable , it is unstable .* asymptotic stability of equilibria*. 
an equilibrium is asymptotically stable if it is lyapunov stable and additionally can be chosen so that as for all .an asymptotically stable equilibrium ( stationary state ) and its local environment is sketched in figure [ a - equil - stability ] .it is all very well to settle the stability properties of a solution , but what then ?if , as is usually the case , we are studying eq .[ general ] as a model for coupled physical motions or a system of rate processes , and therefore necessarily imperfect , we also need information about how those properties fare under perturbations to the model , or _ structural stability_. the question usually goes something like this : when are sufficiently small perturbations of a dynamical system equivalent to the original unperturbed dynamical system ? andif a system is not structurally stable , how may one unfold it until it is ? andwhat ( new mathematics , physics ) do the unfoldings reveal ?the concept of structural stability has yielded a rich taxonomy of bifurcations and of different classes of vector fields .structural stability is thus fundamentally a classification science , a binomial key of the type that has been used in biology since the method was devised by the swedish botanist linnaeus ( 1707-1778 ) .it is more distracting than useful to define structural stability rigorously at this stage ( although authoritative definitions can be found in the literature , e.g. , hirsch and smale ( 1974) ) ; instead , we shall illustrate some of the concepts in section [ sec4 ] in relation to a perturbed simple pendulum as a simplified surrogate for the restricted three body problem .now that we have some background and theory resources to draw on , let us carry out a stability analysis of the centrifugal governor .this analysis is all the more important for being elementary because it introduces many of the key concepts of dynamical systems theory in a setting that is understandable to non - mathematician physical scientists and engineers and also sets the scene for the more complicated motions we describe in sections [ sec4 ] and [ sec5 ] .vyshnegradskii s equations of motion for the flyball governor sketched in figure [ governor - sketch ] were given as a 3-dimensional , autonomous , first - order dynamical system by pontryagin ( 1962) : where is the angle between the spindle and the flyball arms , is the rotational velocity of the flywheel , the transmission ratio , is the angular velocity of , is the gravitational acceleration , is the flyball mass , is the moment of inertia of the flywheel , represents the net load on the engine , is a constant , and is a frictional coefficient . the length of the arms is taken as unity . for a given load the engine speed and fly - ball angle are required to remain constant , and the unique steady state or equilibrium coordinates are easily found as , , .so far , so dull .dull , too , are the designers of engines , according to maxwell . in his treatment of the governor problem , which was more general than that of vyshnegradskii , he wrote : `` the actual motions corresponding to these impossible roots are not generally taken notice of by the inventors of such machines , who naturally confine their attention to the way in which it is _ designed _ to act ; and this is generally expressed by the real root of the equation . 
''the impossible roots he referred to are the complex roots of the characteristic equation obtained from the linearized equations of motion .maxwell and vyshnegradskii both used this method to investigate the mathematical stability of the engine - governor dynamical system and relate the results closely to observed misbehaviours of the physical system .their linear stability analyses provide criteria for which the system returns to its equilibrium engine speed and flyball angle when subjected to a small perturbation .let us represent the perturbed system by setting with , , , and recasting equations ( [ vysh - governor ] ) as where we have neglected terms that are quadratic in the small perturbations , , and . equations ( [ perturbed - vysh - governor ] ) are a linear system with constant coefficients that may be written succintly in matrix form where equation ( [ linear ] ) has nontrivial , linearly independent solutions of the form where the constant components of and the constant may be complex . differentiating ( [ s1 ] ) with respect to and substituting in ( [ linear ] ) gives the eigenvalue problem where is the identity matrix .the requirement that , needed to obtain nontrivial solutions , satisfies ( [ s2 ] ) if and only if the factor , or the determinant may be evaluated and equation ( [ determinant ] ) expressed in terms of the characteristic polynomial : the roots of ( [ characteristic ] ) are the eigenvalues of and the solutions of ( [ s2 ] ) are the corresponding eigenvectors . by inspection of equation ( [ s1 ] ) stability can ensue only if the real eigenvalues , or real parts of complex eigenvalues are negative . from analysis of the characteristic equation ( [ characteristic ] ) this condition can be written as now let us consider the dynamical behaviour of the engine - governor system in the light of ( [ condition ] ) and with the aid of figure [ bifurcation - diagrams ] . in ( a ) and ( b ) the equilibria and linear stability of equations [ vysh - governor ] have been computed numerically and plotted as a function of the friction coefficient .this is a bifurcation diagram , where the bifurcation or control parameter is assumed to be quasistatically variable , rendered in the variables ( a ) and ( b ) .we see immediately that stable , steady state operation of the engine - governor system requires frictional dissipation above a critical value . as decreased through the * hopf bifurcation * point hb the real parts of a pair of conjugate eigenvalues become positive , the equilibrium becomes unstable , and the motion becomes oscillatory .the envelope of the periodic solutions grows as is decreased further , which is also deduced in the inequality ( [ condition ] ) : a decrease in the coefficient of friction can destabilize the system .bifurcation diagrams rendered for the variables ( a ) and ( b ) , stable equilibria are marked by a solid line , unstable equilibria are marked by a dashed line , hb stands for hopf bifurcation , ns stands for neimark - sacker bifurcation , black dots mark the amplitude envelope of the oscillations .( c ) the period of the oscillations decreases with .( d ) continuations at the hopf bifurcation in the parameters and . ] as the bifurcation parameter is decreased through the marked value with the label ns the stable periodic solution , for which the floquet multipliers have modulus , undergoes a * niemark - sacker bifurcation*. 
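the linear stability analysis just described is easy to reproduce numerically . the sketch below uses the standard pontryagin form of the governor equations ( the displayed equations [ vysh - governor ] are stripped in this extraction , so the exact notation may differ ) , locates the equilibrium , and scans the eigenvalues of a finite - difference jacobian as the friction coefficient is lowered ; the change of sign of the largest real part marks the hopf point hb of figure [ bifurcation - diagrams ] . the ns ( torus ) bifurcation is taken up again immediately below . parameter values are illustrative .

    import numpy as np

    def governor(u, g=9.81, n=1.0, m=1.0, b=0.3, I=1.0, k=1.0, F=0.5):
        # standard pontryagin form of the watt governor equations (assumed; the
        # displayed equations are stripped here): phi flyball angle, psi = dphi/dt,
        # omega flywheel speed, b friction, I flywheel inertia, F net load
        phi, psi, omega = u
        return np.array([psi,
                         n**2 * omega**2 * np.sin(phi) * np.cos(phi) - g * np.sin(phi) - (b / m) * psi,
                         (k * np.cos(phi) - F) / I])

    def jacobian(u, h=1e-6, **params):
        J = np.zeros((3, 3))
        for j in range(3):
            du = np.zeros(3)
            du[j] = h
            J[:, j] = (governor(u + du, **params) - governor(u - du, **params)) / (2.0 * h)
        return J

    # equilibrium: cos(phi0) = F/k, omega0^2 = g/(n^2 cos(phi0)), psi0 = 0
    g, n, k, F = 9.81, 1.0, 1.0, 0.5
    phi0 = np.arccos(F / k)
    u0 = np.array([phi0, 0.0, np.sqrt(g / (n**2 * np.cos(phi0)))])
    for b in np.linspace(0.5, 0.05, 10):
        eig = np.linalg.eigvals(jacobian(u0, b=b))
        print(f"b = {b:.2f}   max Re(lambda) = {eig.real.max():+.3f}")

with these ( made - up ) parameter values the largest real part crosses zero at a finite friction coefficient , reproducing the qualitative picture of the bifurcation diagram : sufficient frictional dissipation stabilises the steady operating state .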
a conjugate pair of multipliers leaves the unit circle , and a two - dimensional asymptotically stable invariant torus bifurcates from the limit cycle . for the periodic solutions are unstable but the torus is stable .the behaviour of the system has become essentially 3-dimensional .this section assumes a working knowledge of hamiltonian mechanics from a text book such as goldstein ( 1980) or from undergraduate lecture notes such as dewar ( 2001) . rather than presume to capture the entire content and context of the restricted three body problem within the space of one chapter section we again summarize a small vignette from the panorama , a surrogate for the restricted three body problem .homoclinic chaos and the associated topics of poincar maps , symbolic dynamics , and the smale horseshoe construction , are fleshed out in guckenheimer and holmes ( 1983) and holmes ( 1990) .first let us return to kepler s ellipse , or the two - body problem of newton , which at the end of section [ introduction ] we gave in terms of the hamiltonian for the transformed harmonic oscillator , eq .[ kepler - ellipse ] .the well - known simple pendulum is also a harmonic oscillator , with hamiltonian and equations of motion the phase portrait of the flow , figure [ pendulum ] , shows the three families of periodic solutions bounded by the separatrices , which are emphasized in figure [ pendulum ] . the fixed point ( or equilibrium ) at represents the pendulum at rest and that at represents the upside - down position of the pendulum , keeping in mind that the flat phase portrait should be wrapped around a cylinder of circumference . elementary linear analysis tells us that the the fixed point at is a centre , with the solution matrix of the linearization having a pair of pure imaginary eigenvalues , and that at is a hyperbolic ( or non - degenerate ) saddle point , with the solution matrix of the linearization having having one positive and one negative eigenvalue .each point of the separatrices is homoclinic , or asymptotic to to the fixed point as .in fact the separatrices are simultaneously the stable and unstable manifolds for the saddle point .thus the phase portrait of the pendulum contains qualitative information about the global dynamics of the system .now consider the restricted three body problem that featured in poincar s memoir , in which two massive bodies move in circular orbits on a plane with a third body of negligible mass moving under the resulting gravitational potential . in a rotating framethe system is described by the position coordinates of the third body and the conjugate momenta .poincar studied the following two degree of freedom hamiltonian as a proxy for this system : with corresponding equations of motion by inversion of eq . [ proxy ] we have from which we can obtain the reduced equations of motion where denotes .we see that eqs [ reduced ] have the form of a periodically forced one degree of freedom system in which the angle variable plays the role of time . 
for eqs[ reduced ] are isomorphic to those for the simple pendulum , eqs [ pem ] , and the phase portrait is that of figure [ pendulum ] ( to make the origin a center we set ) .when a time - periodic perturbation is applied to the pendulum the stable and unstable manifolds that form the separatrix level set typically break up , but some homoclinic points may persist and with them small neighbourhoods of initial conditions , which are repeatedly mapped around in the region formerly occupied by the separatrixes .such regions can now fall on both sides of the saddle point so that of two solutions starting near each other , one may find itself on the rotation side and the other on the oscillation side . at each juncture near the saddle pointsuch solutions must decide which route to take .the global structure of the stable and unstable manifolds rapidly becomes very complicated .poincar prudently decided that , in this case , a thousand words are worth more than a picture : `` when we try to represent the figure formed by [ the stable and unstable manifolds ] and their infinitely many intersections , each corresponding to a doubly asymptotic solution , these intersections form a type of trellis , tissue or grid with infinitely fine mesh .neither of the two curves must ever cross itself again , but it must bend back upon itself in a very complex manner in order to cut across all of the meshes in the grid an infinite number of times.''(poincar ( 1899) , quoted in diacu and holmes ( 1996) ) . we have computed some orbits and rendered the data in figure [ perturbed - pend ] , which may or may not help to clarify the issue .segments of orbits belonging to the stable ( blue ) and unstable ( yellow ) manifolds of the saddle type periodic orbit of the periodically perturbed pendulum , eqs [ reduced ] with and . ]* transverse homoclinic points exist for . +a transverse homoclinic orbit occurs when the stable and unstable manifolds intersect transversally , i.e. , the unstable manifold intersects and crosses the stable manifold . in two dimensions , continuous dynamical systems do not have transverse homoclinic orbits , but a two - dimensional poincar map defined near a periodic orbit of a continuous dynamical system may have them . *transverse homoclinic points obstruct the existence of second integrals of the motion . *transverse homoclinic points imply that chaotic motions exist nearby . the model problem , eq .[ proxy ] , is essentially a simple pendulum coupled weakly to a linear oscillator . for the restricted three body problem itself , poincar showed that after applying perturbation methods and truncating certain higher order terms in the expansion the hamiltonian becomes completely integrable .he also showed that the reduced system , and therefore its poincar map , possesses hyperbolic saddle points whose stable and unstable manifolds , being level sets of the second integral , coincide , as they do for the pendulum illustrated in figure [ pendulum ] .he then asked the key question in the qualitative approach to dynamical systems : should i expect this picture to persist if i restore the higher order terms ?in other words , is the reduced system structurally stable ? it is now known that integrable hamiltonian systems of two or more degrees of freedom are not structurally stable .it is for this reason , even if no other , that they are exciting and productive to study . 
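a stroboscopic ( poincaré ) section of a periodically forced pendulum can be computed directly by sampling the flow once per forcing period , which is one concrete way to explore the broken separatrix just described ; the sinusoidal forcing and its amplitude and frequency below are generic illustrative choices , not poincaré's proxy hamiltonian from eqs [ reduced ] .

```python
import numpy as np

EPS, OMEGA = 0.1, 2.0   # illustrative forcing amplitude and frequency

def rhs(t, y):
    q, p = y
    return np.array([p, -np.sin(q) + EPS * np.cos(OMEGA * t)])

def rk4_step(t, y, h):
    k1 = rhs(t, y); k2 = rhs(t + h / 2, y + h * k1 / 2)
    k3 = rhs(t + h / 2, y + h * k2 / 2); k4 = rhs(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def poincare_section(y0, n_periods=200, steps_per_period=400):
    """record (q, p) once per forcing period: successive punctures of the section."""
    T = 2 * np.pi / OMEGA
    h = T / steps_per_period
    y, t, pts = np.array(y0, float), 0.0, []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            y = rk4_step(t, y, h)
            t += h
        pts.append(y.copy())
    return np.array(pts)

print(poincare_section([2.0, 0.0])[:5])   # first few points of the stroboscopic map
```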
in this sectionwe have described how the structural stability of a poincar map of a continuous dynamical system can be evaluated , even though in general such a map can not be computed explicitly . in the next sectionwe look at stability and chaos in an explicit discrete dynamical system . in a series of population dynamics experiments ,may and oster and co - workers chose to rear blowflies in boxes ( for reasons we can not entirely fathom surely there are more alluring model species ) , and count their numbers at every generation .the blowflies in their boxes are a simple ecological system consisting of a single species limited by crowding and food supply , but with no predation .the system was analysed as a model of discrete chaos , and , in a different paradigm , as a control system by mees ( 1981) .assuming discrete generations , the data for the population dynamics of the blowflies can be fitted by a first - order difference equation where is the number of blowflies in the time period .the function is chosen so that increases when the population is small , because there is plenty of food and living space in the box , but decreases when the population is large , because of competition for food and living space . the simplest single - humped function for that one can think of is a parabola : for which equation [ e17 ] is known as the logistic map .the parameter is then the reproduction rate constant .equation [ e17 ] then says that due to reproduction the population will increase at a rate proportional to the current population , and due to starvation the population will decrease at a rate proportional to the _ square _ of the current population . for example , if there is a large number of flies in a box in one time period , they will eat most of the food , and the next generation of flies will be few in number .the weird properties of this simple model never fail to delight people .their implications for ecologies were explored in may ( 1974) ; a good modern mathematical treatment , accompanied by downloadable software to play with , is given in chapter 1 of ball ( 2003) .the evolution of the population starting initially at may be found graphically as indicated in the _ cobweb diagram _ of figure [ cobweb](a ) , where the of eq .[ logistic ] is plotted against for a given value of ( dashed curve ) .a vertical line takes the eye from , the population in time - window , to the corresponding and an adjoining horizontal line takes you from to , the population in the time - window .the solution converges to a point of zero population growth where the graphs of and intersect .this period-1 fixed point ( or equilibrium ) is a stable attractor : all nearby orbits converge to it as .increasing the height of the hump , , means increasing the reproduction rate in the blowfly model .for example , at the equilibrium has become unstable and two new stable equilibria have appeared .these new equilibria are not fixed points of .they are fixed points of the second composition map , as shown in figure [ cobweb](b ) . 
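the period - doubling just described is easy to reproduce numerically . the sketch below assumes the standard parametrisation x_{n+1} = r x_n ( 1 - x_n ) of the logistic map and simply counts how many distinct values the orbit visits once the transient has died away .

```python
import numpy as np

def logistic(x, r):
    # standard parametrisation assumed: x_{n+1} = r * x_n * (1 - x_n)
    return r * x * (1.0 - x)

def attractor(r, x0=0.5, burn=1000, keep=128):
    x = x0
    for _ in range(burn):          # discard the transient
        x = logistic(x, r)
    pts = []
    for _ in range(keep):          # record points on the attractor
        x = logistic(x, r)
        pts.append(x)
    return np.array(pts)

# 1, 2, 4, 8 ... distinct values through the period-doubling cascade,
# a period-3 window near r = 3.83, and many values in the chaotic regime
for r in (2.8, 3.3, 3.5, 3.55, 3.83, 3.9):
    n = len(set(np.round(attractor(r), 6)))
    print(f"r = {r}:  {n} distinct attractor value(s)")
```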
here, the initial condition is the same as in ( a ) , and the iterates at first take the population toward the old fixed point .but then they are repelled from it , because it is unstable , and converge instead to the _ two _ intersections of and , between which they oscillate in a period 2 orbit .this situation corresponds to the population switching between two states : a highly populated generation results in the next generation being poorly populated , but then resources are plentiful enough to induce a populous generation again , and so on. one can not help but be curious as to what happens when the parameter is increased again , and again we could compute many more of these cobweb diagrams , each at a different value of , but both the diagrams and this chapter would become very crowded. our curiosity can be assuaged ( or whetted ! ) more succinctly by inspecting the bifurcation diagram of stable solutions in figure [ bd ] .one can easily make out the branch points at , corresponding to bifurcations to period 2 , 4 and 8 orbits . beyond that , the period - doubling repeats until the periodic behaviour of the population becomes chaotic .the population never settles to discernibly regular -periodic oscillations , although the window at suggests the resumption of some sort of regularity .so far we have viewed the blowfly system as a difference equation , to model the generational delay , and as a bifurcation problem , to study the stability of the dynamics . picking up the theme of section [ sec3 ] , it is also instructive to view the blowfly system as a simple feedback system . the output of the system ( number of adult blowflies )is sensed by a controller which implements a mechanism , approximated here by the model function to control the level of input , or number of larvae . the actuating mechanism which transformsthe larvae into adult flies is simply the delay time of one generation .figure [ flies ] represents the feedback system as a block diagram .this diagram may seem rather facile , and nowhere near as interesting as the cobweb or bifurcation diagrams , but it does highlight a different side to the problem .for instance we see that the block components are independent .we could change the function without changing the simple delay model . inspecting this diagramalso makes it easy to build in perturbations such as predation or injecting more flies from outside . the conceptual difference between modelling the blowfly population as a difference equation and as a feedback system is how information is treated . in the block diagram representationthe information flow is explicit and the feedback is obvious , and we can immediately think up ways of adding additional regulations to it . 
in this sense feedback is an _ information _ science . this information about connectivity is subsumed in the discrete dynamical model , which allows us to analyse the stability of the population but glosses over the fact that the instabilities are caused by feedback . the third ( and final , for this chapter ) destination in our world tour of nonlinear dynamics is the 41st floor of an office tower in the district of wan chai , hong kong . it is here that the transport operations and infrastructure of hong kong , kowloon , and the new territories ( which together constitute a special administrative region of the people's republic of china , or hksar ) are controlled and coordinated day - to - day , and planning and policy development for future transport needs are carried out . the job of the hksar department of transport is formidable . consider the problem : the public transport network carries over 11 million passenger trips _ each day _ and this number will increase . it consists of railways , franchised buses , public light buses , private buses , ferries , trams , and taxis . each of these components is a complex sub - network in its own right . the area is geographically diverse , with islands , harbour , waterways , steep hills , airport , and old built - up districts with limited road space to be traversed or accessed . environmental imperatives require the use of or conversion to low or zero emissions locomotive units . efficient integration with transport in the densely populated economic - tiger zones of the pearl river delta is becoming necessary . the network as a whole must be safe , affordable , reliable , and robust . it must minimize redundancy and duplication of services , yet be flexible enough to match new demand without undue time - lags and provide services to new and changing population and employment centres . this means it must be capable of response and adaptation on two time scales , daily and long - term ( approximately yearly ) . what a tall order ! can one tackle this complex network problem using the tools of dynamical systems theory ? in dynamical systems language we ask : is the hksar public transport network stable ? intuitively ( or through direct experience ) we expect such a complex network to exhibit * sensitive dependence on initial conditions * . one blinking red led on a signal - room console leads to a log - jam of peak hour trains . even with no perturbations on the network itself we know ( with depressing certitude ) that leaving for work five minutes later than usual is likely to result in arriving at work an hour late . these sorts of cascade effects in networks seem to occur when a small disturbance in one element of a network is transmitted through it , leading to instability as it spreads , but what lies behind these phenomena ? studying networks such as the hksar public transport network is about building models of how they function , and then analysing those models to understand how changes in the structure of the network will result in changes in behaviour . naïvely , one expects that increasing the fraction of interacting elements or increasing the strength of interaction will enhance the stability of a complex network , but as we will show in the next example , that is not necessarily so . in a paper in _ nature _ in 1972 robert may used random matrix theory to show that in a large , linear , randomly coupled network the system dimension and the coupling strength must together satisfy a simple inequality .
let us revisit the matrix equation ( [ linear ] ) : may considered this as the linearization of a ( large ) set of nonlinear first - order differential equations that describe an ecology , or populations of interacting species , but it could equally well describe rates of passenger turnover at each of nodes in a public transport network .the elements of the column vector are the disturbed populations and the elements of the interaction matrix describe the effect of species on species near equilibrium .each is assigned from a distribution of random numbers that has a mean of zero , so that any element is equally likely to be positive or negative , and a mean square value , which expresses the average interaction strength .then where is a random matrix and is the unit matrix .the probability that any pair of species will interact is expressed by the connectance , measured as the fraction of non - zero elements in .the elements in the random matrix are drawn from the random number distribution with probability or are zero with probability .for any given system of size , average interaction strength , and connectance we ask what is the probability that any particular matrix drawn from the ensemble gives a stable system ? may found that for large the system ( [ linear ] ) is almost certainly stable ( ) if and almost certainly unstable ( ) if this result suggests that an ecology that is too richly connected ( large ) or too strongly connected ( large ) is likely to be unstable and that the effect is more dramatic the larger the number of species . may s result is based firmly on stability theory as it was developed by poincar and lyapunov over a hundred years ago , as are more recent results on stability and control of dynamical network systems . for example ,yao _ _ et al__ in proposing a control method for chaotic systems with disturbances and unknown parameters ( imprecisely modelled or unmodelled dynamics ) rely on lyapunov stability theory , as do almost all of the applications mentioned by boccaletti and pecora ( 2006) in the preface to a special issue of the journal _ chaos _ devoted to stability of complex networks .although dynamical systems and stability theory was born and bred in celestial mechanics and control engineering , we now see that the concepts and methods have much wider application in the biological and environmental sciences and in socio - economic modelling and forecasting .a goal that is shared by many researchers in both hard and soft science is the improved management , and ultimately a priori design , of complex dynamical networks that are intrinsically imprecise or error - prone . to this endthere is a need to disseminate the principles of stability and chaos outside mathematics , so that non - mathematical scientists are better - equipped to understand and manage the dynamics of complex natural and anthropogenic systems , and channel uncertainty into stable output. 
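may's criterion is easy to probe by direct simulation : draw random community matrices of the form a = b - i , where the random part b has connectance c and interaction strength alpha , and estimate the probability that every eigenvalue has negative real part . the gaussian interaction distribution and unit self - regulation used below are common conventions and are our assumptions , not details taken from the text .

```python
import numpy as np

rng = np.random.default_rng(1)

def random_community_matrix(n, alpha, C):
    """a = b - i: off-diagonal entries are zero with probability 1 - C and
    otherwise drawn from N(0, alpha^2); the diagonal is -1 (self-regulation)."""
    B = rng.normal(0.0, alpha, size=(n, n)) * (rng.random((n, n)) < C)
    np.fill_diagonal(B, 0.0)
    return B - np.eye(n)

def prob_stable(n, alpha, C, trials=200):
    hits = sum(np.all(np.linalg.eigvals(random_community_matrix(n, alpha, C)).real < 0)
               for _ in range(trials))
    return hits / trials

for n, alpha, C in [(100, 0.08, 1.0), (100, 0.12, 1.0)]:
    print(f"n={n}, alpha={alpha}, C={C}:  n*alpha^2*C = {n * alpha**2 * C:.2f},  "
          f"P(stable) ~ {prob_stable(n, alpha, C):.2f}")
```

with n alpha^2 c below one almost every draw is stable , and above one almost every draw is unstable , in line with the inequality quoted above .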
how will these problems , fundamental and applied , be tackled ?how will the science of dynamical systems , stability and chaos advance ?we suggest that the three main approaches will be used in synergy : qualitative and asymptotic analysis , interdisciplinary collaboration , and computation .the rapid growth of interest in dynamical systems and chaos over the past 30 years is , in a sense , quite different from the way that areas of mathematics and physics developed in earlier times .it is not driven by industrialization , as for example was thermodynamics in the 19th century and classical control in the early 20th century , or by defence and cold war imperatives , as was nuclear physics from the 1940s to the 1960s .what we are seeing now is the reverse : theory and mathematics of dynamical systems and chaos together with faster computers are actually driving developments in a wide range of very diverse fields , from medical imaging to art restoration , traffic control to ecosystems , neuroscience to climatology .j. vyshnegradskii . on the general theory of governors .( sur la thorie gnerale des rgulateurs ) . , 83:318 , 1876 .( translation in c.c .bissell : stodola , hurwitz and the genesis of the stability criterion .j. control 50(6 ) , 1989 , 23132332 ) .the terms highlighted in bold - faced type in their first appearance in the text are defined or described in this glossary .more comprehensive glossaries of dynamical systems terminology may be found easily on the web ; for example , mrb.niddk.nih.gov/glossary/glossary.html , www.dynamicalsystems.org / gl / gl/. asymptotic solutions : : : solutions which asymptotically approach an unstable periodic solution .homoclinic points or doubly asymptotic solutions : : : points at which stable and unstable manifolds intersect transversally . in a hamiltonian flow the stable and unstable manifolds must intersect transversally infinitely often ( or coincide , as in the harmonic oscillator , equation [ hpem ] ) because otherwise one of them would shrink and volume conservation would be violated .this remains true for dissipative systems .homoclinic chaos : : or * homoclinic tangle * or * sensitive dependence on initial conditions * : a region densely packed with homoclinic points , where the dynamics is equivalent to and described by the smale horseshoe map .arbitrarily close initial conditions must actually belong to totally different parts of the homoclinic tangle , therefore they evolve quite differently in time .poincar map and cross section : : : a sort of stroboscopic map ; an extremely useful way of representing the dynamics of a two degree of freedom system on a plane .consider the set of trajectories of a two degree of freedom hamiltonian system that satisfy , where is a constant and and are canonical action - angle variables .each energy level is therefore three - dimensional . 
to construct a poincar mapwe take a two - dimensional transverse surface or _cross section _ such as that defined by =0 .then , for given the value of can be computed by solving the implicit equation , so that we may locally describe by the two variables .successive punctures of the surface in one direction by each trajectory form a stroboscopic map of the time evolution of the trajectory in phase space .recurrence theorem : : : a volume - preserving system has an infinite number of solutions which return infinitely often to their initial positions , or an infinite number of poisson stable solutions .hopf bifurcation : : : the real parts of a pair of conjugate eigenvalues become positive and a family of periodic orbits bifurcates from a `` spiral '' fixed point ( a focus ) .neimark - sacker bifurcation : : or secondary hopf bifurcation : consider a periodic orbit with period and suppose that a pair of floquet multipliers crosses the unit circle at at an isolated bifurcation point .an invariant torus is born .solutions on the torus are quasi - periodic , and if for integers and the motion is said to be phase - locked .the floquet multipliers are related to the eigenvalues of the poincar map linearised at the fixed point corresponding to the original -periodic orbit .
|
in this expository and resources chapter we review selected aspects of the mathematics of dynamical systems , stability , and chaos , within a historical framework that draws together two threads of its early development : celestial mechanics and control theory , and focussing on qualitative theory . from this perspective we show how concepts of stability enable us to classify dynamical equations and their solutions and connect the key issues of nonlinearity , bifurcation , control , and uncertainty that are common to time - dependent problems in natural and engineered systems . we discuss stability and bifurcations in three simple model problems , and conclude with a survey of recent extensions of stability theory to complex networks .
|
randomized nomination sampling ( rns ) is a rank - based sampling scheme .rank - based sampling schemes are data collection techniques which utilize the advantage of additional information available in the population to provide an artificially stratified sample with more structure .providing more structured sample enables us to direct our attention toward units that represent the underlying population .let be an absolutely continuous random variable distributed according to the cumulative distribution function ( cdf ) and the probability density function ( pdf ) , where is known and ( p - dimensional euclidean space ) , is unknown .further , let be a sequence of independent random variables taking values in ( the natural numbers ) with probabilities so that , .let be a sequence of independent bernoulli random variables with success probability ] based on the population shape .this reduces the concern in working with extremes when the underlying population is skewed .the rns - based statistical inference may be made under various situations .for example , there might be the case where s are the only available information and no further information is provided on either or , .there might also be situations in which the size of sets or the number of maximums ( and subsequently the number of minimums ) , or both are chosen in advance , instead of getting involved in a randomized process .in the situation where and/or are known , the cdf of can be found by conditioning on and , or both .the conditioning argument makes the theoretical investigation more complicated , but it provides more efficient statistical inference . in this paper ,both unconditional and conditional rns are studied .four types of rns data are introduced , corresponding to situations where , for any set , ( 1 ) the triplet are all known , ( 2 ) only are known , ( 3 ) only are known , or ( 4 ) only are known .these types of rns data are , respectively , called rns complete - data , type i , type ii , and type iii rns incomplete - data .we discuss rns - based maximum likelihood ( ml ) and method of moments ( mm ) estimates of the population parameters when the underlying random variable follows the proportional hazard rate ( phr ) or proportional reverse hazard rate ( prhr ) model .let be an absolutely continuous probability distribution function with density , possibly depending on an unknown vector of parameters , and let and ( where , by convention , we take ) so that , if , we have with the family of phr models ( based on ) is given by the family of distributions ^{1/\gamma({\pmb{\theta}})},\end{aligned}\ ] ] where . similarly , the family of prhr models is given by the family of distributions ^{1/\gamma({\pmb{\theta}})}.\end{aligned}\ ] ] the hazard rate function and the reverse hazard rate function at are given respectively by the phr and prhr models in ( [ hr - cdf ] ) and ( [ ihr - cdf ] ) are well - known in lifetime experiments .the lifetime distribution of a system and its components are of interest in reliability testing .statistical analysis of the lifetime of a system or its components is an important topic in many research areas such as engineering , marketing and biomedical sciences .see , for example , , , , and .the phr and prhr models include several well - known lifetime distributions . in the sequel , we are interested in estimating with specific choices of . some examples of hazard and reverse hazard rate models are presented in table [ examples ] .the remainder of the paper is organized as follows . 
in section [ mle ], we investigate the rns complete - data and provide the pdf and cdf of an rns observation in the form of complete - data .we also derive the ml estimators of in the phr and prhr model when the triplet , , is available . in section [ mle - incom ] , we present the ml estimation for the parameters based on incomplete rns data .we provide the pdf and cdf of observations in each rns incomplete - data and use the em algorithm to obtain the ml estimators of the parameters of interest . in section [ mme ] , we derive the rns - based mm estimation in the phr and prhr models ; when the rns data are from either complete- or incomplete - data scenarios . in section [ numeric ], we illustrate the numerical results in detail and compare the performance of the rns - based estimators with the corresponding srs estimators for the exponential and beta distributions .moreover , in section [ numeric ] , the performance of rns - based ml estimators in a more complicated situation is investigated using a real life dataset on fish mercury contamination measurement .let be a simple random sample of size from a continuous distribution with cdf and pdf .if it exists , the srs - based ml estimator of , denoted by , satisfies the ml equations where let be a randomized nomination sample of size from .the forms of the cdfs and pdfs of s , in addition to the rns - based ml equations , are determined by the available rns data . in this section ,we use the rns complete - data to derive the ml estimator of . in the rns complete - data case ,the triplets , , are known . in other words, one knows that , for , the observed value is from a set of size with the value and the rank in the -th set , where and are both known .an rns observation given and , where and , has the cdf and the pdf as follows and the log likelihood function based on the rns complete - data is given by upon differentiation of ( [ loglcomplete ] ) with respect to and equating the result to zero , the ( complete ) ml estimator of , denoted by , is obtained from where . since both and involved in the rns likelihood , equation ( [ dlc ] ) is more complicated to solve for than ( [ srs - mle ] ) , and for most distributions there is no closed form expressions for the ml estimators .following the idea proposed by , we consider the modified ml ( mml ) estimators of parameters . depending on the underlying distribution , one may need to replace one or both of the second and third terms on the left - hand side of ( [ dlc ] ) by their corresponding expected values .the obtained mml estimator of is denoted by .let , , be a sequence of iid random variables from the family of phr models in ( [ hr - cdf ] ) .the srs - based ml estimator of , denoted by , can be expressed as which is an unbiased estimator of with variance =\gamma^2({{\pmb{\theta}}})/m ] and ] and the variance =\lambda^2/m ] and the rns - based complete ml estimator of , denoted by , is obtained from the equation the mins complete - data ml estimator of is given by which is unbiased for with the variance =\lambda^2/m ] and =\lambda^2\sum_{j=1}^{k_i}\frac{1}{j^2 } , ] is computed .this step is called the e - step . 
as is a linear function of the unobservable data , the e - stepis performed simply by replacing by their current conditional expectations given the observed data and .therefore , for a known parameter , we have = { \mathbb{p}}_{{{\pmb{\theta}}}^{(t)}}(z_i=1 | y_i , k_i)=z_i^{(t)},\end{aligned}\ ] ] where the next step on the -th iteration , which is called the m - step , requires replacing s by s in ( [ loglcomplete ] ) to obtain by maximizing .we keep alternating between and until converges to a fixed point .+ when the parameter is unknown , the procedure may be started with the initial value of ( in phr model ) or ( in prhr model ) , and continued by updating using .b. * type ii rns incomplete - data : * here , we consider the case where are known , but the set size is unknown . in other words , we observed the value and we know if the observed unit is the maximum or the minimum unit of the set , but the set size is unknown .the cdf and pdf for the observed value given are , respectively , as follow and from ( [ t2-cdf ] ) , the log likelihood function for obtained from the observed data and is expressed as .\end{aligned}\ ] ] upon equating the derivatives of ( [ l2 ] ) with respect to to zero , we have which apparently do not yield explicit solutions for the incomplete - data mle of . since the vector is unobservable , we are unable to estimate by the maximizing ( [ loglcomplete ] ) .so we again use the em algorithm . in the e - step ,we substitute the unobservable data in ( [ loglcomplete ] ) by averaging the complete - data log likelihood over its conditional distribution given the observed and . as is a linear function of the unobservable data , the e - stepis performed simply by replacing by their current conditional expectations given the observed data and .therefore , for a known parameter , = \sum_{k=1}^\infty k{\mathbb{p}}_{{{\pmb{\theta}}}^{(t)}}(k_i = k | { \boldsymbol y } , { \boldsymbol z})= k_i^{(t)},\end{aligned}\ ] ] where + the m - step on the -th iteration requires replacing s by s in ( [ loglcomplete ] ) to obtain by maximizing .we keep alternating between and until converges to a fixed point .+ when the parameter is unknown , the procedure can be started with the initial value , where the length of is a relatively large but arbitrary , and continued by calculating using the frequencies of the over . c. * type iii rns incomplete - data : * here , we study the case where only is observed and no more information on the set size and the rank of the selected unit is available .the cdf and pdf for the observed value are given , respectively , by and the log likelihood function for formed on the basis of is given by upon equating the derivatives of ( [ l3 ] ) with respect to to zero , the following results are obtained : similar to type i and type ii incomplete - data , no explicit ml estimator for the parameter can be obtained from ( [ l3der ] ) . in this type of rns incomplete - data , two unobservable data sets in the form of and are introduced . 
in order to perform the em algorithm assuming and known , we first calculate = \sum_{k=1}^\infty k{\mathbb{p}}_{{\pmb{\theta}}^{(t)}}(k_i = k | { \boldsymbol y})=k_i^{(t)},\end{aligned}\ ] ] where then , we obtain ={\mathbb{p}}_{{{\pmb{\theta}}}^{(t)}}(z_i=1 | { \boldsymbol y})=z_i^{(t)},\end{aligned}\ ] ] where + the m - step on the -th iteration requires replacing s by s and s by s in ( [ loglcomplete ] ) to obtain by maximizing .we keep alternating between , and until converges to a fixed point .when the parameters and are unknown , similar procedures proposed in type i incomplete - data ( for ) and in type ii incomplete - data ( for ) are used .finding the ml estimators of for complete - data case requires finding the roots of the nonlinear equations ( [ hr - complete ] ) and ( [ ihr - complete ] ) , which are cumbersome and computationally expensive . when the available data is incomplete ,the iterative em algorithm for calculating the ml estimator of is not easy - to - use . in this section, we briefly study the mm estimation based on rns data for in phr and prhr models .the srs - based mm estimate of , denoted by is equal to the srs - based ml estimate , , obtained from ( [ srs - hr - ml ] ) . in phr ,considering the random variable , the mm estimator of can be obtained by equating the first moment of the population to the sample moment as follow similarly , in prhr model , the mm estimator of is expressed as now , we present the rns - based complete- and incomplete - data mm estimators of in both phr and prhr models .[ mmestim ] let be an rns sample of size obtained from a continuous cdf of the family of phr model or prhr model .further , let and , where and .then , the unbiased mm estimators of in phr and prhr models are , respectively , obtained as where the value of depends on the rns data type and the underlying model as presented in table [ mm ] .note that for the rns complete - data , the variance of in both phr and prhr models provided in theorem [ mmestim ] are derived using ( [ exp - hpr ] ) , ( [ exp^2-hpr ] ) , ( [ exp - prh ] ) , and ( [ exp^2-prh ] ) .in this section , we perform numerical studies to compare the performance of the proposed rns methods with srs in estimating some parameters .first , we perform some simulations to examine the performance of rns compared with srs under different scenarios of available information about the observations , set sizes , and rank of observations . then , in a case study we evaluate the precision of the rns design over the srs design in a more complicated scenario in both perfect and imperfect settings .we first discuss the reduction in the mean square error ( mse ) of the ml estimators in the rns complete - data in the phr and prhr models using the relative precision .the relative precision is defined as the ratio of the rns - based mse over the srs - based mse such that values less than one are desired . 
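as a concrete illustration of the type i scheme above ( observed value and set size known , maximum / minimum indicator latent ) , the following sketch runs the em iteration for the exponential example with a known nomination probability p . the rate parametrisation , the numerical m - step and the synthetic data are our own choices for illustration ; the closed - form updates of the paper are not reproduced here .

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

def log_f_max(y, k, lam):   # log density of the maximum of k iid exp(rate lam)
    return np.log(k) + np.log(lam) - lam * y + (k - 1) * np.log1p(-np.exp(-lam * y))

def log_f_min(y, k, lam):   # log density of the minimum of k iid exp(rate lam)
    return np.log(k) + np.log(lam) - k * lam * y

def em_type1(y, k, p, lam0=1.0, n_iter=100):
    lam = lam0
    for _ in range(n_iter):
        # e-step: posterior probability that observation i is a set maximum
        w = expit(np.log(p / (1.0 - p)) + log_f_max(y, k, lam) - log_f_min(y, k, lam))
        # m-step: maximise the expected complete-data log-likelihood over lam
        negQ = lambda l: -np.sum(w * log_f_max(y, k, l) + (1.0 - w) * log_f_min(y, k, l))
        lam = minimize_scalar(negQ, bounds=(1e-6, 1e3), method="bounded").x
    return lam

# synthetic rns type i data: true rate 2, nomination probability 0.7, set sizes 1..5
rng = np.random.default_rng(3)
m, p_true, lam_true = 400, 0.7, 2.0
k = rng.integers(1, 6, size=m)
z = rng.random(m) < p_true
y = np.array([rng.exponential(1 / lam_true, kk).max() if zz
              else rng.exponential(1 / lam_true, kk).min()
              for kk, zz in zip(k, z)])
print(round(em_type1(y, k, p_true), 3))   # should land near the true rate of 2
```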
for the incomplete - data settings ,the performance of mle s of the population parameters in two distributions are examined ; the parameter in the exponential distribution introduced in example [ exp - exp ] and the parameter in the beta distribution introduced in example [ beta - beta ] .note that the expected value and the variance of the rns complete - data in the phr and prhr models presented in ( [ com - exp ] ) , ( [ com - var ] ) , ( [ e - prhr ] ) , and ( [ v - prhr ] ) do not depend on the observed data and are only functions of and .in addition , we investigate the role of the rns design parameters in improving the performance of the rns - based estimators compared with their srs counterparts . in figure [ msefixedk - phr ] , we provide the mse of , the estimator of in the rns setting when the data is complete , over the mse of , the estimator of in the srs setting , for the phr ( left panel ) and prhr ( right panel ) models .the relative precision is calculated for four rns designs with fixed set sizes as and the proportion of maximums varies in . from the results we can compare rns complete - data with different fixed set sizes and proportion of maximums among themselves and with srs in terms of the performance of estimators of . for example , in the left panel of figure [ msefixedk - phr ] , which shows the relative precision for the phr models , it is seen that any rns design , even with and proportion of maximums , outperforms srs .increasing the set size and the proportion of maximums improves the performance of the rns complete - data .the best performance pertains to mans with . in the right panel of figure [ msefixedk - phr ] , which shows the relative precision for the prhr models , similar resultsare obtained except that the best performance pertains to mins with .based on the rns complete - data over their srs counterparts in the phr ( left panel ) and prhr ( right panel ) models when and the proportion of maximums is .values less than one show rns performs better than srs ., width=576,height=240 ] in figure [ phr - t123 ] , we provide the relative precision of the ml estimators of as the parameter of the exponential distribution in example [ exp - exp ] in three rns incomplete - data type i , type ii , and type iii .the relative precision is calculated by the mean square of the rns - based ml estimate of over its srs - based counterpart , so values less than one are desired .the top left panel shows the relative precision of the rns - based ml estimator of in the incomplete - data type i. the relative precision is calculated for the ml estimators of and ] . as approaches to 1 , regardless of the value of the parameter of interest , the performance of rns incomplete - data type i becomes better than srs .the top right panel presents the relative precision of the rns incomplete - data type ii for the range of and for four distributions of the set size as follows it is seen that the rns incomplete - data type ii with the assumed , , , and improves the precision of the estimators of especially when the set sizes get larger . 
as the value of increases , the performance of the rns incomplete - data type ii is improved more and the distributions of perform similarly .the next four panels in figure [ phr - t123 ] present the relative precision of the rns incomplete - data type iii for and .the relative precision for small depends on .the last four panels in figure [ phr - t123 ] show that , for all the considered distributions on , by increasing , rns outperforms srs and the relative precision reaches the lowest value when . ] .it is seen that for the examined values of , improves the rns incomplete - data type i over srs .the top right panel presents the relative precision of the rns incomplete - data type ii for the range of and for four distributions on , which are shown by , , , and . for the examined s , rnsincomplete - data type ii outperforms srs .the next four panels in figure [ prhr - t123 ] present the relative precision of the rns incomplete - data type iii for and .it is seen that for the rns incomplete - data type iii performs better than srs .the relative precision of the estimators obtained from the rns design with other than zero might works good for some values of , especially when is close to zero . ] .we observe in the left panel that , similar to the ml estimators in the rns - based complete - data , increasing the set size and the proportion of maximums improve the performance of the rns complete - data in the phr model . in the phr model , the best performanceis obtained from the mans design , where all the selected units are maximums , and with the set size . in the prhr model( right panel ) , the best performance belongs to the mins design , where all the selected units are minimums , with the set size . based on the rns complete - data over their srs counterparts in the phr ( left panel ) and prhr ( right panel ) models when and 5 , and the proportion of maximums is .the relative precision less than one shows rns performs better than srs.,width=576,height=240 ] figure [ mme - t123 ] provides the relative precision of the mm estimators of parameter in the exponential distribution introduced in example [ exp - exp ] based on the rns incomplete - data type i , type ii , and type iii .the top left panel shows the relative precision of the rns - based mm estimators in the incomplete - data type i. it shows that , regardless of the parameter value , is the optimum value of which improves the rns - based mm estimator in the incomplete - data type i scenario compared with srs .looking at the top right panel , which presents the performance of the mm estimator of in the rns incomplete - data type ii , shows that rns designs with design parameters , , and do not perform better than srs for all the examined parameters . for , the performance of the rns incomplete - data type ii improves over srs .the next four panels in the middle and down in figure [ mme - t123 ] present the performance of the mm estimators of in the rns incomplete - data type ii for , where has the best impact on the performance of this type of the rns design especially when the parameter value increases .for small the proportion of srs , samples from the sets of size , should be small . 
] and , type ii ( top right panel ) for four distributions on and , and type iii ( middle and lower panels ) for and in a beta distribution with parameter , the shape parameter , and .values of the relative precision less than one shows rns performs better than srs ., width=576,height=672 ] in figure [ prhr - mm - t123 ] , the performance of the rns - based mm estimators of in the introduced beta distribution are compared with their corresponding estimators in the srs design . to evaluate the relative precision in the rns incomplete - data type i , the rns - based mm estimators of for ] , and it shows the best performance of rns over srs at .randomized nomination sampling ( rns ) was introduced by and it has been shown to perform better than simple random sampling ( srs ) in constructing nonparametric confidence and tolerance intervals .rns has potentials for a wide range of applications in medical , environmental and ecological studies . in this paper, we described the rns - based ml and mm estimators of the population parameters when the underlying study variable follows phr or prhr model .various conditions on the type of information , ranking error settings and the design parameters including distribution of the set size ( ) and probability of taking the maximum observation of the set ( ) have been investigated .we introduced four types of rns data , corresponding to situations in which all the observations , set sizes and observations ranks in the sets are known , only observations and the set sizes are known , only the observations and their ranks in the sets are known , or finally only the observations are known . considering all the situations , we also provided the pdf and cdf of an rns observation .we showed that there is always a range of on each rns is superior to srs in terms of the relative precision .the rns design has this advantage regardless of the ranking setting .the relative precision of the estimators obtained in the rns design becomes better when more weight is given to the larger set size and ( in phr model ) or ( in prhr model ) .the authors gratefully acknowledge the partial support of the nserc canada . wells , m. t. , r. c. tiwari , et al .( 1990 ) . estimating a distribution function based on minima - nomination sampling . in _ topics in statistical dependence _, pp . 471479 .institute of mathematical statistics .
|
randomized nomination sampling ( rns ) is a rank - based sampling technique which has been shown to be effective in several nonparametric studies involving environmental and ecological applications . in this paper , we investigate parametric inference using rns design for estimating the unknown vector of parameters in the proportional hazard rate and proportional reverse hazard rate models . we examine both maximum likelihood ( ml ) and method of moments ( mm ) methods and investigate the relative precision of our proposed rns - based estimators compared with those based on simple random sampling ( srs ) . we introduce four types of rns - based data as well as necessary em algorithms for the ml estimation , and evaluate the performance of corresponding estimators in estimating . we show that there are always values of the design parameters on which rns - based estimators are more efficient than those based on srs . inference based on imperfect ranking is also explored and it is shown that the improvement holds even when the ranking is imperfect . theoretical results are augmented with numerical evaluations and a case study . mohammad nourmohammadi , mohammad jafari jozanijafari.ca . tel : 1 204 272 1563 . ] , and brad c. johnson + _ university of manitoba , department of statistics , winnipeg , mb , canada , r3 t 2n2 _ * keywords : * randomized nomination sampling ; method of moments ; maximum likelihood ; modified maximum likelihood ; proportional hazard rate ; proportional reverse hazard rate ; em algorithm .
|
social media networking websites like twitter and facebook have fundamentally changed how people communicate and socialize in the 21st century . with over 200 million users on the popular microblogging site twitter in 2011 , exploring the social impact of using such a service becomes ever more important . although twitter has been considered by critics as a site where people tweet ( i.e. update statuses ) about mundane things such as what they are having for dinner , researchers have recently taken an interest in twitter to study social - behavioral attitudes , positive and negative influences of popular users , and even temporal patterns of happiness using ideas from linguistics . in this emerging culture of tweeters ( i.e. those who tweet ) who connect from places all over the world , there also comes the bigger problem of cyberbullying . twitter connects people by allowing them to send short bursts of information , called tweets , consisting of up to 140 characters in length , to other users . what makes twitter different from other social networking sites is the character limit imposed on tweets , along with its asymmetrical nature : you may follow someone , but they do not have to follow you . a user has three different types of relationships with fellow tweeters : you can have followers ( i.e. friends ) , which are people who follow you and can see your tweets ; you can also have people who you are following ; and finally there are bi - directional friends , which are people who are mutually following each other . despite the different relationships that occur on twitter , when a user tweets to the twitter community , a response is not necessary . a user's friends can see all the tweets sent by the user . to involve a particular person in a tweet or a conversation , common practice is to use ` @ ' followed by a unique identifier address . to join a conversation , one can reply to a tweet , which uses ` @ ' . one can also retweet , which allows a user to tweet the same message to all his / her friends while giving credit to the original tweeter , allowing the message to reach a wider range of twitter users . for simplicity of presentation in this analysis , we use the term " retweet " to describe both types of message repetition . in their retweets , users can use a hashtag , denoted by ` # , ' which declares a tweet as being a part of a larger conversation whose topic is related to the phrase which comes after the hashtag . since the advent of social media sites like twitter , there has been research on cyberbullying . some research into methods for stopping cyberbullying on twitter can be found in xu _ et al . _ . in their report , the authors used natural language processing techniques to recognize traces of bullying tweets , and gorzig and frumkin suggested raising awareness for adolescents , especially with regard to privacy settings . while a user can block another person specifically on twitter , no current method exists to block offensive or hurtful tweets from the twitter community . per the terms of service , twitter does not currently suspend accounts for cyberbullying ( available at http://twitter.com/tos ) . in this study , we employ a concept referred to as " quarantine " that would enable twitter to temporarily separate offenders from the twitter community , therefore protecting users from the spread of offensive messages .
in this analysis , we extended the deterministic compartmental model developed by zhao _ et al . _ for rumor spreading in the new media age . in the model , they built on the pioneering work of daley and kendall , who in 1965 introduced a model that looks at the spread of rumors as a non - standard contagion process . the daley - kendall model was the first rumor spreading model developed ; it used the terms ignorants , spreaders and stiflers analogous to the disease model classes of susceptible , infected and recovered , respectively . zhao _ et al . _ also looked at the maki - thompson rumor model , which showed the spreading of rumors through direct contact between spreaders and others , and the model developed by nekovee _ et al . _ which combined the sir epidemic model and the maki - thompson model on complex social networks . by drawing inferences from these models , zhao _ et al . _ developed their own model which included a " hibernator " class where the members of this class become disinterested in spreading the rumor but could become interested in spreading the same rumor again . we considered instead a model where users are " quarantined , " i.e. users are limited in their ability to spread the rumor through the enforced limitation on their contact with other members of the population . to do this , we developed a discrete - time compartmental model that simulates the dynamics of rumor spreading in social networks , and examined how a user's degree of quarantine hinders the spread of offensive messages . we estimated model parameters by comparing the model predictions to patterns observed in twitter data related to a topic that communities would find offensive . in the following sections we describe the sources of data used in this analysis and give a description of the discrete - time compartmental model used to simulate the spread of tweets , followed by a presentation and discussion of results . twitter uses application programming interface ( api ) version 1.1 which allows us to " scrape " public data off the website ( https://twitter.com ) . to be able to access the api , we use the r programming language package " twitter " ( http://cran.r-project.org/web/packages/twitter/ ) . this package acquires the information about a tweet such as the time created , the screen name of the person who tweeted or retweeted , and their follower and friend counts .
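once the records are scraped , the summary statistics reported below ( the fraction of original tweets that get retweeted , the average duplication of a tweet , and the crude per - friend retweet probability ) amount to simple counting . the sketch shows one way to compute them from hypothetical records ; the field layout and the particular definition of average duplication ( total tweets over original tweets ) are our assumptions , not the paper's code .

```python
from collections import Counter

# hypothetical scraped records: (message_key, is_retweet, friends_of_author)
records = [
    ("msg-001", False, 120), ("msg-001", True, 340),
    ("msg-002", False, 57),  ("msg-003", False, 4100),
    ("msg-003", True, 12),   ("msg-003", True, 980),
]

originals = [r for r in records if not r[1]]
retweets  = [r for r in records if r[1]]
dup_count = Counter(r[0] for r in retweets)

frac_retweeted  = sum(1 for r in originals if dup_count[r[0]] > 0) / len(originals)
avg_duplication = len(records) / len(originals)                 # one possible definition
retweet_prob    = len(retweets) / sum(r[2] for r in originals)  # retweets per friend of original tweeters

print(round(frac_retweeted, 3), round(avg_duplication, 2), round(retweet_prob, 5))
```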
in order to decide which keywords to use, we conducted a search on the twitter website and searched for trending hashtags which contained offensive language .we noticed that anti - gay sentiment is prevalent on twitter so we chose words that expressed disgust towards the gay community to use as an example .there is no doubt that these tweets are offensive to most people .we used the searchtwitter " function to search within the days of july 6 - 10 , 2013 for offensive tweets or retweets containing the keywords disgusting " and gay " yielding a total of 884 tweets .the data showed that there were 100 tweets from this sample of 478 original tweets that had been retweeted at least once .this is in qualitative concordance with kwak __ where they found the fraction of not retweeting to be 79% , and grabowicz _ et al ._ who found that 85% of tweets were not retweeted .the average duplication of a tweet in our sample was 1.85 times .in addition , we estimated the probability of a message being retweeted by using ( number of retweets)/(number of friends of offensive tweeters ) , yielding = . thus 0.999 , which will be included as a parameter in our model below ._ found that 99% of twitter users have fewer than 20 followers .furthermore , the degree of social networks has been found to follow a power law functional relationship , with exponent .the probability of having twitter followers is thus . with the power law relationship, most users have few followers and a very few users have many followers .the average degree of the network , , is ] . for our model analysis below , we assume = 10 .we developed a discrete - time model to simulate the dynamics of the spread of messages on twitter , because tweets occur at discrete times , whereas a continuous - time model includes a variable window of times between the tweets . in the following subsections ,we describe a basic model of twitter message spreading dynamics that does not include a quarantine class .we determined an expressions for the threshold value of this model , then we extended the model to include a quarantined class and discuss the implications that the addition of this class has for the threshold values of the system .a simple discrete - time rumor model for describing the spread of a message in a social network is : which was proposed by zhao _ , where , , are the uninformed , offender , and stifler classes at time , respectively , where the constant population size is , and is the probability that an uninformed user becomes a stifler after seeing an offensive tweet ( which we estimated from twitter data in the previous section ) .the parameter is the probability of seeing an offensive tweet , and is the probability of an offender becoming a stifler . to ensure that these two probabilities lie between 0 and 1 , we use the form : where is the degree of the network , which is estimated from studies of twitter data in the published literature , as described in the previous section .the parameters of the model are shown in table [ tab : params ] and the compartmental diagram is shown in figure [ model : basiccomp ] .in lemma [ lem : gt0 ] in appendix a we show that if the initial values of , , and are non - negative , then all future populations in those classes will be non - negative . 
taken in conjunction with the fact that the system is bounded such that the populations in each of the classes sum to ( see lemma [ lem : bound ] in appendix a ) , we see that the population within each class can be at most . we note that as a result , the population in the uninformed class monotonically decreases asymptotically to zero , because when and ( for a formal proof , see lemma [ lem : decu ] in appendix a ) . additionally , the population in the stifler class monotonically increases , because the last two terms in are both at least zero for all , , and . further , when the population in the stifler class is we note that and , and that from system [ eqn : basic ] we see that . thus , asymptotically monotonically increases to . in theorem [ thm : thresh ] in appendix a we show that if then . since monotonically decreases to zero , and monotonically increases because monotonically increases , we see that monotonically decreases to zero . thus there will always exist a point at which , and . we thus conclude that even without quarantine , the offender class will eventually begin to die out . in the next section , we will discuss how the implementation of a quarantine class ensures that the offender class will begin to die out more quickly . compartmental model structure for basic twitter model . ] before we move on to the quarantine model , we examine the equilibrium points for this basic model . there are two equilibrium points : the first equilibrium point represents the case where there are only offenders , because the only way for an offender to leave the class is to come in contact with a stifler , of which there are none . the second equilibrium point is analogous to the disease - free equilibrium because there is no one in the offender class . we refer to this point as the offender - free equilibrium ( ofe ) . the jacobian for this basic model is : the eigenvalues of the jacobian corresponding to the first equilibrium point , ( ) , which we refer to as the offender - only equilibrium , are : we note that and thus the third eigenvalue is always greater than 1 . we thus conclude that the first equilibrium point is not stable . evaluating the jacobian at the offender - free equilibrium , ( , , ) , we obtain with eigenvalues note that if and the first eigenvalue is which is always greater than zero and less than 1 since . thus there always exists at least one value of such that the first eigenvalue is less than 1 . however , the other two eigenvalues , are equal to 1 , which does not _ a priori _ indicate that the equilibrium point is unstable , but neither does it imply stability . eigenvalues equal to one can result from system constraints not being applied . we note that one of the equations in system [ eqn : basic ] can be removed if we apply the constraint .
evaluating the eigenvalues of the jacobian of this reduced system about the offender - free equilibrium yields . the jury criterion is an algorithmic method based upon the characteristic equation of the jacobian that is used to assess the stability of discrete systems . a prior condition for use of the method is that the characteristic equation , must be greater than zero when . this condition is not met for this model , thus we can not apply the jury criterion . thus , in order to assess the stability of this equilibrium point , we would have to resort to higher - order methods , rather than just linearization of the system about the offender - free equilibrium . this is beyond the scope of this paper , and we thus conclude that the stability of the offender - free equilibrium in this model is currently indeterminate . to calculate the next generation matrix of the model , we use the methods of allen and van den driessche , 2008 , and we reorder the columns and rows of the offender - free jacobian in equation such that the offender ( infected ) class comes first , and the uninformed and stifler ( uninfected ) classes follow . then , as in , we identify the components of this jacobian with the following form : then the matrix is given by the matrix is and the matrix is the next generation matrix ( ngm ) is defined by . the spectral radius of this matrix is we note that the spectral radius of the matrix , , is exactly equal to one ; in order for the spectral radius of the ngm to be identified with the reproduction number of the system , must be less than one . we thus do not identify with the reproduction number , but note with interest that it matches the expression in equation that we had shown could be used as a threshold expression for . to examine how isolation of offending users would hinder the spread of offensive tweets , we extend the basic model in equation by adding a quarantine class where the parameter is the probability that an offender becomes quarantined . we also include a parameter that describes the probability that a person in the quarantine class becomes a stifler . when is close to one the offenders spend longer in the quarantine class than when is close to zero . the discrete - time equations of this extended model are : where and are as in the basic model , the parameters of the model are shown in table [ tab : params ] and the compartmental diagram is shown in figure [ flow_chart2 ] . compartmental model structure . ] as before in the basic model , the sum of the populations in each of the classes adds to a constant , . we also note that the population in the uninformed class monotonically asymptotically decreases to zero , and that the population in the stifler class monotonically asymptotically increases to . in theorem [ thm : thresh_quar ] in appendix a , we show that if then . since monotonically decreases to zero , and monotonically increases because monotonically increases , we see that monotonically decreases to zero . thus , just as in the basic model , there will always exist a point at which , and . however , we note that the smaller the ( i.e. , the higher the quarantine fraction ) , the faster will decrease to 0 .
in theorem [ thm : threshb ] in appendixa , we show additionally show that if then regardless of the values of further , we show in theorem [ thm : threshc ] in appendix a that if and , then , regardless of the current values of .we note that and for twitter data , thus .thus for twitter data there exists a quarantine fraction such that the spread of offensive messages will be guaranteed to decline at the next time step .there are two equilibrium points for the model with the quarantine class : the first equilibrium point is biologically irrelevant because it implies that the stifler class will have a negative population since .therefore we disregard this equilibrium point .the second equilibrium point is analogous to the offender - free equilibrium of equation .the jacobian of the system is : + the eigenvalues of the jacobian evaluated at the offender - free equilibrium are , , and . as in section [ section : basic ], we note that the constraint can be used to reduce the number of equations by one .the eigenvalues of the reduced system are , , and .we note that if ( which it is for twitter data , which has and , as seen in the previous section ) then and can always be chosen to ensure that the second and third eigenvalues are less than 1 .however the first eigenvalue is exactly equal to one .as before , we find that the jury criterion can not be applied to assess the stability of this equilibrium point , so we conclude that we find the stability to be currently indeterminate . as in section [ section : basic ] , we begin computation of the next generation matrix for this system by reordering the equations such that the infected classes ( offender and quarantine ) classes come first , and the uninfected ( uninformed and stifler ) classes follow .the jacobian of the rearranged equations , evaluated at the disease - free equilibrium is thus : as before , we identify the components of this matrix with then the matrix is given by matrix is and the matrix is the next generation matrix is defined by .the spectral radius of this matrix is we again note that the spectral radius of the matrix , , is exactly equal to one and thus we do not identify with the reproduction number , but again note with interest that it matches the expression in equation that we have shown could be used as a threshold condition for in the quarantine model ..[tab : params ] parameter values for the discrete - time models .[ cols="^,^,^",options="header " , ]we determined the parameters of equation using current twitter data : * , the probability that the user becomes a stifler * the fraction of the population that is initally in the stifler class .* , the degree of the network .we assumed based on our data and information from the literature that and as described in section [ sec : data ] .we examined values of between to 1 in steps of and determined the model prediction for the number of retweets , and compared this to the number of retweets we observed in the data ( namely approximately one retweet per tweet ) .this can be seen in table [ tab : params ] .we found the value of best fit the data .we let but we found our results were not sensitive to the value of , for large .figures [ fg : basic_model_output ] and [ fg : quarantined_model_output ] are numerical simulations of our basic and quarantined models respectively .these figures visually show the dynamics of the systems ., .the uninformed and offender classes tend to zero while the stifler class tends to which is the total population .[ fg : basic_model_output ] ] 
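The grid-search fit just described can be sketched as follows. The readout used for the model prediction of the number of retweets (cumulative conversions into the offender class from a single seeded offender), the initial stifler fraction, and the sweep resolution are all assumptions, since the paper does not spell these details out.

```python
def predicted_retweets(beta, k=100, N=10_000, s0=0.10, steps=1_000):
    """Assumed readout: cumulative number of users converted into offenders
    (retweeters) when a single offender is seeded into the population."""
    U, O, S = (1 - s0) * N - 1, 1.0, s0 * N
    total = 0.0
    for _ in range(steps):
        new_offenders = min(beta * k * U * O / N, U)   # clamp to keep classes non-negative
        new_stiflers  = min(beta * k * O * S / N, O)
        U, O, S = U - new_offenders, O + new_offenders - new_stiflers, S + new_stiflers
        total += new_offenders
    return total

observed = 1.0                                 # roughly one retweet per tweet in the data
betas = [i / 1000 for i in range(1, 1000)]     # assumed sweep over the retweet probability
best = min(betas, key=lambda b: abs(predicted_retweets(b) - observed))
print(f"best-fit beta (toy readout) ~ {best}")
```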
, , , .the uninformed , offender , and quarantined classes tend to zero while the stifler class tends to which is the total population .[ fg : quarantined_model_output ] ] we used the parameter values in table [ tab : params ] to find solutions to equation . in our computations, we assume the probability a quarantined user becomes a stifler , is zero . to do this we examined values of , the quarantine fraction , between zero and one and obtained the model prediction for the number of retweetsthe results are seen in figure [ fg : lambdavsthreshold ] . .for , , .we assume that quarantined users are fully isolated from the rest of the community .as the probability of quarantine increases , the number of retweets decreases .[ fg : lambdavsthreshold ] ] by looking at the number of offenders over time , while varying we can see how the quarantine fraction , , affects the life - time of the tweet . as increases , the model predicts a decrease in the average life - time of the tweet , see in figure [ fg : offendersvstimelambda ] , i.e. the tweet on average dies out more quickly. varies .the parameters , , and . for a low probability of being quarantined, there is a high number of offenders over time . as the probability of the quarantined increases ,the offenders deceases ]in this paper we presented a discrete - time non - linear compartmental model to simulate the dynamics of the spread of offensive messages on twitter .we developed the model to gain insight into reducing the number of offensive tweets seen by twitter users .we then extended the model to include a quarantine class whereby offending users were isolated from contact with the rest of the twitter community .we found that the implementation of quarantine was an effective way to quickly reduce the spread of offensive messages .further , we proved that if , where is the degree of the network , and is the probability of retweeting , then if the number of offenders at the next time step will always be less than the number of offenders the time step before .we showed that this was independent of the the number of uninformed , offenders , quarantined , and stiflers at that time step . from data we obtained from twitter , and the published literature , we determined that and , thus for twitter and there thus exists a such that offensive tweets will be effectively suppressed .we performed numerical simulations to examine the effect of quarantine .as shown in figure 3 , the quarantine of a certain fraction of offending twitter users led to a disproportionately larger reduction in the number of new offenders .previous studies in this area have used a wide variety of statistical , computational , and mathematical methods to understand the spread of information in social networks ( see , for instance ) . the use of a quarantine class to hinder the spread of messages has not been previously considered .there are some limitations of the model we have developed . for instance , we assume that once offenders leave the quarantine class , they have learned their lesson and do not offend again .this is not realistic in all cases . 
also , while we find that the quarantine of offenders is effective , we have not determined how such a quarantine can be implemented .one way to potentially implement the quarantine is to have twitter use an algorithm that flags tweets as being potentially offensive based on keywords .another way to implement a quarantine is to have users give negative rating to tweets they find offensive , for instance , a rating system that goes from zero to minus ten by integer values and lets each user choose the value that fits the type of tweets they want to see . with this basic and quarantine model ,there are other routes that could be taken to further this research . for instance , performing an analysis of a markov chain model .we can also examine how quarantined users or stiflers stop the spread of a tweet through a network graph . performing a network analysis on such a modelwould provide insight into which users should be quarantined first , or which groups to isolate from the overall population , to reduce the spread of these tweets . in conclusion, our model indicates that quarantine of users that spread offensive message would likely be effective .our use of a discrete - time non - linear model highlights the usefulness of mathematical models in understanding the underlying dynamics of social networks , and opens up many interesting possibilities for future research .we would like to thank dr .carlos castillo - chavez , executive director of the mathematical and theoretical biology institute ( mtbi ) , for giving us the opportunity to participate in this research program .we would also like to thank co - executive summer directors dr .erika t. camacho and dr .stephen wirkus for their efforts in planning and executing the day to day activities of mtbi .we also want to give special thanks to daniel burkow for helping us redefine our topic , kamal barley for his graphic enhancements , dr . luis melara , dr . leon arriola , xiaoguang zhang , and oyita udiani for their invaluable help and feedback in writing this paper .this research was conducted in mtbi at the mathematical , computational and modeling sciences center ( mcmsc ) at arizona state university ( asu ) . this project has been partially supported by grants from the national science foundation ( nsf - grant dmps-1263374 ) , the national security agency ( nsa - grant h98230 - 13 - 1 - 0261 ) , the office of the president of asu , and the office of the provost of asu .[ lem : gt0 ] if , , in the basic model in system [ eqn : basic ] , then , , let , , since , we can see that for all . therefore , as , we can use a similar argument to show that for lastly to show that for all we notice that since and are always non - negative . therefore since , and then let , . from ( 7),(8 ) we know notice that if then . since , we can see that for all and consequently therefore since , we can say and .now , we wish to show that when then note that equation implies that substituting this into equation yields from equation we know that , thus the lhs is equal to .thus now , we wish to show that when then note that equation implies that substituting this into equation yields from equation we know that , thus the lhs is equal to . thus note that the numerator of the lhs is always less than or equal to because , and the denominator of the lhs is always greater than or equal to , because and thus .thus we have shown that the lhs is at most the rhs .and thus when , then in the quarantine model .we have , and we also have that , which ensures that . 
we thus have that )}}$ ] .the rhs is equal to 1 , thus we have shown that when and , then .peter sheridan dodds , kameron decker harris , isabel m kloumann , catherine a bliss , and christopher m danforth .temporal patterns of happiness and information in a global social network : hedonometrics and twitter ., 6(12):e26752 , 00 2011 .haewoon kwak , changhyun lee , hosung park , and sue moon .what is twitter , a social network or a news media ? in_ proceedings of the 19th international conference on world wide web _ , www 10 , pages 591600 , new york , ny , usa , 2010 .acm .jun - ming xu , kwang - sung jun , xiaojin zhu , and amy bellmore . learning from bullying traces in social media . in _ proceedings of the 2012 conference of the north american chapter of the association for computational linguistics : human language technologies _ , naacl hlt 12 , pages 656666 , stroudsburg , pa , usa , 2012 .association for computational linguistics .przemyslaw a grabowicz , josj ramasco , esteban moro , josep m pujol , and victor m eguiluz .social features of online networks : the strength of intermediary ties in online social media . , 7(1):e29358 , 00 2012 .tiago rodrigues , fabrcio benevenuto , meeyoung cha , krishna gummadi , and virglio almeida . on word - of - mouth based discovery of the web .in _ proceedings of the 2011 acm sigcomm conference on internet measurement conference _ , pages 381396 .acm , 2011 .ceren budak , divyakant agrawal , and amr el abbadi . limiting the spread of misinformation in social networks . in _ proceedings of the 20th international conference on world wide web _ ,pages 665674 .acm , 2011 .feng wang , haiyan wang , and kuai xu .diffusive logistic model towards predicting information diffusion in online social networks . in _distributed computing systems workshops ( icdcsw ) , 2012 32nd international conference on _ , pages 133139 .ieee , 2012 .rudra m tripathy , amitabha bagchi , and sameep mehta .a study of rumor control strategies on social networks .page 1817 , new york , new york , usa , 00 2010 .sigir , acm special interest group on information retrieval .
|
the 21st century has redefined the way we communicate , our concept of individual and group privacy , and the dynamics of acceptable behavioral norms . the messaging dynamics on twitter , an internet social network , has opened new ways / modes of spreading information . as a result cyberbullying or in general , the spread of offensive messages , is a prevalent problem . the aim of this report is to identify and evaluate conditions that would dampen the role of cyberbullying dynamics on twitter . we present a discrete - time non - linear compartmental model to explore how the introduction of a quarantine class may help to hinder the spread of offensive messages . we based the parameters of this model on recent twitter data related to a topic that communities would deem most offensive , and found that for twitter a level of quarantine can always be achieved that will immediately suppress the spread of offensive messages , and that this level of quarantine is independent of the number of offenders spreading the message . we hope that the analysis of this dynamic model will shed some insights into the viability of new models of methods for reducing cyberbullying in public social networks . department of mathematics , boston university , boston , ma + department of mathematics and computing , columbia college , columbia , sc + department of biology , medgar evers college ( cuny ) , brooklyn , ny + department of mathematics , st . mary s university , san antonio , tx + department of applied mathematics for the life and social sciences , arizona state university , tempe , az + department of physics , purdue university , west lafayette , in + mathematical , computational & modeling sciences center , arizona state university , tempe , az + school of mathematical & statistical science , arizona state university , tempe , az
|
social influence can play a crucial role in a range of behavioral phenomena , from the dissemination of information , to the adoption of political opinions and technologies , which are increasingly mediated through online systems . despite the wide availability of data from online social networks, identifying influence remains a challenge .individuals tend to engage in similar activities as their peers , so it is often impossible to determine from observational data whether a correlation between two individuals behaviors exists because they are similar or because one person s behavior has influenced the other . in the context of information diffusion, two people may disseminate the same information as each other because they possess the same information sources , such as web sites or television , that they consume regularly .moreover , homophily the tendency of individuals with similar characteristics to associate with one another creates difficulties for measuring the relative role of strong and weak ties in information diffusion , since people are more similar to those with whom they interact often . on one hand ,pairs of individuals who interact more often have greater opportunity to influence one another and have more aligned interests , increasing the chances of contagion .however , this commonality amplifies the potential for confounds : those who interact more often are more likely to have increasingly similar information sources . as a result, inferences made from observational data may overstate the importance of strong ties in information spread .conversely , individuals who interact infrequently have more diverse social networks that provide access to novel information .but because contact between such ties is intermittent , and the individuals tend to be dissimilar , any particular piece of information is less likely to flow across weak ties .historical attempts to collect data on how often pairs of individuals communicate and where they get their information have been prone to biases , further obscuring the empirical relationship between tie strength and diffusion . confounding factors related to homophilycan be addressed using controlled experiments , but experimental work has thus far been confined to the spread of highly specific information within limited populations . 
in order to understand how information spreads in a real - world environment ,we wish to examine a setting where a large population of individuals frequently exchange information with their peers .facebook is the most widely used social networking service in the world , with over 800 million people using the service each month .for example , in the united states , 54% of adult internet users are on facebook .those american users on average maintain 48% of their real world contacts on the site , and many of these individuals regularly exchange news items with their contacts .in addition , interaction among users is well correlated with self - reported intimacy .thus , facebook represents a broad online population of individuals whose online personal networks reflect their real - world connections , making it an ideal environment to study information contagion .we use an experimental approach on facebook to measure the spread of information sharing behaviors .the experiment randomizes whether individuals are exposed via facebook to information about their friends sharing behavior , thereby devising two worlds under which information spreads : one in which certain information can only be acquired external to facebook , and another in which information can be acquired within or external to facebook . by comparing the behavior of individuals within these two conditions , we can determine the causal effect of the medium on information sharing .the remainder of this paper is organized as follows .we further motivate our study with additional related work in section [ sec : related_work ] .our experimental design is described in section [ sec : experimental_design ] .then , in section [ sec : exposure ] we discuss the causal effect of exposure to content on the newsfeed , and how friends sharing behavior is correlated in time , irrespective of social influence via the newsfeed .furthermore , we show that multiple sharing friends are predictive of sharing behavior regardless of exposure on the feed , and that additional friends do indeed have an increasing causal effect on the propensity to share . in section [ sec : tie_strength ] we discuss how tie strength relates to influence and information diffusion .we show that users are more likely to have the same information sources as their close friends , and that simultaneously , these close friends are more likely to influence subjects . using the empirical distribution of tie strength in the network ,we go on to compute the overall effect of strong and weak ties on the spread of information in the network .finally , we discuss the implications of our work in section [ sec : discussion ] .online networks are focused on sharing information , and as such , have been studied extensively in the context of information diffusion .diffusion and influence have been modeled in blogs , email , and sites such as twitter , digg , and flickr .one particularly salient characteristic of diffusion behavior is the correlation between the number of friends engaging in a behavior and the probability of adopting the behavior .this relationship has been observed in many online contexts , from the joining of livejournal groups , to the bookmarking of photos , and the adoption of user - created content .however , as anagnostopoulos , et al . 
point out , individuals may be more likely to exhibit the same behavior as their friends because of homophily rather than as a result of peer influence .statistical techniques such as permutation tests and matched sampling help control for confounds , but ultimately can not resolve this fundamental problem .not all diffusion studies must infer whether one individual influenced another .for example , leskovec et al . study the explicit graph of product recommendations , sun et al . study cascading in page fanning , and bakshy et al . examine the exchange of user - created content .however , in all these studies , even if the source of a particular contagion event is a friend , such data does not tell us about the relative importance of social networks in information diffusion .for example , consider the spread of news . in bradley greenberg s classsic study of media contagion , 50% of respondents learned about the kennedy assassination via interpersonal ties . despite the substantial word - of - mouth spread , it is clear that all of the respondents would have gotten the news at a slightly later point in time ( perhaps from the very same media outlets as their contacts ) , had they not communicated with their peers .therefore , a complete understanding of the importance of social networks in information diffusion not only requires us to identify sources of interpersonal contagion , but also requires a counterfactual understanding of what would happen if certain interactions did not take place .facebook users primarily interact with information through an aggregated history of their friends recent activity ( stories ) , called the news feed , or simply feed for short .some of these stories contain links to content on the web , uniquely identified by urls .our experiment evaluates how much exposure to a url on the feed increases an individual s propensity to share that url , beyond correlations that one might expect among facebook friends .for example , friends with whom a user interacts more often may be more likely to visit sites that the user also visits . as a result , those friends may be more likely to share the same url as the user before she has the opportunity to share that content herself .additional unobserved correlations may arise due to external influence via e - mail , instant messaging , and other social networking sites .these causal relationships are illustrated in figure [ fig : causal ] . from the figure, one can see that all unobservable correlations can be identified by blocking the causal relationship between the facebook feed and sharing .our experiment therefore randomizes subjects with respect to whether they receive social signals about friends sharing behavior of certain web pages via the facebook feed .subject - url pairs are randomly assigned at the time of display to either the _ no feed _ or the _ feed _ condition .stories that contain links to a url assigned to the _ no feed _condition for the subject are never displayed in the subject s feed .those assigned to the _ feed _ condition are not removed from the feed , and appear in the subject s feed as normal ( figure [ fig : screenshot_feed ] ) .pairs are deterministically assigned to a condition at the time of display , so any subsequent share of the same url by any of a subject s friends is also assigned to the same condition . to improve the statistical power of our results ,twice as many pairs were assigned to the _ no feed _ condition . 
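The key property of this design — that a (subject, URL) pair is assigned once, deterministically, and every later share of the same URL by any friend lands in the same condition — can be sketched with a simple hash-based rule. The hashing scheme and bucket split below are illustrative assumptions, not the actual implementation used in the experiment; they only reproduce the deterministic 2:1 split toward the _no feed_ condition.

```python
import hashlib

def condition(subject_id: int, url: str) -> str:
    """Deterministically map a (subject, URL) pair to an experimental condition,
    with twice as many pairs in 'no_feed' as in 'feed'.  Purely illustrative."""
    digest = hashlib.sha256(f"{subject_id}:{url}".encode()).hexdigest()
    bucket = int(digest, 16) % 3
    return "feed" if bucket == 0 else "no_feed"

# Any later share of the same URL by any friend of this subject falls in the
# same condition, because the mapping depends only on the (subject, URL) pair.
print(condition(42, "http://example.com/article"))
```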
because removal from the feed occurs on a subject - url basis , and we include only a small fraction of subject - url pairs in the _ no feed _ condition , a shared url is on average delivered to over 99% of its potential targets .all activity relating to subject - url pairs assigned to either experimental condition is logged , including feed exposures , censored exposures , and clicks to the url ( from the feed or other sources , like messaging ) . directed shares , such as a link that is included in a private facebook message or explicitly posted on a friend s wall , are not affected by the assignment procedure .if a subject - url pair is assigned to an experimental condition , and the subject clicks on content containing that url in any interface other than the feed , that subject - url pair is removed from the experiment .our experiment , which took place over the span of seven weeks , includes 253,238,367 subjects , 75,888,466 urls , and 1,168,633,941 unique subject - url pairs .threats to data quality include using content that was or may have been previously seen by subjects on facebook prior to the experiment , content that subjects may have seen through interfaces on facebook other than feed , spam , and malicious content .we address these issues in a number of ways .first , we only consider content that was shared by the subjects friends only after the start of the experiment .this enables our experiment to accurately capture the first time a subject is exposed to a link in the feed , and ensures that urls in our experiment more accurately reflect content that is primarily being shared contemporaneously with the timing of the experiment .we also exclude potential subject - url pairs where the subject had previously clicked on the url via any interface on the site at any time up to two months prior to exposure , or any interface other than the feed for content assigned to the _ no feed _ condition .finally , we use the facebook s site integrity system to classify and remove urls that may not reflect ordinary users purposeful intentions of distributing content to their friends .the experimental population consists of a random sample of all facebook users who visited the site between august 14th to october 4th 2010 , and had at least one friend sharing a link .at the time of the experiment , there were approximately 500 million facebook users logging in at least once a month .our sample consists of approximately 253 million of these users .all facebook users report their age and gender , and a user s country of residence can be inferred from the ip address with which she accesses the site . in our sample, the median and average age of subjects is 26 and 29.3 , respectively .subjects originate from 236 countries and territories , 44 of which have one million or more subjects .additional summary statistics are given in table [ tab : demographics ] , and show that subjects are assigned to the conditions in a balanced fashion ..summary of demographic features of subjects assigned to the _ feed _ ( ) and _ no feed _( ) condition .some subjects may appear in both columns .[ cols= " < , < , < " , ] contemporaneous behavior among connected individuals is commonly used as evidence for social influence processes ( e.g. 
) .we find that subjects who share the same link as their friends typically do so within a time that is proximate to their friends sharing time , even when no exposure occurs on facebook .figure [ fig : temporal ] illustrates the cumulative distribution of information lags between the subject and their first sharing friend , among subjects who had shared a url after their friends .the top panel shows the latency in sharing times between the subject and their friend for users in the _ feed _ and _ no feed _ condition .while a larger proportion of users in the _ feed _ condition share a link within the first hour of their friends , the distribution of sharing times is strikingly similar .the bottom panel shows the differences in time between when subjects shared and when they were ( or would have been ) first exposed to their friends sharing behavior on the facebook feed .the horizontal axis is negative when a subject had shared a link after a friend but had not yet seen that link on the feed . from this comparison , it is easy to see that users in the _ feed _ condition are most likely to share a link immediately upon exposure , while those who share it without seeing it in their feed will do so over a slightly longer period of time . to evaluate how exposure on the facebook feed relates to the speed at which urls appear to diffuse ,we consider urls that were assigned to both the _ feed _ and _ no feed _ condition .we first match the share time of each url in the _ feed _ condition with a share time of the url in the _ no feed _ condition , sampling urls in proportion to their relative abundances in the data . from this set of contrasts , we find that the median sharing latency after a friend has already shared the content is 6 hours in the _ feed _ condition , compared to 20 hours when assigned to the _ no feed _ condition ( wilcoxon rank - sum test , ) .the presence of strong temporal clustering in both experimental conditions illustrates the problem with inferring influence processes from observations of temporally proximate behavior among connected individuals : regardless of access to social signals within a particular online medium , individuals can still acquire and share the same information as their friends , albeit at a slightly later point in time .classic models of social and biological contagion ( e.g. 
) predict that the likelihood of `` infection '' increases with the number of infected contacts .observational studies of online contagion not only find evidence of temporal clustering , but also observe a similar relationship between the likelihood of contagion and the number of infected contacts .however , it is important to note that this correlation can have multiple causes that are unrelated to social influence processes .for example , if a website is popular among friends , then a particularly interesting page is more likely to be shared by a users friends independent of one another .the positive relationship between the number of sharing friends and likelihood of sharing may therefore simply reflect heterogeneity in the `` interestingness '' of the content , which is clustered along the network : the more popular a page is for a group of friends ,the more likely it is that one would observe multiple friends sharing it .we first show that , consistent with prior observational studies , the probability of sharing a link in the _ feed _ condition increases with the number of contacts who have already shared the link ( solid line , figure [ fig : numfriends]a ) .but the presence of a similar relationship in the _ no feed _ condition ( grey line , figure [ fig : numfriends]a ) shows that an individual is more likely to exhibit the sharing behavior when multiple friends share , even if she does not necessarily observe her friends behavior .therefore , when using observational data , the nave conditional probability ( which is equivalent to the probability of sharing in the _ feed _ condition ) does not directly give the probability increase due to influence via multiple sharing friends .rather , such an estimate reflects a mixture of internal influence effects and external correlation .our experiment allows us to directly measure the effect of the feed relative to external factors , computed as either the difference or ratio between the probability of sharing in the _ feed _ and _ no feed _ conditions ( figure [ fig : numfriends]bc ) .while the difference in sharing likelihood grows with the number of sharing friends , the relative risk ratio falls .this contrast suggests that social information in the feed is most likely to influence a user to share a link that many of her friends have shared , but the relative impact of that influence is highest for content that few friends are sharing .the decreasing relative effect is consistent with the hypothesis that having multiple sharing friends is associated with greater redundancy in information exposure , which may either be caused by homophily in visitation and sharing tendencies , or external influence .next , we examine the relationship between tie strength , influence , and information diversity by combining the experimental data with users online and offline interactions . following arguments originally proposed by mark granovetter s seminal 1973 paper ,_ the strength of weak ties _ , empirical work linking tie strength and diffusion often utilize the number of mutual contacts as proxies of interaction frequency .rather than using the number of mutual contacts , which can be large for pairs of individuals who no longer communicate ( e.g. 
former classmates ) , we directly measure the strength of tie between a subject and her friend in terms of four types of interactions : ( i ) the frequency of private online communication between the two users in the form of facebook messages ; ( ii ) the frequency of public online interaction in the form of comments left by one user on another user s posts ; ( iii ) the number of real - world coincidences captured on facebook in terms of both users being labeled by users as appearing in the same photograph ; and ( iv ) the number of online coincidences in terms of both users responding to the same facebook post with a comment .frequencies are computed using data from the three months directly prior to the experiment .the distribution of tie strengths among subjects and their sharing friends can be seen in figure [ fig : ts_dist ] .note that the vertical axis is collapsed.,scaledwidth=45.0% ] + we measure how the difference in the likelihood of sharing a url in the _ feed _ versus _ no feed _ conditions varies according to tie strength . to simplify our estimate of the effect of tie strength, we restrict our analysis to subjects with exactly one friend who had previously shared the link . in both conditions , a subject is more likely to share a link when her sharing friend is a strong tie ( figure [ fig : ts_pr ] ) .for example , subjects who were exposed to a link shared by a friend from whom the subject received three comments are 2.83 times more likely to share than subjects exposed to a link shared by a friend from whom they received no comments . for those who were not exposed, the same comparison shows that subjects are 3.84 times more likely to share a link that was previously shared by the stronger tie . the larger effect in the _ no feed _ condition suggests that tie strength is a stronger predictor of externally correlated activity than it is for influence on feed . from figure[ fig : ts_pr ] , it is also clear that individuals are more likely to be influenced by their stronger ties via the feed to share content that they would not have otherwise spread . furthermore , our results extend granovetter s hypothesis that weak ties disseminate novel information into the context of media contagion .figure [ fig : ts_rr ] shows that the risk ratio of sharing between the _ feed _ and _ no feed _ conditions is highest for content shared by weak ties .this suggests that weak ties consume and transmit information that one is unlikely to be exposed to otherwise , thereby increasing the diversity of information propagated within the network .strong ties may be individually more influential , but how much diffusion occurs in aggregate through these ties depends on the underlying distribution of tie strength ( i.e. figure [ fig : ts_dist ] ) . using the experimental data, we can estimate the amount of contagion on the feed generated by strong and weak ties .the causal effect of exposure to information shared by friends with tie strength is given by the average treatment effect on the treated : to determine the collective impact of ties of strength , we multiply this quantity by the fraction of links displayed in all users feeds posted by friends of tie strength , denoted by . in order to compare the impact of weak and strong ties , we must set a cutoff value for the minimum amount of interaction required between two individuals in order to consider that tie strong . 
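Whatever cutoff is chosen, the bookkeeping for this comparison is simple: an average treatment effect on the treated per tie strength, weighted by the fraction of feed exposures contributed by ties of that strength. The per-strength sharing probabilities and exposure fractions below are made-up placeholders, not the measured values, and tie strength is indexed by the number of interactions.

```python
import numpy as np

# Hypothetical per-tie-strength quantities (index = number of interactions):
# p_feed[s], p_no_feed[s] = P(share | exposed / not exposed) for friends of strength s,
# exposure_frac[s]        = fraction of all feed exposures coming from such friends.
p_feed        = np.array([0.010, 0.018, 0.024, 0.030])   # placeholder values
p_no_feed     = np.array([0.001, 0.003, 0.005, 0.008])   # placeholder values
exposure_frac = np.array([0.60, 0.25, 0.10, 0.05])       # placeholder values

att = p_feed - p_no_feed        # average treatment effect on the treated, per strength

def attributed_sharing(cutoff):
    """Total feed-attributed sharing from weak (< cutoff interactions) vs strong ties."""
    weighted = att * exposure_frac
    return weighted[:cutoff].sum(), weighted[cutoff:].sum()

weak, strong = attributed_sharing(cutoff=1)   # e.g. a single interaction marks a strong tie
print(f"weak-tie contribution: {weak:.4f}, strong-tie contribution: {strong:.4f}")
```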
setting the cutoff at ( a single interaction )provides the most generous classification of strong ties while preserving some meaningful distinction between strong and weak ties , thereby giving the most influence credit to strong ties . under this categorization of strong and weak ties ,the estimated total fraction of sharing events that can be attributed to weak and strong ties is the average treatment effect on the treated weighted by the proportion of url exposures from each tie type : we illustrate this comparison in figure [ fig : influence_volume ] , and show that by a wide margin , the majority of influence is generated by weak ties .although we have shown that strong ties are individually more influential , the effect of strong ties is not large enough to match the sheer abundance of weak ties .social networks may influence an individual s behavior , but they also reflect the individual s own activities , interests , and opinions .these commonalities make it nearly impossible to determine from observational data whether any particular interaction , mode of communication , or social environment is responsible for the apparent spread of a behavior through a network . in the context of our study, there are three possible mechanisms that may explain diffusion - like phenomena : ( 1 ) an individual shares a link on facebook , and exposure to this information on the feed causes a friend to re - share that same link . ( 2 ) friends visit the same web page and share a link to that web page on facebook , independently of one another .( 3 ) an individual shares a link within and external to facebook , and exposure to the externally shared information causes a friend to share the link on facebook .our experiment determines the causal effect of the feed on the spread of sharing behaviors by comparing the likelihood of sharing under the _ feed _ condition ( possible causes 1 - 3 ) with the likelihood under the _ no feed _ condition ( possible causes 2 - 3 ) . our experiment generalizes mark granovetter s predictions about the strength of weak ties to the spread of everyday information .weak ties are argued to have access to more diverse information because they are expected to have fewer mutual contacts ; each individual has access to information that the other does not . 
for informationthat is almost exclusively embedded within few individuals , like job openings or future strategic plans , weak ties play a necessarily role in facilitating information flow .this reasoning , however , does not necessarily apply to the spread of widely available information , and the relationship between tie strength and information access is not immediately obvious .our experiment sheds light on how tie strength relates to information access within a broader context , and suggests that weak ties , defined directly in terms of interaction propensities , diffuse novel information that would not have otherwise spread .although weak ties can serve a critical bridging function , the influence that weak ties exert has never before been measured empirically at a systemic level .we find that the majority of influence results from exposure to individual weak ties , which indicates that most information diffusion on facebook is driven by simple contagion .this stands in contrast to prior studies of influence on the adoption of products , behaviors or opinions , which center around the effect of having multiple or densely connected contacts who have adopted .our results suggest that in large online environments , the low cost of disseminating information fosters diffusion dynamics that are different from situations where adoption is subject to positive externalities or carries a high cost . because we are unable to observe interactions that occur outside of facebook, a limitation of our study is that we can only fully identify causal effects within the site .correlated sharing in the _ no feed _ condition may occur because friends independently visit and share the same page as one another , or because one user is influenced to share via an external communication channel .although we are not able to directly evaluate the relative contribution of these two potential causes , our results allow us to obtain a bound on the effect on sharing behavior within the site .the probability of sharing in the _ no feed _ condition , which is a combination of similarity and external influence , is an upper bound on how much sharing occurs because of homophily - related effects .likewise , the difference in the probability of sharing within the _ feed _ and _ no feed _ condition gives a lower bound on how much on - site sharing is due to interpersonal influence along any communication medium .the mass adoption of online social networking systems has the potential to dramatically alter an individual s exposure to new information . by applying an experimental approach to measuring diffusion outcomes within one of the largest human communication networks , we are able to rigorously quantify the effect of social networks on information spread .the present work sheds light on aggregate trends over a large population ; future studies may investigate how properties of the individual , such as age , gender , and nationality , or features of content , such as popularity and breadth of appeal , relate to the influence and its confounds .we would like to thank michael d. cohen , dean eckles , emily falk , james fowler , and brian karrer for their discussions and feedback on this work .this work was supported in part by nsf iis-0746646 .e. adar , j. teevan , and s. t. dumais .resonance on the web : web dynamics and revisitation patterns . in _ proceedings of the 27th international conference on human factors in computing systems_ , chi 09 , pages 13811390 , new york , ny , usa , 2009 .acm press .a. 
anagnostopoulos , r. kumar , and m. mahdian .influence and correlation in social networks . in _ proceedings of the 14th internal conference on knowledgediscover & data mining _ , pages 715 , new york , ny , usa , 2008 .acm press .l. backstrom , d. huttenlocher , j. kleinberg , and x. lan .group formation in large social networks : membership , growth , and evolution . in _kdd 06 : proceedings of the 12th acm sigkdd international conference on knowledge discovery and data mining _ , pages 4454 , new york , ny , usa , 2006 .acm .m. cha , a. mislove , and k. p. gummadi . a measurement - driven analysis of information propagation in the flickr social network . in _ proceedings of the 18th international conference on world wide web _ , www 09 , pages 721730 , new york , ny , usa , 2009 .acm .e. gilbert and k. karahalios . predicting tie strength with social media . in _ proceedings of the 27th international conference on human factors in computing systems _ , chi 09 , pages 211220 , new york , ny , usa , 2009 .m. gomez rodriguez , j. leskovec , and a. krause .inferring networks of diffusion and influence . in _ proceedings of the 16th acm sigkdd international conference on knowledge discovery and data mining _ , kdd 10 , pages 10191028 , new york , ny , usa , 2010 .a. goyal , f. bonchi , and l. v. lakshmanan . learning influence probabilities in social networks . in _ proceedings of the third acm international conference on web search and data mining _ , wsdm 10 , pages 241250 , new york , ny , usa , 2010 .acm .k. lerman and r. ghosh .information contagion : an empirical study of the spread of news on digg and twitter social networks . in _ proceedings of 4th international conference on weblogs and social media ( icwsm ) _ , 2010 .e. s. sun , i. rosenn , c. a. marlow , and t. m. lento .gesundheit ! modeling contagion through facebook news feed . in _ proceedings of the 3rd intl aaai conference on weblogs and social media_ , san jose , ca , 2009 . aaai .
|
online social networking technologies enable individuals to simultaneously share information with any number of peers . quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom , but also of whether individuals would still propagate information in the absence of social signals about that information . we examine the role of social networks in online information diffusion with a large - scale field experiment that randomizes exposure to signals about friends information sharing among 253 million subjects in situ . those who are exposed are significantly more likely to spread information , and do so sooner than those who are not exposed . we further examine the relative role of strong and weak ties in information propagation . we show that , although stronger ties are individually more influential , it is the more abundant weak ties who are responsible for the propagation of novel information . this suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed . = 10000 = 10000
|