The problem of distributed control for strings of dynamical agents can be seen as a particular instance of flocking for multi-agent systems. The flocking problem has been intensively studied in the last decade for agents with linear or nonlinear dynamics. Generally speaking, two important features characterize the flocking behavior of autonomous agents: cohesion and collision avoidance. In multi-agent systems they are implemented as connectivity/topology preservation and collision avoidance, respectively. While these features are sufficient for reasonable flocking behavior, in the control of strings (also known as platooning) it is essential to add a supplementary requirement related to avoiding the amplification of oscillations through the formation, a phenomenon known as _string instability_. Even if string instability is circumvented, supplemental problems may be caused by the so-called accordion effect (or slinky effect), consisting of weakly attenuated oscillations and long settling times of the relative positions and velocities of the vehicles. Both string instability and the slinky effect are consequences of the notorious lack of scalability of networks of dynamical agents. This lack of scalability causes the performance of the control scheme to depend on the number of vehicles in the formation and also on the vehicle's position in the formation. For the case of LTI agents, a novel distributed control architecture that can guarantee string stability even for distance-based headways, and can also guarantee perfect trajectory tracking (_i.e._ the complete elimination of the slinky effect) in the presence of bounded disturbances and communication delays, has recently been proposed in . For many important applications in networks of dynamical agents, the regulated signals in trajectory tracking or synchronization problems represent relative measurements, such as interspacing distances, relative velocities (with respect to neighboring agents) or phase differences between (neighboring) coupled oscillators. In this paper, we present preliminary results on a class of novel distributed control policies where the relative measurements (with respect to the neighboring agents) are used by the local sub-controllers in conjunction with the knowledge of the control actions of the sub-controllers at the neighboring agents. It turns out that the resulting distributed control schemes vastly outperform the distributed architectures based solely on relative measurements. In this work, the dynamical models of the agents are taken to be nonlinear and time invariant, satisfying a global Lipschitz-like condition. Such models represent an effective framework for describing the dynamics of the transmission block of road vehicles, from the brake/throttle controls to the vehicle's position on the roadway.
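As a rough illustration of the class of agent models considered here, the sketch below integrates a single vehicle with first-order longitudinal dynamics x' = v, v' = f(v) + u; the drag-like nonlinearity f and all numerical constants are assumptions made for the example, not the model used in the paper.

```python
import numpy as np

def f(v, c0=0.5, c1=0.05):
    """Assumed globally Lipschitz drag-like nonlinearity; any map satisfying the
    Lipschitz-like condition of the paper could be substituted here."""
    return -c0 * np.tanh(v) - c1 * v

def step(x, v, u, dt=0.01):
    """One forward-Euler step of the assumed agent model x' = v, v' = f(v) + u."""
    return x + dt * v, v + dt * (f(v) + u)

# open-loop sanity check: one agent under a constant throttle command
x, v = 0.0, 20.0
for _ in range(1000):
    x, v = step(x, v, u=0.5)
print(x, v)
```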
For illustrative simplicity, we only look at the scenario of identical agents for which the formation graph is a string; however, the proposed scheme could be adapted to multi-tree (no self-loops) graphs, once the relative errors are defined adequately in order to avoid the well-known formation rigidity problems. It is noteworthy that the string formation alone has important applications, as it addresses the longstanding platooning problem, which is paramount to the autonomous vehicles industry. The problem considered in this paper can be rephrased as a multi-agent flocking problem with collision avoidance. The literature on this topic is very rich and considers both directed or undirected, fixed or time-varying interconnection graphs. The objective of the control scheme is to achieve the synchronization of the trajectories of all agents in the formation with the trajectory of the leader agent. Such trajectory tracking must be achieved while ensuring zero (steady-state) errors of the regulated measures (in our case the relative speed between two consecutive agents) and while avoiding collisions, _i.e._ performing the needed longitudinal steering (brake/throttle) maneuvers that guarantee the avoidance of collision with the preceding vehicle. The approach taken here is based on the use of artificial potential functions in the formulation of the _local_ control laws. When compared to the state of the art, our results represent a consistent extension of the ones in , from at least two perspectives. Firstly, they guarantee stability, velocity matching (trajectory tracking) and collision avoidance even for _directed topologies_, as illustrated by the distributed control scheme reported here. Secondly, they achieve complete _scalability_ with respect to the number of agents in the formation (which is a daunting requirement for platooning systems, even in the case of LTI agents) and also with respect to the connectivity of the communications graph. By comparison, the main result in imposes that all the minor matrices of the weighted Laplacian matrix associated with the interconnection graph are positive definite and lower bounded by a certain constant. This practically requires the maximization of the eigenvalues of the weighted Laplacian matrix, which can be interpreted as the maximization of the number of interconnections in the underlying graph (see ) together with the maximization of its diagonal elements (see the Gershgorin disk theorem). It is worth noting that the first requirement involves the transmission of the exact state of the leader to many agents in the formation, while the second requirement implies high local control gains.
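For concreteness, the following toy computation illustrates the kind of grounded-Laplacian positivity requirement described above; the four-agent adjacency matrix and the predecessor-only sensing pattern are assumptions for the example only, not the graphs analyzed in the cited work.

```python
import numpy as np

def weighted_laplacian(W):
    """Weighted Laplacian L = D - W of a (possibly directed) graph with adjacency W."""
    return np.diag(W.sum(axis=1)) - W

# hypothetical 4-agent string in which each follower only senses its predecessor
W = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
L = weighted_laplacian(W)

# remove the leader's row/column and inspect the spectrum of the remaining minor,
# as a rough proxy for the positive-definiteness condition quoted above
L_grounded = L[1:, 1:]
print(np.linalg.eigvals(L_grounded).real.min())  # > 0 for this toy string
```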
The most interesting feature of our proposed scheme is the fact that it achieves complete _scalability_ with respect to the number of vehicles in the string. The scalability does not result from the tuning of the local sub-controllers, as it is a structural property for the class of all stabilizing controllers, and it is a consequence of the complete "decoupling" of a certain bounded approximation of the closed-loop equations. This allows the study of the closed-loop stability by performing solely individual, local analyses of the closed-loop stability at each agent, which in turn guarantee the aggregated stability of the formation. The paper is organized as follows: in Section II we introduce the general framework and we formulate the platooning control problem. Section III provides a preliminary description of the novel distributed control architecture introduced in this work, along with a first glimpse at the closed-loop dynamics "decoupling" featured by the control scheme. Section IV contains the main result, as it delineates the guarantees for stability, velocity matching and collision avoidance. The of a vector is defined as , with a positive constant; this is a class function of and is differentiable everywhere. A set is said to be _forward invariant_ with respect to an equation if any solution of the equation satisfies . [apf] Artificial potential function (APF). The function is a differentiable, nonnegative, radially unbounded function of satisfying the following properties: *(i)* as ; *(ii)* has a unique minimum, which is attained at , with being a positive constant. We consider a _homogeneous_ group of agents (_e.g._ autonomous road vehicles) moving along the same (positive) direction of a roadway, with the origin at the starting point of the leader. The dynamical model for the agents, relating the control signal of the vehicle to its position on the roadway, is given by [dacia] , where is the instantaneous speed of the agent, is its command signal and is the initial interspacing distance between the agent and its predecessor in the string. Throughout the sequel we will use the notation to denote (especially for the graphical representations) the input-output operator of the dynamical system from ([nlin]), with the initial conditions ([initcond]). [lider] The index "" is reserved for the _leader vehicle_, the first vehicle in the string, for which we assume that there is no controller on board; consequently, its command signal will represent a _reference_ signal for the entire formation. We further define to be the interspacing and relative velocity error signals, respectively (with respect to the predecessor in the string). By differentiating the first equation in ([z]) it follows that , therefore implying that constant interspacing errors (in steady state) are equivalent to zero relative velocity errors, and also allowing us to write the following time evolution for the relative velocity error of the vehicle . An inherent difficulty in platooning control is rooted in the _nested_ nature of the interdependencies between the regulated signals.
Specifically, the regulated errors (_e.g._ interspacing errors or relative velocity errors) at the agent depend on the regulated errors of its predecessor (the agent) and so on, such that, by a recursive argument going through all the predecessors of the agent, they ultimately depend on the trajectory of the leader vehicle, which represents the reference for the entire formation. We introduce a novel control architecture featuring certain beneficial "decoupling" properties of the closed-loop dynamics that avoid the pitfalls of the aforementioned _nested_ interdependencies. The distributed control policies rely only on information locally available to each vehicle. For the scope of this paper, we consider nonlinear controllers built on so-called artificial potential functions (APF); in particular, we will look at control laws of the type with , where each of the functions is an artificial potential function (Definition 7 in ), with being a proportional gain to be designed for supplemental performance requirements. With the notation from ([z]), the control policy ([ours]) for the vehicle becomes and it can further be written as the sum of the following two components: firstly, the control signal of the preceding vehicle, which is received onboard the vehicle via wireless communications (_e.g._ digital radio); secondly, the _local_ component, which we denote by and which is based on the measurements ([z]) that are _locally_ available to the vehicle, as they can be acquired via onboard lidar sensors. Thus, the control law reads: . By further denoting by the input-output operator in ([figurehelper]) from and , respectively, to , namely , the resulting control architecture for any two consecutive vehicles () can be pictured as in Figure [f2]. Note that, due to the assumed homogeneity of the formation, the wirelessly received predecessor control signal appears unfiltered in ([oursbis]). For the illustrative simplicity of the exposition, we look only at the scenario in which there are no communication-induced delays. A (GPS time based) synchronization mechanism that can cope with the communication-induced time delays will be addressed in future work. The control policy ([ours]) entails a highly beneficial "decoupling" feature of the closed-loop dynamics at each agent, as we simply illustrate next. Firstly, note that by plugging ([ours]) into ([primo]) we obtain the following closed-loop error equations at the agent: .
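To make the structure of such an ([ours])-type law concrete, the snippet below sketches one plausible instantiation: the predecessor's transmitted input plus a local term built from an assumed artificial potential with minimum at a desired gap D and from the relative velocity. The particular potential, the gains and the sign conventions are illustrative choices, not the exact expressions of the paper.

```python
D = 10.0  # assumed desired interspacing [m]

def apf(z):
    """Assumed artificial potential: unbounded as the gap z -> 0, unique minimum at z = D."""
    return (z - D) ** 2 / z

def apf_grad(z):
    # derivative of the assumed potential above
    return 1.0 - (D / z) ** 2

def control_i(u_prev, z, zdot, kp=1.0, kv=2.0):
    """Schematic local law for vehicle i:
    u_prev -- control of vehicle i-1, received over the radio link
    z      -- measured gap to the predecessor (lidar)
    zdot   -- relative velocity v_{i-1} - v_i (lidar)
    """
    return u_prev + kp * apf_grad(z) + kv * zdot
```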
The following result will be instrumental in the sequel. Consider the following Lyapunov candidate functions: . [instrumental-lemma] The differential of the Lyapunov candidate function introduced in , along the trajectories of ([dacia]) and ([ours]), is given by and does not depend on the particular choice of the APFs. Differentiating the APF at the agent with respect to time yields , and by employing the antisymmetry property of APFs we get that ; therefore, from ([lk]) it follows that . A preliminary framework in which the advantages of the closed-loop decoupling become apparent is illustrated by the following result: _local_ stabilization at the agent is achieved irrespective of the dynamics involved at any other agents in the string and without invoking any stability arguments for the entire formation. [concave] Let the real function in ([nlin]) be concave and differentiable, with its differential upper-bounded by a given constant. Then the controller guarantees the stability of as an equilibrium point of the decoupled system as far as . Let us notice , and since is upper-bounded by a given constant, we get ; therefore, along the trajectory of the closed-loop error system the following holds: . Using Gronwall's lemma, the stability of as an equilibrium point of the decoupled system is ensured by the stability of as an equilibrium point of . The derivative of the Lyapunov function defined in along the trajectory of can be computed following the proof of Lemma as . The proof ends by remarking that guarantees that . As we will show in this section, our methodology needs only information from the predecessor, without employing any information from the leader agent. While the control gains are strictly related to the reactivity of the system (_i.e._ faster systems need higher controller gains), our scheme does not require making the leader's information (instantaneous speed or acceleration) available to all other vehicles in the string (the virtual leaders from ), rendering our approach better suited to practical platooning applications. Our directed communications scheme necessitates a minimal information exchange and sensing radius for all agents (each agent performs measurements and receives information only with respect to its predecessor). The following result is the main result of this section, as it delineates a "decoupling" property of the closed-loop dynamics achieved by the ([ours])-type control policy, along with velocity matching and collision avoidance. [nostra] If the function from ([nlin]) satisfies the global Lipschitz-like condition (Assumption 1 in ), then, for all ([ours])-type control laws with , the following hold: *(a)* Given the Lyapunov function introduced in ([lk]), _local_ to the -th agent, the sub-level sets of are compact and they represent forward invariant sets for the _local_ closed-loop dynamics ([secundo]) of the vehicle. *(b)* The controller guarantees velocity matching and collision avoidance. Furthermore, considering , there exists such that ; therefore, a pre-specified safety distance can be imposed by the initial conditions. *(a)* We show that the _local_ sub-level sets of are compact. Note that implies that and . Since is radially unbounded, this implies that is bounded and consequently is bounded; therefore is a bounded set. Moreover, due to continuity of and , one obtains that is a closed set; precisely, is the pre-image of a closed set through a continuous function.
In the Banach space it therefore holds that is closed and bounded, thus is compact. Furthermore, point *(a)* and the Lipschitz-like assumption ([lipsha]) on imply that along the trajectories of ([secundo]). Therefore it suffices to choose the controller gain in order to guarantee that along the trajectories of ([secundo]) and that is a forward invariant set for the decoupled closed-loop system, local to the -th vehicle. *(b)* From the properties of the APF it follows that when . Consequently, denote with , and so for it holds that is forward invariant, yielding . This implies that , and using we conclude that . It is noteworthy that is implicitly defined by , which in turn depends on the initial condition; therefore, the interspacing distance between the -th and -th vehicles is imposed by the initial condition. Finally, by employing LaSalle's invariance principle we conclude that the Lyapunov function converges asymptotically to its minimum (_i.e._ ) and consequently converges to ; therefore, velocity matching is guaranteed. Given as introduced in ([lk]), the string formation's steady-state configuration is attained at the minimum of the following formation-level Lyapunov function: , which coincides componentwise with the minima of the Lyapunov functions ([lk]) _local_ to the -th agent. Furthermore, the level sets of given by are compact and they represent forward invariant sets for the closed-loop dynamics of the entire formation, as given in ([dacia]) and ([ours]) with . Consequently, velocity matching and inter-vehicle collision avoidance are achieved _without the need for inserting exact leader information in the formation_ (virtual leaders), while maintaining a safe interspacing distance. It follows from the definition of ([l]) and Lemma [instrumental-lemma] that along the trajectories of ([dacia]) and ([ours]) one has . Let us notice that is a finite Cartesian product of the compact sets , thus is compact. Furthermore, designing the local controllers as in Theorem [nostra], it follows that is a forward invariant set for the closed-loop dynamics of the entire formation. Consequently, without the need for inserting exact leader information in the formation, one guarantees velocity matching and, moreover, from point *(b)* in Theorem [nostra] the vehicles maintain a strictly positive interspacing distance. We have introduced a novel distributed control architecture for a class of nonlinear dynamical agents moving in a string formation, while guaranteeing trajectory tracking (with respect to the leader agent) and collision avoidance. The performance of the proposed scheme is entirely scalable with respect to the number of vehicles in the string. The scalability is a consequence of the complete "decoupling" of a certain bounded approximation of the closed-loop equations, which allows the study of the closed-loop stability by performing solely individual, local analyses of the closed-loop stability at each agent, which in turn guarantee the aggregated stability of the overall formation.
P. G. Mehta, P. Barooah and J. P. Hespanha, "Mistuning-based control design to improve closed-loop stability margins of vehicular platoons", _IEEE Trans. on Automatic Control_, vol. 54, no. 9, pp. 2100-2113, 2009.
R. H. Middleton and J. Braslavsky, "String stability in classes of linear time invariant formation control with limited communication range", _IEEE Trans. on Automatic Control_, vol. 55, no. 7, pp. 1519-1530, 2010.
L. Bușoniu and I.-C. Morărescu, "Topology-preserving flocking of nonlinear agents using optimistic planning", _Control Theory and Technology_, special issue on learning and control in cooperative multi-agent systems, 13(1), 70-81, 2015.
G. Orosz and S. P. Shah, "A nonlinear framework for autonomous cruise control", _ASME 5th Annual Dynamic Systems and Control Conference joint with JSME 11th Motion and Vibration Conference_, Fort Lauderdale, Florida, 2012.
We introduce a novel distributed control architecture for a class of nonlinear dynamical agents moving in a string formation, while guaranteeing trajectory tracking and collision avoidance. An interesting attribute of the proposed scheme is the fact that its performance is scalable with respect to the number of vehicles in the string. The scalability is a consequence of the complete "decoupling" of a certain bounded approximation of the closed-loop equations, which allows the study of the closed-loop stability by performing solely individual, local analyses of the closed-loop stability at each agent, which in turn guarantee the aggregated stability of the formation.
It is known that an accurate stability analysis using the Lyapunov approach requires a complete quadratic Lyapunov-Krasovskii functional. Such an approach was first implemented in the form of the discretized Lyapunov-Krasovskii functional method in , and a refined version was presented in . In this method, the kernel of the Lyapunov-Krasovskii functional is piecewise linear. An alternative approach is the sum-of-squares (SOS) method presented in ; in the SOS method, the kernel is polynomial. In both approaches, the stability problem is reduced to a semidefinite programming problem or, more specifically, a linear matrix inequality problem. For many practical systems, the number of state variables with delays is very small compared with the total number of state variables. For such systems, a special form of the coupled differential-difference equation formulation, or its generalized counterpart, the coupled differential-functional equation formulation, proves to be much more efficient in numerical computation. The differential-difference formulations can also model systems of neutral type. The discretized Lyapunov-Krasovskii functional approach to the stability of differential-difference equations is documented in , and , and the SOS formulation can be found in . Control synthesis based on complete quadratic Lyapunov-Krasovskii functional stability conditions is still rare. An early example is , in which a more limited class of Lyapunov-Krasovskii functionals is used and some parameter constraints are imposed. Recently, a synthesis based on the inverse of the kernel operator associated with the Lyapunov-Krasovskii functional for time-delay systems of retarded type in the SOS formulation was developed in and . This paper extends the method by Peet et al. to coupled differential-functional equations. The inverse operator is derived using a direct algebraic approach rather than the series expansion approach. The basic idea of such a synthesis is outlined as follows. Consider the coupled differential-functional equations , where , , , , , is the time delay, and denote the sets of real vectors and matrices with dimensions , respectively. The initial conditions are defined as , where represents a shift and restriction of defined by , , and represents the set of piecewise continuous functions on the delay interval. Or, using ([37]), we have , and the right-hand sides of ([38])-([39]) are equal for arbitrary and if and only if ([28])-([30]) are satisfied. Obviously, the above is a generalization of Theorem 3 in . In this section, we will present an analytical expression for the inverse of the operator when it is separable. Similarly to , such an analytic expression for the inverse operator can be used to expedite the construction of the stabilizing controller in the controller synthesis problem. *Definition 1:* An operator defined in ([2]) is said to be separable if for some constant matrices and and a column vector function . [thm1] Assume in ([2]) is separable.
then , provided that all the inverse matrices below are well defined , its inverse may be expressed as (s)\nonumber\\ & = & \left[\begin{array}{c}\hat{p}\psi+\int_{-r}^{0}\hat{q}(\theta)\phi(\theta)d\theta\\ r\hat{q}^{t}(s)\psi+\hat{s}(s)\phi(s)+\int_{-r}^{0}\hat{r}(s,\theta)\phi(\theta)d\theta \end{array}\right],\end{aligned}\ ] ] where ^{-1 } , \end{array}\ ] ] (i+k\gamma)^{-1 } , \end{array}\ ] ] and denotes the identity matrix with appropriate dimension .let the operator defined by the right hand side of ( [ 42 ] ) be denoted as , then (s)=\left[\begin{array}{c}\lambda_{1}\\ \lambda_{2 } \end{array}\right],\ ] ] where using ( [ 40])-([41 ] ) and ( [ 43])-([47 ] ) , we obtain \nonumber\\ & & -r p^{-1}htkh^{t}\nonumber\\ & = & i , \end{array}\ ] ] p^{-1}h - p^{-1}ht\right.\nonumber\\ & & \left .- p^{-1}htk\gamma\right)z(\theta)\nonumber\\ & = & \left[p^{-1}h+p^{-1}ht(r kh^{t}p^{-1}h - i - k\gamma)\right]z(\theta)\nonumber\\ & = & \left(p^{-1}h - p^{-1}h\right)z(\theta)\nonumber\\ & = & 0 , \end{array}\ ] ] (i+k\gamma)^{-1}kh^{t}\right\}\nonumber\\ & = & \hat{z}^{t}(s)\left\{i - t^t ( i - r h^{t}p^{-1}h(i+k\gamma)^{-1}k ) \right.\nonumber\\ & & \left.-\gamma(i+k\gamma)^{-1}k\right\}h^{t}\nonumber\\ & = & \hat{z}^{t}(s)\left\{i - t^t(i - r h^{t}p^{-1}hk(i+\gamma k)^{-1 } ) \right.\nonumber\\ & & \left.-\gamma k(i+\gamma k)^{-1}\right\}h^{t}\nonumber\\ & = & \hat{z}^{t}(s)\left\{i - t^t(i+\gamma k - r h^{t}p^{-1}hk)(i+\gamma k)^{-1}\right.\nonumber\\ & & \left.-\gamma k(i+\gamma k)^{-1}\right\}h^{t}\nonumber\\ & = & \hat{z}^{t}(s)[i-(i+\gamma k)^{-1}-\gamma k(i+\gamma k)^{-1}]h^{t}\nonumber\\ & = & 0 , \end{array}\ ] ] (i+k\gamma)^{-1}(i+k\gamma)\right\}z(\theta)\nonumber\\ & = & \hat{z}^{t}(s)\left(-rt^t h^{t}p^{-1}h+\gamma+rt^t h^{t}p^{-1}h\right.\nonumber\\ & & \left.-\gamma\right)z(\theta)\nonumber\\ & = & 0 . \end{array}\ ] ] thus , we have shown =\left[\begin{array}{c}\psi\\ \phi\end{array}\right ] , \end{array}\ ] ] for all \in z ] . 
since , ,\mathcal{p}^{-1}\mathcal{a}\left[\begin{array}{c}\psi\\ \phi(s ) \end{array}\right]\right\rangle\\ & & + \left\langle\mathcal{a}\left[\begin{array}{c}\psi\\\phi(s ) \end{array}\right],\mathcal{p}^{-1}\left[\begin{array}{c}\psi\\ \phi(s ) \end{array}\right]\right\rangle\\ & = & \left\langle \mathcal{p}^{-1}\left[\begin{array}{c}\psi\\ \phi(s ) \end{array}\right],\mathcal{a}\mathcal{p}\mathcal { p}^{-1}\left[\begin{array}{c}\psi\\ \phi(s ) \end{array}\right]\right\rangle\\ & & + \left\langle\mathcal{a}\mathcal { p}\mathcal { p}^{-1}\left[\begin{array}{c}\psi\\ \phi(s ) \end{array}\right],\mathcal{p}^{-1}\left[\begin{array}{c}\psi\\ \phi(s ) \end{array}\right]\right\rangle.\\ \end{array}\ ] ] next , we note that if we define as \right)\nonumber\\ & = & k_{0}x(t)+k_{1}y(t - r)+\int_{-r}^{0}k_{2}(s)y(t+s)ds,\end{aligned}\ ] ] and as \right)\nonumber\\ & = & m_{0}x(t)+m_{1}y(t - r)+\int_{-r}^{0}m_{2}(s)y(t+s)ds,\end{aligned}\ ] ] then .we construct the controller \nonumber\\ & = & m_{0}\left(\hat p x+\int_{-r}^{0 } \hat q(\theta)y(\theta)d\theta\right)\nonumber\\ & & + m_{1}\left(r \hat q^{t}(-r)x+\hat s(-r)y(-r)+\int_{-r}^{0}\hat r(-r,\theta)y(\theta)d\theta\right)\nonumber\\ & & + \int_{-r}^{0}m_{2}(s)\left(r\hat q^{t}(s)x+\hat s(s)y(s)\right.\nonumber\\ & & \left.+\int_{-r}^{0}\hat r(s,\theta)y(\theta)d\theta\right)ds\nonumber\\ & = & \left(m_{0}\hat p+rm_{1}\hat q^{t}(-r)+r\int_{-r}^{0}m_{2}(s ) \hat q^{t}(s)ds\right)x\nonumber\\ & & + m_{1}\hat s(-r)y(-r)+\int_{-r}^{0}\left(m_{0}\hat q(s)+m_{1}\hat r(-r , s)\right.\nonumber\\ & & \left.+m_{2}(s)\hat s(s)+\int_{-r}^{0}m_{2}(\theta)\hat r(\theta , s)d\theta\right)y(s)ds\nonumber\\ & = & \mathcal{k}\left[\begin{array}{c}x\\ y\end{array}\right].\end{aligned}\ ] ] now we define a new state =\mathcal{p}^{-1}\left[\begin{array}{c}\psi\\ \phi(s)\end{array}\right]\in x ] , then the closed - loop system is stable if , where ,\mathcal{a}\mathcal{p}\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right]\right\rangle\nonumber\\ & & + \left\langle\mathcal{a}\mathcal{p}\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right],\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right]\right\rangle\nonumber\\ & & + \left\langle \mathcal{fm}\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right],\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right]\right\rangle\nonumber\\ & & + \left\langle \left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right],\mathcal{fm}\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right]\right\rangle . 
\end{array}\ ] ] to show that , we examine and separately .first , we have =\left[\begin{array}{c}\psi\\ \phi(s)\end{array}\right],\ ] ] where then , ,\mathcal{a}\mathcal{p}\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(s ) \end{array}\right]\right\rangle\nonumber\\ & = & \int_{-r}^{0}\hat{\psi}^{t}\psi ds + \int_{-r}^{0}\hat{\phi}^{t}(s)\phi(s)ds\nonumber\\ & = & r \hat{\psi}^{t}ap \hat{\psi } + r\int_{-r}^{0}\hat{\psi}^{t}a q(s)\hat{\phi}(s)ds+r \hat{\psi}^{t}br q^{t}(-r)\hat{\psi}\nonumber\\ & & + r \hat{\psi}^{t}b s(-r)\hat{\phi}(-r)+r\int_{-r}^{0}\hat{\psi}^{t}br(-r,\theta)\hat{\phi}(\theta)d\theta\nonumber\\ & & + \int_{-r}^{0}r\hat{\phi}^{t}(s)\dot{q}^{t}(s)\hat{\psi } ds+\int_{-r}^{0}\hat{\phi}^{t}(s)\dot{s}(s)\hat{\phi}(s)ds\nonumber\\ & & + \int_{-r}^{0}\int_{-r}^{0}\hat{\phi}^{t}(s)\frac{d}{ds}r(s,\theta)\hat{\phi}(\theta)dsd\theta \nonumber\\ & & + \int_{-r}^{0}\hat{\phi}^{t}(s)s(s)\dot{\hat{\phi}}(s)ds\nonumber\\ & = & \int_{-r}^{0}\left[\begin{array}{c}\hat{\psi}\\ \hat{\phi}(-r)\\ \hat{\phi}(s)\end{array}\right]^{t}\sigma\left[\begin{array}{c}\hat{\psi } \\ \hat{\phi}(-r)\\ \hat{\phi}(s)\end{array}\right]ds\nonumber\\ & & + \int_{-r}^{0}\int_{-r}^{0}\hat{\phi}^{t}(s)\frac{d}{ds}r(s,\theta)\hat{\phi}(\theta)dsd\theta\nonumber\\ & & + \int_{-r}^{0}\hat{\phi}^{t}(s)s(s)\dot{\hat{\phi}}(s)ds , \end{array}\ ] ] where ,\ ] ] where .since \in x ] .this approach is described in more detail in and . in the following , we present a numerical example to illustrate the controller obtained from the condition in proposition 1 .we consider the following system with a feedback controller as follows (t)\nonumber\\ & & + \left[\begin{array}{cc } 0.5 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 0\\ 0 & 1 \end{array } \right]y(t - r)+\left[\begin{array}{c}1\\ 0\\ 0\\ 0\\ 0\\ 1 \end{array } \right ] u(t),\label{s1}\\ y(t)&=&\left[\begin{array}{cccccc } -0.2 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array } \right]x(t),\label{s2}\end{aligned}\ ] ] where . by using proposition 1 , together with the tools of mupad , matlab , sostools and polynomials with degree 2, we obtain the controller ^{t}x(t)+\left[\begin{array}{c } -0.239 \\ -0.343\end{array}\right]^{t}y(t - r)\nonumber\\ & & + \int_{-1.6}^{0}k_{2}y(t+s)ds,\end{aligned}\ ] ] where ^{t}.\ ] ] using controller ( [ 19 ] ) coupled with system ( [ s1])-([s2 ] ) we simulate the closed - loop system , which is illustrated in fig.1 . ) -([s2 ] ) coupled with stabilizing controller from prop . 1 ]in this paper , we have obtained an analytic formulation for the inverse of jointly positive multiplier and integral operators as defined in .this formulation has the advantage that it eliminates the need for either individual positivity of the multiplier and integral operators or the need to use a series expansion to find the inverse .this inversion formula is applied to controller synthesis of coupled differential - difference equations .the use of the differential - difference formulation has the advantage that the size of the resulting decision variables is reduced , thereby allowing for control of systems with larger numbers of states .these methods are illustrated by designing a stabilizing controller for a system with 6 states and 2 delay channels ..full state feedback of delayed systems using sos : a new theory of duality ._ ifac proceedings volumes _ , pages 2429 , february 2013 . m.m .lmi parameterization of lyapunov functions for infinite - dimensional systems : a toolbox . _i proceedings of the american control conference _ ,june 4 - 6 , 2014 . h. 
Li. Discretized LKF method for stability of coupled differential-difference equations with multiple discrete and distributed delays. _International Journal of Robust and Nonlinear Control_, 22:875-891, 2012.
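As a rough, self-contained illustration of the coupled differential-difference dynamics x'(t) = A x(t) + B y(t - r), y(t) = C x(t) treated in this paper, the sketch below integrates a small toy instance with a constant initial function. The matrices are placeholders; only the delay value r = 1.6 echoes the numerical example above.

```python
import numpy as np

# toy coupled differential-difference system:
#   x'(t) = A x(t) + B y(t - r),   y(t) = C x(t)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.array([[0.3],
              [0.1]])
C = np.array([[1.0, 0.0]])
r, dt = 1.6, 0.01                      # delay and (assumed) integration step
delay_steps = int(round(r / dt))

x = np.array([1.0, -0.5])
y_hist = [(C @ x).item()] * delay_steps  # constant initial function on [-r, 0]

for _ in range(2000):
    y_delayed = y_hist.pop(0)            # y(t - r)
    x = x + dt * (A @ x + (B * y_delayed).ravel())
    y_hist.append((C @ x).item())
print(x)  # decays towards the origin for this stable toy instance
```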
This article presents the inverse of the kernel operator associated with the complete quadratic Lyapunov-Krasovskii functional for coupled differential-functional equations when the kernel operator is separable. Similarly to the case of time-delay systems of retarded type, the inverse operator is instrumental in control synthesis. Unlike the power series expansion approach used in the previous literature, a direct algebraic method is used here. It is shown that the domain of definition of the infinitesimal generator is an invariant subspace of the inverse operator if it is an invariant subspace of the kernel operator. The process of control synthesis using the inverse operator is described, and a numerical example is presented using the sum-of-squares formulation. Keywords: Lyapunov-Krasovskii functional, linear operator, time delay, sum of squares.
_Secret sharing_ is a task where a _dealer_ sends a secret to (possibly dishonest) _players_ so that the cooperation of a minimum number of players is required to decode the secret. Protocols that accomplish this are called -threshold schemes. The need for such a task appears naturally in many situations, from children's games and online chats to banking, industry, and military security: the secret message cannot be entrusted to any individual, but coordinated action is required to decrypt it in order to prevent wrongdoing. For the classical implementation of the simplest -threshold scheme, Alice, the dealer, encodes her secret into a binary string and adds to it a random string of the same length, resulting in the coded cypher , where "" denotes addition modulo . She then sends and , respectively, to the players Bob and Charlie. While the individual parts and carry no information about the secret, only by collaboration can the players recover by adding their strings together: . General -threshold classical schemes are a bit more involved. Such protocols, however, face the same problem as any other classical key distribution protocol: _eavesdropping_. An eavesdropper, Eve, or even a dishonest player, can intercept the transmission and copy the parts sent from the dealer to the players, thus accessing the secret. An obvious way to proceed would be for Alice to first employ standard two-party quantum key distribution (QKD) protocols to establish separate secure secret keys with Bob and Charlie, then implement the classical procedure to split the secret into parts and , and use the obtained secret keys to securely transmit these parts to each player. The advantage of this protocol, which we call parallel QKD (pQKD), is that it exploits the unconditional security offered by well-studied two-party QKD against eavesdropping and, very importantly, that it can be unconditionally secure against any possible dishonest actions of the players. However, pQKD can be demanding in terms of resources, as for a general scenario it requires the implementation of distinct QKD protocols plus the classical procedure to split the secret, thus becoming less efficient with increasing .
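A minimal sketch of the classical (2,2)-threshold scheme described above, using a one-time pad as the random string; the message and helper names are illustrative only.

```python
import secrets

def split_secret(secret: bytes):
    """Classical (2,2)-threshold sharing: Bob receives a uniformly random pad R,
    Charlie receives S = secret XOR R; neither share alone reveals anything."""
    r = secrets.token_bytes(len(secret))
    s = bytes(a ^ b for a, b in zip(secret, r))
    return r, s

def recover(share_bob: bytes, share_charlie: bytes) -> bytes:
    """Only by combining both shares (bitwise XOR) is the secret reconstructed."""
    return bytes(a ^ b for a, b in zip(share_bob, share_charlie))

msg = b"meet at dawn"
bob, charlie = split_secret(msg)
assert recover(bob, charlie) == msg
```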
An alternative proposal to cope with these difficulties lies in so-called _quantum secret sharing_ (QSS), alias quantum sharing of a classical secret (distinct from quantum _state_ sharing, in which the secret is a quantum state rather than a classical message), which allows for implementing a -threshold scheme supported by a _single_ classical post-processing, regardless of the number of players. Unfortunately, as we shall see below, there exists no provably secure QSS scheme at the moment that enjoys the unconditional security of pQKD against both eavesdropping and dishonesty. Hillery, Bužek, and Berthiaume (HBB, for short) proposed the first (2,2)- and (3,3)-threshold QSS schemes that use multipartite entanglement to split the classical secret and protect it from eavesdropping and dishonest players in a single go. Various other entanglement-based (HBB-type) schemes have been proposed, some being more economical in the required multipartite entanglement, while others allow for more general -threshold schemes. A different entanglement-based QSS scheme has also been proposed, where entangled states are directly used as secure carriers and splitters of information. A few experimental demonstrations have been reported as well. The security of all current schemes, however, is limited either to plain external eavesdropping, unrealistically assuming honest players, or to limited types of attacks by eavesdroppers and dishonest participants, yet sharing ideally pure maximally entangled states. Furthermore, all such schemes are vulnerable to participant attack and cheating, and no method is currently known to deal with such conspiracies in general, not even in the ideal case of pure shared states. Zhang, Li, and Man proposed the first _(n,n)_-threshold scheme that required no entanglement and was claimed to be unconditionally secure. Although it required perfect single-photon sources and quantum memories (rendering it impractical for current technology), it was later shown to be vulnerable to various participant attacks. In the same category of entanglement-free QSS schemes, proposed a protocol based on a single photon; although originally claimed to be unconditionally secure, it was also shown to be vulnerable to participant attacks. Alternative schemes can be devised to deal with particular attacks; however, there currently exists no rigorous method against arbitrary participant attacks. To sum up, almost two decades after the conception of QSS, no existing scheme (with or without entanglement) has been proven unconditionally secure against cheating by dishonest players. Hence any practical implementation of secure secret sharing needs to resort to conventional pQKD, while QSS schemes have so far only served as a theoretical curiosity. In this article, we consider a continuous-variable version of an HBB-type scheme. We determine conditions on the extracted key rate for the secret to be unconditionally secure against both external eavesdropping and arbitrary cheating strategies of dishonest participants, in the limit of asymptotic keys, independently of the shared state, and for arbitrary -threshold schemes. The central idea in our approach, to rigorously deal with arbitrary cheating strategies, is to treat the measurements announced by the players as an input/output of an uncharacterized measuring device (black box), analogously to how (possibly hacked) measuring devices are treated in device-independent QKD.
In practice, this translates into making no assumption about the origin of the players' (possibly faked) announced measurements, in contrast to previous QSS approaches that considered the players' actions as trusted and thus suffered from cheating strategies. The dealer, on the other hand, is regarded as a trusted party with trusted devices, which is a natural assumption for this task. At variance with device-independent QKD, where the _devices_ are untrusted, for the QSS task we treat the _players_ themselves as untrusted, independently of their devices. Therefore the framework established in this article, which makes no assumptions about the players' measurements, allows us to prove security against general attacks of eavesdroppers and/or of dishonest players. This is achieved by making a sharp connection with, and extending all the tools of, the recently developed one-sided device-independent QKD (1sDI-QKD), in particular for continuous-variable systems, which has been proven unconditionally secure in the limit of asymptotic keys. However, the approach introduced here is general and can be adapted to derive security proofs for discrete-variable QSS schemes as well as in the regime of finite keys. The paper is organized as follows. In Section [section2] we present our continuous-variable QSS protocol, focusing on the -threshold case. In Section [section3] we provide a proof of its unconditional security, adopting techniques from the 1sDI-QKD paradigm. In Section [section4] we present extensions to -threshold schemes and analyze the experimental feasibility of our protocol. In Section [section5] we summarize our work and discuss some future perspectives. For illustration, we first focus on the -threshold scheme. The trusted dealer Alice prepares a 3-mode continuous-variable entangled state, keeps one mode and sends the other modes to the untrusted players, Bob and Charlie, through individual unknown quantum channels. Alice is assumed to perform homodyne measurements of two canonically conjugate quadratures, and , on her mode, with corresponding outcomes and , the quadratures satisfying the canonical commutation relation.
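The continuous-variable correlations that such a protocol exploits can be illustrated with a classical simulation of Gaussian quadrature statistics. The sketch below samples a two-mode squeezed vacuum (not the paper's three-mode state), with vacuum quadrature variance normalized to 1 and an assumed squeezing parameter, purely as a toy model of the kind of correlations the dealer can verify.

```python
import numpy as np

# two-mode squeezed vacuum with squeezing s (vacuum variance normalized to 1):
#   Var(x_A) = Var(x_B) = cosh(2s), Cov(x_A, x_B) = sinh(2s),
#   hence Var(x_A - x_B) = 2 exp(-2s)
s = 1.0  # assumed squeezing
cov = np.array([[np.cosh(2 * s), np.sinh(2 * s)],
                [np.sinh(2 * s), np.cosh(2 * s)]])
xA, xB = np.random.default_rng(0).multivariate_normal([0.0, 0.0], cov, size=100_000).T
print(np.var(xA - xB), 2 * np.exp(-2 * s))  # the two numbers should roughly agree
```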
The need for secrecy and security is essential in communication. Secret sharing is a conventional protocol to distribute a secret message to a group of parties, who cannot access it individually but need to cooperate in order to decode it. While several variants of this protocol have been investigated, including realizations using quantum systems, the security of quantum secret sharing schemes still remains unproven almost two decades after their original conception. Here we establish an unconditional security proof for continuous-variable entanglement-based quantum secret sharing schemes, in the limit of asymptotic keys and for an arbitrary number of players. We tackle the problem by resorting to the recently developed one-sided device-independent approach to quantum key distribution. We demonstrate theoretically the feasibility of our scheme, which can be implemented with Gaussian states and homodyne measurements, with no need for ideal single-photon sources or quantum memories. Our results contribute to validating quantum secret sharing as a viable primitive for quantum technologies.
With the aim of achieving robust processing of quantum information, one of the main tasks is to prepare and to protect various quantum states. Over the last 15 years, the application of quantum feedback paradigms has been investigated by many physicists as a possible solution for this robust preparation. However, most (if not all) of these efforts have remained at a theoretical level and have not been able to give rise to successful experiments. This is essentially due to the necessity of simulating, in parallel to the system, a quantum filter providing an estimate of the state of the system based on the history of quantum jumps induced by the measurement process. Indeed, it is in general difficult to perform such simulations in real time. In this paper, we consider a prototype of physical systems, the photon box, where we actually have the time to perform these computations in real time (see for a detailed description of this cavity quantum electrodynamics system). Taking into account the measurement-induced quantum projection postulate, the most practical measurement protocols for the purpose of feedback control are quantum non-demolition (QND) measurements; these are measurements which preserve the value of the measured observable. Indeed, by considering a well-designed QND measurement process where the quantum state to be prepared is an eigenstate of the measurement operator, the measurement process not only is not an obstacle to the state preparation, but can even help by adding some controllability. In , QND measurements are exploited to detect and/or produce highly non-classical states of light trapped in a superconducting cavity (see for a description of such QED systems and for detailed physical models of QND measurements of light using atoms). For such experimental setups, we detail and analyze here a feedback scheme that stabilizes the cavity field towards any photon-number state (Fock state). Such states are strongly non-classical since their photon numbers are perfectly defined. The control corresponds to a coherent light pulse injected inside the cavity between atom passages. The overall structure of the proposed feedback scheme is inspired by a quantum adaptation of the observer/controller structure widely used for classical systems (see, e.g., ). As the measurement-induced quantum jumps and the controlled field injection happen in a discrete-in-time manner, the observer part of the proposed feedback scheme consists of a discrete-time quantum filter. Indeed, the discreteness of the measurement process provides us with a first prototype of quantum systems where we actually have enough time to perform the quantum filtering and to compute the measurement-based feedback law to be applied by the controller. From a mathematical modeling point of view, the quantum filter evolves through a discrete-time Markov chain. The estimated state is used in a state feedback based on a Lyapunov design.
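As a rough sketch of such a discrete-time quantum filter, the snippet below updates a truncated cavity density matrix after each probe atom is detected in g or e. The dispersive-phase form of the measurement operators, the truncation and all numerical values are assumptions for illustration and may differ from the actual experimental model.

```python
import numpy as np

nmax = 10                                   # photon-number truncation (assumed)
ns = np.arange(nmax + 1)
phi = 0.4                                   # assumed dispersive phase per photon
# assumed QND Kraus operators for detection in g or e (note Mg^2 + Me^2 = I)
Mg = np.diag(np.cos(phi * (ns + 0.5))).astype(complex)
Me = np.diag(np.sin(phi * (ns + 0.5))).astype(complex)

def filter_step(rho, outcome):
    """One step of the discrete-time quantum filter: conditional (Bayesian)
    update of the cavity state given the detected atomic level."""
    M = Mg if outcome == 'g' else Me
    post = M @ rho @ M.conj().T
    return post / np.trace(post).real

def outcome_probability(rho, outcome):
    """Probability of detecting the atom in 'g' or 'e' given the current estimate."""
    M = Mg if outcome == 'g' else Me
    return np.trace(M @ rho @ M.conj().T).real
```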
Indeed, by considering a natural candidate for the Lyapunov function, we propose a feedback law which ensures the decrease of its expectation over the Markov process; therefore, the value of the considered Lyapunov function over the Markov chain defines a supermartingale. The convergence analysis of the closed-loop system is therefore based on some rather classical tools from stochastic stability analysis. One of the particular features of the system considered in this paper is a non-negligible delay in the feedback process. In fact, in the experimental setup considered throughout this paper, we have to take into account a delay of steps between the measurement process and the feedback injection. Indeed, there are constantly atoms flying between the photon box (the cavity) to be controlled and the atom detector (typically ). Therefore, in our feedback design, we do not have access to the measurement results for the last atoms. Throughout this paper, we propose an adaptation of the quantum filter, based on a stochastic version of the Smith predictor, which takes this delay into account by predicting the actual state of the system without having access to the results of the last detections. In the next section, we describe briefly the physical system and the associated quantum Monte Carlo model. In Section [sec:open-loop], we consider the dynamics of the open-loop system. We will prove, through Theorem [thm:openloop], that the QND measurement process, without any additional controlled injection, allows a non-deterministic preparation of the Fock states. Indeed, we will see that the associated Markov chain necessarily converges towards a Fock state and that the probability of converging towards a fixed Fock state is given by its population in the initial state. Also, through Proposition [prop:openlooplin], we will show that the linearized open-loop system around a fixed Fock state admits strictly negative Lyapunov exponents (see Appendix [append:lyap-exp] for a definition of the Lyapunov exponent). In Section [sec:closed-loop], we propose a Lyapunov-based feedback design allowing to globally stabilize the delayed closed-loop system around a desired Fock state. Theorem [thm:closedloop] proves the almost sure convergence of the trajectories of the closed-loop system towards the target Fock state. Also, through Proposition [prop:closedlooplin], we will prove that the linearized closed-loop system around the target Fock state admits strictly negative Lyapunov exponents. Finally, in Section [sec:filter], we propose a brief discussion of the considered quantum filter and, by proving a rather general separation principle (Theorem [thm:separation]), we will show a semi-global robustness with respect to the knowledge of the initial state of the system. Also, through a brief analysis of the linearized system-observer around the target Fock state and applying Propositions [prop:openlooplin] and [prop:closedlooplin], we show that its largest Lyapunov exponent is also strictly negative (Proposition [prop:filterlin]). A preliminary version of this paper without delay has appeared as a conference paper. The delay compensation scheme is borrowed from . The authors thank M. Brune, I. Dotsenko, S.
haroche and j.m .raimond from ens for many interesting discussions and advices .with its feedback scheme ( in green).,scaledwidth=80.0% ] as illustrated by figure [ fig : expscheme ] , the system consists in a high - q microwave cavity , a box producing rydberg atoms , and two low - q ramsey cavities , an atom detector and a microwave source .the dynamics model is discret in time and relies on quantum monte - carlo trajectories ( see ) .each time - step indexed by the integer corresponds to atom number coming from , submitted then to a first ramsey -pulse in , crossing the cavity and being entangled with it , submitted to a second -pulse in and finally being measured in .the state of the cavity is associated to a quantized mode .the control corresponds to a coherent displacement of amplitude that is applied via the micro - wave source between two atom passages . in this paperwe consider a finite dimensional approximation of this quantized mode and take a truncation to photons .thus the cavity space is approximated by the hilbert space .it admits as ortho - normal basis .each basis vector corresponds to a pure state , called fock state , where the cavity has exactly photons , . in this fock - states basisthe number operator corresponds to the diagonal matrix the annihilation operator truncated to photons is denoted by .it corresponds to the upper -diagonal matrix filled with : the truncated creation operator denoted by is the hermitian conjugate of .notice that we still have , but truncation does not preserve the usual commutation =1 ] and satisfies ,\quad \theta f(x ) + ( 1-\theta)f(y ) = \tfrac{\theta(1-\theta)}{2}(x - y)^2 + f(\theta x+ ( 1-\theta ) y ) .\ ] ] the function is increasing and convex and is a martingale .thus is sub - martingale . since we have then , together with yields to thus we recover that is a sub - martingale , .we have also shown that implies that either or ( assumption and invertible is used here ) .we apply now the invariance theorem established by kushner ( recalled in the appendix [ append : stoch - stab ] ) for the markov process and the sub - martingale .this theorem implies that the markov process converges in probability to the largest invariant subset of but the set is invariant .it remains thus to characterized the largest invariant subset denoted by and included in .take .invariance means that and belong to ( the fact that and are invertible ensures that probabilities to jump with or are strictly positive for any ) .consequently .this means that . by cauchy - schwartz inequality , with equality if , and only if , and are co - linear . non - degenerate , is necessarily a projector over an eigenstate of , i.e. , for some . since , and thus reduced to . therefore the only possibilities for the -limit set are or and the convergence in probability together with the fact that is a positive bounded ( ] .this implies that converges almost surely towards the random variable ] since will depend on and the feedback law is not causal . 
in , this feedback law is made causal by replacing by its expectation value ( average prediction ) knowing and the past controls , , : where the kraus map is defined by .we will thus consider here the following causal feedback based on an average compensation of the delay \right ) } & \mbox { if } { \text{tr}\left(\bar\rho{\rho^{\text{\tiny pred}}}_{k}\right ) } \ge \eta \\\underset{|\alpha|\le\bar\alpha}{\text{argmax } } \left ( { \text{tr}\left(\bar\rho~{{\mathbb d}}_\alpha({\rho^{\text{\tiny pred}}}_{g , k})\right)}{\text{tr}\left(\bar\rho~{{\mathbb d}}_\alpha({\rho^{\text{\tiny pred}}}_{e , k})\right)}\right ) & \mbox { if } { \text{tr}\left(\bar\rho{\rho^{\text{\tiny pred}}}_{k}\right ) } < \eta \\\end{array } \right.\ ] ] with the closed - loop system , i.e. markov chain with the causal feedback is still a markov chain but with as state at step .more precisely , denote by this state where stands for the control delayed steps .then the state form of the closed - loop dynamics reads where the control law defined by corresponds to a static state feedback since notice that .simulations displayed on figures [ fig : closedloop ] and [ fig : closedloopdelay ] correspond to 100 realizations of the above closed - loop systems with and .the goal state contains photons and , and are those used for the open - loop simulations of figure [ fig : openloop ] .each realization starts with the same coherent state and .the feedback parameters appearing in are as follows : this simulations illustrate the influence of the delay on the average convergence speed : the longer the delay is the slower convergence speed becomes .[ rem : const - cont ] the choice of the feedback law whenever might seem complicated for real - time simulation issues .however , this choice is only technical .actually , any non - zero constant feedback law will seems to achieve the task here ( see for instance the simulations of ) .however , the convergence proof for such simplified control scheme is more complicated and not considered in this paper .[ thm : closedloop ] take the markov chain with the feedback where , and are given by with .then , for small enough and , the state converges almost surely towards whatever the initial condition is ( the compact set is defined by ) .it is based on the lyapunov - type function where has already been used during the proof of theorem [ thm : openloop ] .the proof relies in 4 lemmas : * in lemma [ lem : martingale ] , we prove an inequality showing that , for small enough , and are sub - martingales within .* in lemma [ lem : kick ] , we show that for small enough , the trajectories starting within the set always reach in one step the set ; * in lemma [ lem : doobs ] , we show that the trajectories starting within the set , will never hit the set with a uniformly non - zero probability ; * in lemma [ lem : invariance ] , we combine the first step and the invariance principle due to kushner , to prove that almost all trajectories remaining inside converge towards .the combination of lemmas [ lem : kick ] , [ lem : doobs ] and [ lem : invariance ] shows then directly that converges almost surely towards .we detail now these 4 lemmas .[ lem : martingale ] for small enough and for satisfying , \right)}\right|^2\ ] ] and also \right)}\right|^2 \\ + \tfrac{p_{g , k}p_{e , k}}{2}\text{\rm\huge ( } { \text{tr}\left(\bar\rho~ { { \mathbb d}}_{\alpha_k}\circ{{\mathbb k}}_{\beta_{1,k}}\circ\cdots\circ{{\mathbb k}}_{\beta_{d-1,k } } \circ{{\mathbb m}}_g\circ{{\mathbb d}}_{\beta_{d , 
k}}(\rho_k)\right)}\\ \qquad\qquad - { \text{tr}\left(\bar\rho~ { { \mathbb d}}_{\alpha_k}\circ{{\mathbb k}}_{\beta_{1,k}}\circ\cdots\circ{{\mathbb k}}_{\beta_{d-1,k } } \circ{{\mathbb m}}_e\circ{{\mathbb d}}_{\beta_{d , k}}(\rho_k)\right)}\text{\rm\huge ) } ^2 \end{gathered}\ ] ] since and =[\bar\rho , m_e]=0 ] , we get \right)}\right|^2 + o(\epsilon^2).\end{aligned}\ ] ] thus for small enough and uniformly in \right)}\right|^2 .\ ] ] using the fact that is increasing and for any , we get \right)}\right|^2 .\ ] ] [ lem : kick ] when is small enough , any state satisfying the inequality yields a new state such that .since and are invertible , there exists ,1[ ] : for any , and are in .let us prove first that , for any if for some , the above maximum is zero , then for all ( analyticity of versus and ) : this implies that either or ( if the product of two analytic functions is zero , one of them is zero ) .take such that .we can decompose as a sum of projectors , where are strictly positive eigenvalues , ] , i.e. , and also invariance associated implies that .thus the above equality reads where we have used the fact that , for any , .then satisfies that reads , since , and , since , we recover the same condition as the one appearing at the end of the proof of theorem [ thm : openloop ] .similar invariance arguments combined with imply then . thus is reduced to .consider now the event .convergence of in probability towards means that where is any norm on the -space .the continuity of implies that , , as , we have thus and consequently , , , i.e. , the process is a bounded sub - martingale and therefore , by theorem [ thm : conv_martingale1 ] of the appendix [ append : stoch - stab ] , we know that it converges for almost all trajectories remaining in the set . calling the limit random variable , we have by dominated convergence theorem this trivially proves that almost surely and finishes the proof of the lemma . 
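stepping outside the proof for a moment, the average delay compensation that defines the causal feedback law can be written in a few lines. the sketch below is schematic: the measurement operators m_g, m_e and the displacement d(alpha) are assumed to be supplied (for instance built as in the earlier operator sketch), and the ordering of displacement and measurement inside one step is a convention adopted here for illustration rather than a statement about the experiment:

    import numpy as np

    def expected_step(rho, alpha, Mg, Me, D):
        """one outcome-averaged markov step: displace by alpha, then sum the
        measurement back-action over both detection results g and e."""
        Da = D(alpha)
        sigma = Da @ rho @ Da.conj().T
        return Mg @ sigma @ Mg.conj().T + Me @ sigma @ Me.conj().T

    def delay_compensated_prediction(rho_filter, pending_controls, Mg, Me, D):
        """average compensation of a d-step delay: the d controls already chosen,
        but whose effect has not yet appeared in the measurement record, are
        pushed through the expected kraus map before the next control is chosen."""
        rho = rho_filter
        for alpha in pending_controls:        # alpha_{k-d}, ..., alpha_{k-1}
            rho = expected_step(rho, alpha, Mg, Me, D)
        return rho / np.real(np.trace(rho))

the next control amplitude is then selected, roughly as in the delay-free law, by maximizing the overlap of the displaced prediction with the target fock state.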
around the target state the closed - loop dynamics reads ~{{\mathbb k}}_{\beta_{1,k}}\circ\cdots \circ{{\mathbb k}}_{\beta_{d , k}}(\rho_k)\right ) } \\\beta_{2,k+1 } & = \beta_{1,k } \\ & \vdots \\\beta_{d , k+1 } & = \beta_{d-1,k } .\end{aligned}\ ] ] set with small .computations based on -\delta\beta^ * [ { { \text{\bf a}}},\bar\rho ] \right ) + o(|\delta\beta|^2 ) , \\ & { { \mathbb k}}_{\delta\beta}(\bar\rho ) = { { \mathbb k}}_0(\bar\rho)+ \cos{\vartheta}\left(\delta\beta [ { { \text{\bf a}}}^\dag,\bar\rho]-\delta\beta^ * [ { { \text{\bf a}}},\bar\rho ] \right ) + o(|\delta\beta|^2 ) , \\ & { { \mathbb k}}_0(\bar\rho)=\bar\rho , \quad { { \mathbb k}}_0 ( [ { { \text{\bf a}}}^\dag,\bar\rho ] ) = \cos{\vartheta}~ [ { { \text{\bf a}}}^\dag,\bar\rho ] , \quad { { \mathbb k}}_0 ( [ { { \text{\bf a}}},\bar\rho ] ) = \cos{\vartheta}~ [ { { \text{\bf a}}},\bar\rho ] , \\ & { \text{tr}\left([{{\text{\bf a}}},\bar\rho][{{\text{\bf a}}}^\dag,\bar\rho]\right ) } = -(2\bar n+1 ) \quad \text{and } \quad { \text{tr}\left([{{\text{\bf a}}},\bar\rho]^2\right)}=0,\end{aligned}\ ] ] yield the following linearized closed - loop system - \delta\beta_{d , k}^ * [ { { \text{\bf a}}},\bar\rho ] \right ) a_{s_k}^\dag - { \text{tr}\left(a_{s_k}\delta\rho_k a_{s_k}^\dag\right)}\bar\rho \\\delta \beta_{1,k+1 } & = -\epsilon ( 2\bar n + 1 ) \left(\sum_{j=1}^{d } \cos^j\!{\vartheta } ~\delta\beta_{j , k}\right ) + \epsilon \cos^d\!{\vartheta } ~{\text{tr}\left(\delta \rho_k [ { { \text{\bf a}}},\bar\rho]\right ) } \\ \delta\beta_{2,k+1 } & = \delta\beta_{1,k } \\ & \vdots \\\delta\beta_{d , k+1 } & = \delta\beta_{d-1,k } \end{array}\ ] ] where , the random matrices are given by with probability and with probability .set for any . since , we exclude here the case because .when does not belong to , we recover the open - loop linearized dynamics : where ( resp . ) with probability ( resp . ) and where and .a direct adaptation of the proof of proposition [ prop : openlooplin ] shows that the largest lyapounov exponent of this dynamics is strictly negative and given by for , we just have to consider and since is hermitian .set .we deduce from that the process is governed by where ( resp . ) with probability ( resp . ) and take to be defined later , set ,1[$ ] and consider a direct computation exploiting yields thus take , then because , for small enough ( ) the norm is a super - martingale converging exponentially almost surely towards zero .thus the largest lyapunov exponent of the linear markov chain is strictly negative . 
to conclude, we have proved the following proposition : [ prop : closedlooplin ] consider the linear markov chain .for small enough , its largest lyapunov exponent is strictly negative .the feedback law requires the knowledge of .when the measurement process is fully efficient and the jump model admits no error , the markov system represents a natural choice for the quantum filter to estimate the value of .indeed , we define the estimator satisfying the dynamics note that , similarly to any observer - controller structure , the jump result , or , is the output of the physical system but the feedback control is a function of the estimator .indeed , is defined as in : \right ) } & \mbox { if } { \text{tr}\left(\bar\rho{\rho^{\text{\tiny pred , est}}}_{k}\right ) } \ge \eta \\\underset{|\alpha|\le\bar\alpha}{\text{argmax } } \left ( { \text{tr}\left(\bar\rho~{{\mathbb d}}_\alpha({\rho^{\text{\tiny pred , est}}}_{g , k})\right)}{\text{tr}\left(\bar\rho~{{\mathbb d}}_\alpha({\rho^{\text{\tiny pred , est}}}_{e , k})\right)}\right ) & \mbox { if } { \text{tr}\left(\bar\rho{\rho^{\text{\tiny pred , est}}}_{k}\right ) }< \eta \\ \end{array } \right.\]]where the predictor s state is defined as follows : we will see through this section that , even if do not have any a priori knowledge of the initial state of the physical system , the choice of the feedback law through the above quantum filter can ensure the convergence of the system towards the desired fock state .indeed , we prove a semi - global robustness of the feedback scheme with respect to the choice of the initial state of the quantum filter . before going through the details of this robustness analysis ,let us illustrate it through some numerical simulations .in the simulations of figure [ fig : filter1 ] , we assume no a priori knowledge on the initial state of the system .therefore , we initialize the filter equation at the maximally mixed state . computing the feedback control through the above quantum filter and injecting it to the physical system modeled by , the fidelity ( with respect to the target fock state ) of the closed - loop trajectories of the physical system are illustrated in the first plot of figure [ fig : filter1 ] . the second plot of this figure , illustrate the frobenius distance between the estimator and the physical state .as one can easily see , one still have the convergence of the quantum filter and the physical system to the desired fock state ( here ) . through these simulations , we have considered the same measurement and control parameters as those of section [ sec : closed - loop ] .the system is initialized at the coherent state while the quantum filter is initialized at . versus for 100 realizations of the closed - loop markov process with feedback based on the quantum filter starting from the same state with -step delay ( ) .the initial state of the physical system is given by .the ensemble average over these realizations corresponds to the thick red curve ; ( second plot ) the frobenius distance between the estimator and ( ) for 100 realizations .the ensemble average over these realizations corresponds to the thick red curve . 
] versus for 100 realizations of the closed - loop markov process with feedback based on the quantum filter starting from the same state with -step delay ( ) .the initial state of the physical system is given by .the ensemble average over these realizations corresponds to the thick red curve ; ( second plot ) the frobenius distance between the estimator and ( ) for 100 realizations .the ensemble average over these realizations corresponds to the thick red curve . ] through the next subsection , we establish a sort of separation principle implying this semi - global robustness of the closed - loop system with respect to the initial state of the filter equation . also through the short subsection [ ssec : filter - rate ] we provide a heuristic analysis of the local convergence rate of the filter equation around the target fock state .we consider the joint system - observer dynamics defined for the state : we have the following result , a quantum version of the separation principle ensuring asymptotic stability of observer / controller from stability of the observer and of the controller separatly . [ thm : separation ] consider any closed - loop system of the form , where the feedback law is a function of the quantum filter : .assume moreover that , whenever ( so that the quantum filter coincides with the closed - loop dynamics ) , the closed - loop system converges almost surely towards a fixed pure state .then , for any choice of the initial state , such that , the trajectories of the system converge almost surely towards the same pure state : . [ rem : ker ] one only needs to choose , so that the assumption is satisfied for any .the basic idea is based on the fact that ( where we take the expectation over all jump realizations ) depends linearly on even though we are applying a feedback control .indeed , the feedback law depends only on the historic of the quantum jumps as well as the initialization of the quantum filter .therefore , we can write where denotes the sequence of first jumps . finally , through simple computations , we have where , we easily have the linearity of with respect to . 
at this point, we apply the assumption and therefore , one can find a constant and a well - defined density matrix in , such that now , considering the system initialized at the state , we have by the assumptions of the theorem and applying dominated convergence theorem : by the linearity of with respect to , we have and as both and are less than or equal to one , we necessarily have that both of them converge to 1 : this implies the almost sure convergence of the physical system towards the pure state .let us linearize the system - observer dynamics around the equilibrium state + .set with small , and hermitian and of trace 0 .we have the following dynamics for the linearized system ( adaptation of ): - \delta\beta_{d , k}^ * [ { { \text{\bf a}}},\bar\rho ] \right ) a_{s_k}^\dag - { \text{tr}\left(a_{s_k}\delta\rho_k a_{s_k}^\dag\right)}\bar\rho\\ \delta{\rho^{\text{\tiny est}}}_{k+1}&= a_{s_k } \left ( \delta{\rho^{\text{\tiny est}}}_k+ \delta\beta_{d , k } [ { { \text{\bf a}}}^\dag,\bar\rho ] - \delta\beta_{d , k}^ * [ { { \text{\bf a}}},\bar\rho ] \right ) a_{s_k}^\dag - { \text{tr}\left(a_{s_k}\delta{\rho^{\text{\tiny est}}}_k a_{s_k}^\dag\right)}\bar\rho \\ \delta \beta_{1,k+1 } & = -\epsilon ( 2\bar n + 1 ) \left(\sum_{j=1}^{d } \cos^j\!{\vartheta } ~\delta\beta_{j , k}\right ) + \epsilon \cos^d\!{\vartheta } ~{\text{tr}\left(\delta { \rho^{\text{\tiny est}}}_k [ { { \text{\bf a}}},\bar\rho]\right ) } \\\delta\beta_{2,k+1 } & = \delta\beta_{1,k } \\ & \vdots \\\delta\beta_{d , k+1 } & = \delta\beta_{d-1,k } \end{array}\ ] ] where , the random matrices are given by with probability and with probability . at this point, we note that by considering , we have the following simple dynamics : indeed , as the same control laws are applied to the quantum filter and the physical system , the difference between and follows the same dynamics as the linearized open - loop system .but , we know by the proposition [ prop : openlooplin ] that this linear system admits strictly negative lyapunov exponents . this triangular structure , together with the convergence rate analysis of the closed - loop system in proposition [ prop : closedlooplin ] , yields the following propositionwhose detailed proof is left to the reader : [ prop : filterlin ] consider the linear markov chain . for small enough ,its largest lyapunov exponent is strictly negative .we have analyzed a measurement - based feedback control allowing to stabilize globally and deterministically a desired fock state . 
in this feedback design ,we have taken into account the important delay between the measurement process and the feedback injection .this delay has been compensated by a stochastic version of a smith predictor in the quantum filtering equation .in fact , the measurement process of the experimental setup admits some other imperfections .these imperfections can , essentially , be resumed to the following ones : 1- the atom - detector is not fully efficient and it can miss some of the atoms ( about 20% ) ; 2- the atom - detector is not fault - free and the result of the measurement ( atom in the state or ) can be inter - changed ( a fault rate of about 10% ) ; 3- the atom preparation process is itself a stochastic process following a poisson law and therefore the measurement pulses can be empty of atom ( a pulse occupation rate of about 40% ) .the knowledge of all these rates can help us to adapt the quantum filter by taking into account these imperfections .this has been done in , by considering the bayesian law and providing numerical evidence of the efficiency of such feedback algorithms assuming all these imperfections . 10 v.p .belavkin . quantum stochastic calculus and quantum nonlinear filtering ., 42(2):171201 , 1992 .braginskii and y.i .quantum - mechanical limitations in macroscopic experiments and modern experimental technique . , 17(5):644650 , 1975 .m. brune , s. haroche , j .-raimond , l. davidovich , and n. zagury .manipulation of photons in a cavity by dispersive atom - field coupling : quantum - nondemolition measurements and gnration of `` schrdinger cat '' states ., 45(7):51935214 , 1992 .s. delglise , i. dotsenko , c. sayrin , j. bernu , m. brune , j .- m .raimond , and s. haroche .reconstruction of non - classical cavity field states with snapshots of their decoherence ., 455:510514 , 2008 .doherty and k. jacobs .feedback control of quantum systems using continuous state estimation ., 6:27002711 , 1999 .i. dotsenko , m. mirrahimi , m. brune , s. haroche , j .-raimond , and p. rouchon .quantum feedback by discrete quantum non - demolition measurements : towards on - demand generation of photon - number states ., 80 : 013805 - 013813 , 2009 .deterministic and nondestructively verifiable preparation of photon number states . , 97(073601 ) , 2006 .s. gleyzes , s. kuhr , c. guerlin , j. bernu , s. delglise , u. busk hoff , m. brune , j .- m .raimond , and s. haroche .quantum jumps of light recording the birth and death of a photon in a cavity . , 446:297300 , 2007 . c. guerlin , j. bernu , s. delglise , c. sayrin , s. gleyzes , s. kuhr , m. brune , j .-raimond , and s. haroche .progressive field - state collapse and quantum non - demolition photon counting ., 448:889893 , 2007 .r. van handel , j. k. stockton , and h. mabuchi .feedback control of quantum state reduction ., 50:768780 , 2005 .s. haroche and j.m .oxford university press , 2006 .t. kailath . .prentice - hall , englewood cliffs , nj , 1980 .kushner . .holt , rinehart and wilson , inc . , 1971 .liptser and a.n .shiryayev . .springer - verlag , 1977 .m. mirrahimi , i. dotsenko , and p. rouchon .feedback generation of quantum fock states by discrete qnd measures . in _ decision and control ,2009 held jointly with the 2009 28th chinese control conference .cdc / ccc 2009 .proceedings of the 48th ieee conference on _ , pages 1451 1456 , 2009 .m. mirrahimi and r. van handel . stabilizing feedback controls for quantum systems . 
, 46(2):445467 , 2007 .closer control of loops with dead time ., 53(5):217219 , 1958 .thorne , r.w.p .drever , c.m .caves , m. zimmermann , and v.d . sandberg .quantum nondemolition measurements of harmonic oscillators ., 40:667671 , 1978 .p. tombesi and d. vitali .macroscopic coherence via quantum feedback ., 51:49134917 , 1995 .analysis of quantum - nondemolition measurement ., 18:17641772 , 1978 .quantum theory of continuous feedback ., 49:213350 , 1994 .we recall here the doob s first martingale convergence theorem , the doob s inequality and the kushner s invariance theorem . for detailed discussions and proofs we refer to (chapter 2 ) and ( sections 8.4 and 8.5 ) .the following theorem characterizes the convergence of bounded martingales : [ thm : conv_martingale1 ] let be a markov chain on state space and suppose that this is is a submartingale .assume furthermore that ( is the positive part of ) then ( ) exists with probability , and .now , we recall two results that are often referred as the stochastic versions of the lyapunov stability theory and the lasalle s invariance principle .[ thm : doob ] let be a markov chain on state space .suppose that there is a non - negative function satisfying where on the set .then for the statement of the second theorem , we need to use the language of probability measures rather than the random processes .therefore , we deal with the space of probability measures on the state space .let be the initial probability distribution ( everywhere through this paper we have dealt with the case where is a dirac on a state of the state space of density matrices ) .then , the probability distribution of , given initial distribution , is to be denoted by .note that for , the markov property implies : [ thm : kushner ] consider the same assumptions as that of the theorem [ thm : doob ] .let be concentrated on a state ( being defined as in theorem [ thm : doob ] ) , i.e. .assume that in implies that .under the conditions of theorem [ thm : doob ] , for trajectories never leaving , converges to almost surely .also , the associated conditioned probability measures tend to the largest invariant set of measures whose support set is in .finally , for the trajectories never leaving , converges , in probability , to the support set of .consider a discrete - time linear stochastic system defined on by where is a random matrix taking its values inside a finite set with a stationary probability distribution for over .then for different initial states , may take at most values which are called the lyapunov exponents of the linear stochastic system .
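as a purely numerical illustration of this last definition, the largest lyapunov exponent of such a random linear recursion x_{k+1} = a_{s_k} x_k can be estimated by averaging the growth rate of the norm along one long trajectory. the two 2 x 2 matrices and the probabilities below are arbitrary placeholders, not quantities taken from the cavity model:

    import numpy as np

    rng = np.random.default_rng(0)

    # placeholder finite family of matrices and their stationary probabilities
    A = [np.array([[0.9, 0.2], [0.0, 0.7]]),
         np.array([[0.6, 0.0], [0.3, 0.8]])]
    p = [0.5, 0.5]

    def largest_lyapunov_exponent(A, p, steps=100_000):
        """estimate lim (1/k) log ||x_k|| along one trajectory, renormalizing
        the state at every step to avoid under- or overflow."""
        x = rng.standard_normal(A[0].shape[0])
        log_growth = 0.0
        for _ in range(steps):
            x = A[rng.choice(len(A), p=p)] @ x
            nrm = np.linalg.norm(x)
            log_growth += np.log(nrm)
            x /= nrm
        return log_growth / steps

    print(largest_lyapunov_exponent(A, p))   # negative here: almost-sure decay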
we study a feedback scheme to stabilize an arbitrary photon number state in a microwave cavity. the quantum non-demolition measurement of the cavity state allows a non-deterministic preparation of fock states. here, by means of a controlled field injection, we aim to make this preparation process deterministic. the system evolves through a discrete-time markov process, and we design the feedback law using lyapunov techniques. moreover, our feedback design takes into account an unavoidable pure delay, which we compensate by a stochastic version of a smith predictor. after illustrating the efficiency of the proposed feedback law through simulations, we provide a rigorous proof of the global stability of the closed-loop system based on tools from stochastic stability analysis. a brief study of the lyapunov exponents of the linearized system around the target state gives a strong indication of the robustness of the method.
recent advances in digital microfluidic ( dmf ) biochips have enabled realizations of a variety of laboratory assays on a tiny chip for automatic and reliable analysis of biochemical samples .a dmf biochip consists of a patterned 2d array or a customized layout of electrodes , typically a few square centimeters in size .the device can manipulate pico- or femtoliter - sized discrete droplets for the purpose of conducting various fluidic operations under electrical actuations .typical fluidic operations on a droplet include dispensing , transport , mixing , splitting , heating , incubation , and sensing .dmf biochips offer significant flexibility and programmability over their continuous - flow counterparts while implementing various assays that mandate high - sensitivity , and low requirement of sample and reagent consumption .one such example is sample preparation , which plays a pivotal role in biochemical laboratory protocols , e.g. , in polymerase chain reaction ( pcr ) , and in other applications in biomedical engineering and life sciences .an important step in sample preparation is dilution , where the objective is to prepare a fluid with a desired concentration ( or dilution ) factor .there are two performance metrics in sample preparation : the number of _ mix - split _ operations to achieve a concentration factor with a specified accuracy , and the overall reactant usage ( equivalently , waste production ) .the first parameter determines the sample preparation time , whereas the latter is related to the cost of stock solution .an efficient sample preparation algorithm should target to minimize either one or both of them as far as possible . in sample preparation , producing chemical and biomolecular concentration gradients is of particular interest .dilution gradients play essential roles in in - vitro analysis of many biochemical phenomena including growth of pathogens and selection of drug concentration .for example , in drug design , it is important to determine the minimum amount of an antibiotic that inhibits the visible growth of bacteria isolate ( defined as minimum inhibitory concentration ( mic ) ) .the drug with the least concentration factor ( i.e. , with highest dilution ) that is capable of arresting the growth of bacteria , is considered as mic . during the past decade , a variety of automated bacterial identification and antimicrobial susceptibility test systems have been developed , which provide results in only few hours rather than days , compared to traditional overnight procedures .typical automated susceptibility methods use an exponential dilution gradient ( e.g. , ) in which concentration factors ( ) of the given sample are in geometric progression .linear dilution gradient ( e.g. , ) , in which the concentration factors of the sample appear in arithmetic progression , offers more sensitive tests .linear gradients are usually prepared by using continuous - flow microfluidic ladder networks , or by other networks of microchannels , .since the fluidic microchannels are hardwired , continuous - flow based diluters are designed to cater to only a pre - defined gradient , and thus they suffer from inflexibility and non - programmability .also , these methods require a significant amount of costly stock solutions .in contrast , on a dmf biochip platform , a set of random dilution factors can be easily prepared . however, existing algorithms fail to optimize the cost or performance when a certain gradient pattern is required . 
in digital microfluidics ,two types of dilution methods are used : serial dilution and interpolated dilution .a serial dilution consists of a sequence of simple dilution steps to reduce the concentration of a sample .the source of the dilution sample for each step comes from the diluted sample of the previous step .a typical serial procedure generates an _ exponential dilution _ profile , in which , a unit volume sample / reagent droplet is mixed with a unit - volume buffer droplet to obtain two unit - volume droplets of half the concentration .if a sample / reagent is recursively diluted by a buffer solution , then the of the sample / reagent becomes after steps of mixing and balanced splitting . in each _interpolated dilution _step , two unit - volume droplets with and are mixed to obtain two droplets with .both the dilution methods produce concentration factors whose denominators are integral power of two .thus , in the _ mix - split _ model , the values are approximated ( rounded off ) as an -bit binary fraction , i.e. , as , where , ; , determines the required accuracy ( maximum error in ) of the target concentration factor . in this work ,we present for the first time , an algorithm to generate any arbitrary linear gradient , on - chip , with minimum wastage , while satisfying a required accuracy in concentration factors .our algorithm utilizes the underlying combinatorial properties of a linear gradient in order to generate the target set .the corresponding layout design of the biochip is also proposed .we prove theoretical results on maximum storage requirement in the layout .simulation results on different linear gradients show a significant improvement in sample cost over three algorithms that were used earlier for the generation of multiple concentration factors .in early days , dilutions were obtained by manually measuring and dispensing solutions with a pipette . with the advent of continuous - flow microfluidic ( cmf ) biochips ,dilution gradients were prepared based on diffusive mixing of two or more streams .the degree of diffusion can be regulated by the flow rate or by channel dimensions .designs of such gradient generators on a cmf biochip were proposed by walker et al . and oneill et al . .the flow rates were adjusted by controlling the channel length , which is proportional to fluidic resistance in each channel .serial dilution cmf biochips for monotonic and arbitrary gradient were also reported by lee et al .dertinger et al .have shown how a complex cmf network of microchannels designed for diffusive mixing can be used to generate linear , parabolic , or periodic dilution gradients . recently , design of a 2d combinatorial dilution gradient generator has been reported based on a tree - type microchannel structure and an active injection system . 
although continuous - flow devices are found to be adequate for many biochemical applications , they are less suitable for tasks requiring a high degree of flexibility or programmable fluid manipulations .these closed - channel systems are inherently difficult to integrate or scale because the parameters that govern fluid flow depend on the properties of the entire system .thus , permanently etched microstructures suffer from limited reconfigurability and poor fault tolerance .a dmf biochip typically manipulates discrete fluid droplets on a uniform array of identical electrodes .thus the volume of a merged droplet is usually an integral multiple of that of a single droplet ( unit volume ) .it is a challenge to achieve a desired concentration factor using the fewest number of _ mix - split _ steps with minimum number of _ waste _ droplets .a single - target mixing algorithm based on bit - scanning ( _ bs _ ) method was proposed by thies et al . considering the mixing model . in the special case of diluting a sample , the _ bs_ method first represents the target as an -bit binary string depending on the required accuracy of ; it then scans the bits from right - to - left to decide on the sequence of _ mix - split _ steps , i.e. , whether the sample or the buffer droplet is to be mixed with the most recently produced droplet . as an example , any path from the root to a leaf node in fig .[ fig : tsung ] represents an execution sequence of the _ bs _ method .however , it produces one _ waste _ droplet at each _ mix - split _ step except the last one . in order to achieve a target with a maximum error of ,the dilution process is to be repeated through at most _ mix - split _ steps .thus , depending on the required accuracy level of the target concentration , the value of is chosen .the _ dmrw _method generates a single dilution of a sample using a binary search method that reduces the number of _ waste _ droplets significantly compared to the _ bs _ method by reusing the intermediate droplets . recently , a reagent - saving mixing algorithm for preparing multiple target concentrations was proposed by hsieh et al .for example , the dilution tree for the target set is shown in fig .[ fig : tsung ] , where the a black dot represents an available droplet ( output or waste ) .recently , another method for generating droplets with multiple without using any intermediate storage is reported by mitra et al .based on de bruijn graphs .method was generalized for producing multiple with reduced _ mix - split _ and _ waste _ droplets .if multiple target droplets with the same are required in a protocol , a dilution engine can be used .a reactant minimizing multiple dilution sample preparation algorithm was reported by huang et al .gradients play essential roles in studying many biochemical phenomena in - vitro , including the growth of pathogens and efficacy of drugs . among various types of dilution profiles ,linear gradient is most widely used for biochemical analysis .several sample preparation methods are available that can be used for generating specified gradients . motivated by an example described by brassard et al . , we present an algorithm for producing any arbitrary linear dilution gradient with minimum wastage ( reagent consumption is minimum ) . 
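a small python sketch may help fix ideas about the (1:1) mix-split arithmetic recalled above. a target cf is first rounded to the nearest n-bit binary fraction x/2^n (error at most 1/2^(n+1)); the bit-scanning idea then reaches that value by scanning the bits from the least significant one, each step mixing the current droplet with a fresh sample or buffer droplet. seeding the path with a buffer droplet, as done below, is one simple way to realize the scan and is not necessarily the exact variant of the published bs algorithm; the 8-bit target is a made-up example:

    from fractions import Fraction

    def approximate_cf(c, n):
        """round a target concentration factor to the nearest n-bit binary
        fraction x / 2**n, the form reachable by (1:1) mix-split operations."""
        return Fraction(round(c * 2**n), 2**n)

    def bit_scan_dilution(bits):
        """scan the n-bit expansion b1 ... bn (msb first) from right to left,
        mixing the current droplet 1:1 with sample (bit 1) or buffer (bit 0)
        and keeping one half of each balanced split."""
        cf = Fraction(0)                       # seed the path with a buffer droplet
        for b in reversed(bits):
            cf = (cf + (1 if b else 0)) / 2    # every step but the last leaves one waste droplet
        return cf

    print(approximate_cf(0.3, 10))                          # 307/1024, error < 1/2**11
    bits = [0, 1, 0, 0, 1, 1, 0, 1]                         # made-up 8-bit target 77/256
    print(bit_scan_dilution(bits) == Fraction(77, 256))     # True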
to illustrate the proposed algorithm we assume that the two boundary concentrations ( first and last of the target sequence ) are available .if droplets with the two boundary are not supplied , we can prepare them by diluting the original sample with a buffer following an earlier algorithm , . a simple observation that motivates us to design the proposed linear gradient generator is the following : mixing two non - consecutive , which are separated by an odd number of elements of the gradient sequence , produces the median of the two concentrations .this special property follows from the simple fact that the values in the linear gradient sequence are in arithmetic progression .this property is used to design our algorithm for producing the gradient with no wastage . moreover ,only the concentrations that are elements of the gradient set will be generated during this process .the problem of linear dilution sample preparation can be formulated as follows .let be a linear gradient of targets to be generated from and , i.e. , .our objective is to generate all values of without generating any waste droplets ; we assume that a sufficient supply of boundary concentrations ( and ) is available .the process of generating the target satisfying a linear dilution gradient can be envisaged as a tree structure called linear dilution tree ( ldt ) , as described below .return a linear dilution tree ( _ ldt _ ) is a complete binary search tree having nodes , where each node represents a value in the target set , where .thus , the tree will have a depth of , where the root is assumed to be at depth .algorithm 1 builds _ ldt _ from the input target set , on which algorithm 2 described below , will be run to produce the droplets in the target set . return as an illustration , let us consider . let be a linear gradient of targets to be generated from and , i.e. , .the corresponding ( _ ldt _ ) is shown in fig .[ fig : ldt ] , which is generated by algorithm 1 .we traverse the tree in depth - first order and produce the droplets in a post - order mixing sequence .we assume that the two boundary and are supplied .initially , we generate two droplets with by mixing one droplet of and each ( represented as the root in fig . [fig : ldt ] ) .one of these droplets is stored and the other one is mixed with to produce two droplets of . again , one of them is stored and the other one is mixed with to generate two droplets of ( leftmost leaf ) , out of which one droplet is sent to the output and the other one is stored .next , the two droplets with and , which were stored in the first two steps , are mixed to produce two droplets of .one of them is sent to the output ; the remaining one is mixed with the one with stored in the third step .this step regenerates two droplets with , which were consumed in earlier steps .one of them is stored again and now the other one is transported to the output .similar _ mix - split _ sequences are performed on the right half of _ ldt _ in post - order fashion , and finally , two droplets of ( represented as the root ) are regenerated by mixing with .it may be observed that _ no waste droplet _ is produced for generating the entire linear dilution sequence .only one droplet for every non - boundary value in the gradient is produced , excepting the median one , for which two droplets are produced .the following observations are now immediate .the droplets with boundary are used only along the leftmost and the rightmost root - to - leaf path in _ ldt_. 
[ obs:1 ] the droplets with the values corresponding to each internal node of _ ldt _ are used in subsequent mixing operations after their production and are regenerated later for replenishment .[ obs:2 ] the number of copies of each droplet generated at depth during the process is when , and is when ( leaf node ) , where .[ lem:1 ] we proof the lemma using induction on , i.e. , the target set size .basis : for , ; in this case , we need to generate values from boundary concentrations . fig .[ fig : tree_structure ] shows the linear dilution tree . the total number of droplets corresponding to the median target ( at the root , depth ) is and target concentration at depth is 2 each .hence the lemma [ lem:1 ] is true for .induction hypothesis : assume the statement is true for all .inductive steps : consider the target set of size i.e. , .one can split into three parts : that contains the first targets of i.e. , ; ; .the elements in can be generated by using and as boundary targets .similarly , those in can be generated by using and as boundary targets .one can easily generate by using and as boundary targets ( see fig .[ fig : tree_structure ] ) . by induction hypothesis ,the number of each droplet generated during the process at depth of and is when , and is when . ignoring the regeneration part, the number of each droplet generated during the process at depth of and is when , and is at depth . from observation [ obs:1 ] it follows that is used only in the rightmost path of and in the leftmost path of as shown in fig .[ fig : tree_structure ] . by inductive hypothesis ,the total number of droplets generated ( ignoring the regeneration part ) is .hence , the required number of droplets is .since the number of regenerated droplets is 2 , the total number of droplets generated at the root ( depth ) is .this completes the proof .the number of each boundary droplet required is for , where .[ lem:2 ] from observation [ obs:1 ] it follows that boundary droplets are needed only for the nodes lying on the leftmost and rightmost paths of _ldt_. note that the regeneration process for an internal node does not require any boundary droplet .so the number of droplets generated excluding regeneration , is at depth and is at depth , along the left- or rightmost path in _ ldt_.the total number of droplets along these paths is .hence , the total number of required boundary droplets will be given by algorithm 2 generates a linear dilution gradient ) in mix - split steps without producing any waste droplets , when droplets of each boundary are supplied .[ thm:1 ] the _ ldt _ has nodes including leaf nodes .each leaf node requires only one _ mix - split _ operation . by lemma [ lem:1 ]the number of each droplet generated at depth is for , where the constant 2 accounts for its regeneration from its two children .regeneration requires _ mix - split _ steps .hence the total number of _ mix - split _ operations will be the fact that no waste droplet is generated in this process follows easily by counting the number of input droplets ( lemma 2 ) and the output droplets .[ obs:3 ] the values of the gradient excluding the two boundary appear at the output of the generator in accordance to the post - order traversal sequence of _ldt_. 
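the arithmetic that makes the linear dilution tree work can be checked with a short recursion: the 1:1 mix of the two cfs bracketing any subtree is exactly the median target of that subtree, so every internal value is produced from droplets already in the gradient, without touching sample or buffer. the sketch below only verifies this arithmetic; the droplet-level bookkeeping (storage, regeneration and the post-order emission order) discussed above is deliberately left out, and the 7-element gradient is a made-up example:

    from fractions import Fraction

    def check_ldt_mixing(lo, hi, targets):
        """recursively mix the two bracketing cfs and recurse on the halves;
        with 2**d - 1 equally spaced targets every subtree again has an odd
        number of nodes, so the median property holds at every level."""
        if not targets:
            return []
        mid = len(targets) // 2
        median = (lo + hi) / 2                 # a single 1:1 mix of two existing droplets
        assert median == targets[mid]          # holds because the gradient is arithmetic
        return ([median]
                + check_ldt_mixing(lo, median, targets[:mid])
                + check_ldt_mixing(median, hi, targets[mid + 1:]))

    lo, hi = Fraction(1, 8), Fraction(7, 8)                   # supplied boundary cfs (made-up values)
    targets = [lo + k * (hi - lo) / 8 for k in range(1, 8)]   # 2**3 - 1 interior targets
    assert sorted(check_ldt_mixing(lo, hi, targets)) == targets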
the following theorem provides an upper bound on the storage requirement during gradient generation .algorithm 2 requires at most storage electrodes at any instant of time , where .[ thm:2 ] we proof the lemma using induction on .basis : for , .one needs to generate three from boundary concentrations .it is easy to check that we require at most 2 intermediate storage elements in this case .hence the theorem is true for .inductive hypothesis : assume the statement is true for .inductive steps : consider the target set of size .one can split into three parts : that contains the first targets of , i.e. , ; that contains the median target of ; ( see fig .[ fig : tree_structure ] ) . by inductive hypothesis, the left subtree requires storage .additionally , we need to store one droplet of ( ) corresponding to the root .so , a total of storage is required in order to generate all the on the left subtree .when we generate the target set for the right subtree , we need to store the root of the left subtree for regeneration purpose . by analogous argument, we can claim the right subtree requires storage .hence , the total number of storage required is for a linear dilution tree of size . with multiple demand ] in order to produce a dilution gradient of size , we need to supply droplets for each boundary ( lemma [ lem:2 ] ) . here, we demonstrate how a low - cost dilution engine can be integrated on - chip for this purpose .we will illustrate the technique using the following example .let be a boundary .the corresponding dilution tree is shown in the fig .[ fig : dil - engine ] .each root - to - leaf path represents a sequence of _ mix - split _ operations needed to generate the droplet by applying the _ bs _ method .one can store the intermediate droplets into a stack and generate two target droplets each time by repeatedly mixing the droplet on top of the stack with either sample or buffer as needed .the dilution tree in fig .[ fig : dil - engine ] has target droplets generated therein . to admit a maximum error of ,an -depth dilution tree would suffice , and hence the number of storage elements needed to produce multiple droplets with the given will be at most . if the number of elements in the gradient is not of the form , the above procedure needs certain modification .let denote the -bit binary representation of and denote the number of s between leftmost and rightmost in it . to illustrate the modification ,let us assume that the target set be , i.e. , .we consider another target set of size .[ fig : partial - tree ] shows the dilution tree for , where the extra part of the tree is not generated ( shown as dotted ) .clearly , , i.e. , .now , and the number of _ waste droplets _ is equal to ( ) .the following theorem can be proved easily . .25 ( b)-architectural layout , title="fig : " ] .25 ( b)-architectural layout , title="fig : " ] the number of _ waste droplets _produced while generating is equal to , where . [ thm:3 ]an architectural layout for producing a linear dilution gradient is shown in fig .[ fig : arch ] .if boundary other than and are needed , a dilution engine can be used for generating them .we provide two dilution engines for generating the boundary , which can be run in parallel to reduce sample preparation time .each dilution engine is equipped with a stack of storage droplets in order to increase the throughput of boundary droplets and to reduce the number of _ waste _ droplets .the detailed layout of the dilution engine can be found elsewhere . 
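the stack-based dilution engine sketched above can also be prototyped in a few lines. the schedule below is only one possible realization of the reuse idea (store the spare droplet of every mix-split step, then pop the deepest stored intermediate and redo the remaining steps), not necessarily the exact schedule of the cited engine; it emits two target droplets per request and, by construction, wastes nothing as long as stored intermediates remain (up to 2^(n-1) pairs for an n-bit target):

    from fractions import Fraction

    def dilution_engine(bits, n_pairs):
        """produce n_pairs pairs of droplets of the target cf encoded by the
        n-bit string `bits` (msb first). the spare droplet of every mix-split
        step is pushed on a stack instead of being discarded; later requests
        pop the deepest stored intermediate and redo only the remaining steps."""
        n = len(bits)
        pure = lambda b: Fraction(1 if b else 0)     # fresh sample or buffer droplet
        stack = [(0, Fraction(0))]                   # the seed buffer droplet, depth 0
        out, pure_used = [], 0
        while len(out) < 2 * n_pairs:
            depth, cf = stack.pop()
            for i in range(depth, n):
                cf = (cf + pure(bits[n - 1 - i])) / 2    # 1:1 mix, balanced split
                pure_used += 1
                if i < n - 1:
                    stack.append((i + 1, cf))            # store the spare droplet
            out += [cf, cf]                              # last split: both halves are targets
        return out, pure_used

    bits = [0, 1, 0, 0, 1, 1, 0, 1]                      # made-up 8-bit target 77/256
    out, used = dilution_engine(bits, 4)
    print(all(cf == Fraction(77, 256) for cf in out), used)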
to generate the gradient part, we use one mix - split module and additional storage cells ( theorem [ thm:2 ] ) .thus , to produce a gradient of size , with a maximum error of in each of the target , one needs a total of storage electrodes .the overall execution time for generating the gradient can further be minimized by adopting a scheduling algorithm for the best utilization of resources .we have performed extensive simulation on various target sets ( table [ tab : ts ] ) and calculated the number of -_mix - split _ steps and _ waste _ droplets .we have compared our results with the methods of mitra et al . , hsieh et al . , and huang et al . .the results are shown in table [ tab : mix - split - and - waste ] . the number of _ waste _ droplets in _ ldt _ for the proposed method is shown within parentheses in table [ tab : mix - split - and - waste ] along with total _ mix - split _ steps ( ) and _ waste _ ( ) droplets .[ tab : ts ] [ tab : mix - split - and - waste ] .25 .25 in our experiments , we have considered 6 different linear dilution gradient sets of size for . for each , we have chosen random sets in the range of and , assuming .hence the error in concentration factors will be at most . assuming sample and buffer as the two boundary , we have counted the total number of -_mix - split _ steps and _ waste _ droplets considering both the dilution engines and the gradient generator .comparative results with respect to earlier methods are shown as histograms in fig .[ fig : test ] , where the horizontal axis indicates the size of the target set ( ) for , and the vertical axis represents the average number of _ mix - split _ steps and _ waste _ droplets required in these methods .note that the most of the _ waste _ droplets that are generated in our method correspond to those produced by the dilution engines .we observe that our method produces a significantly fewer number of _ waste _ droplets compared to all the three earlier methods .further , the proposed method performs better in terms of the number of _ mix - split _ steps up to a target set of size , i.e. , up to .we have presented an algorithm for generating linear dilution gradients on a digital microfluidic platform .when the boundary concentration factors of a gradient of size are supplied , our method produces the rest without generating any waste droplet , thereby saving costly stock solutions . for other gradient sizes, it produces only a few waste droplets .we have also designed a suitable layout architecture to implement the generator on - chip .our method is adaptive to the size of dilution gradient as well to the desired accuracy of concentration factor .thus , the proposed approach will provide a flexible and programmable environment for catering to any need of arbitrary linear gradient during sample preparation .generation of other dilution gradients such as parabolic or sinusoidal with a digital microfluidic biochip may be studied as a future problem .m. g. pollack , r. b. fair , and a. d. shenderov , `` electrowetting - based actuation of liquid droplets for microfluidic applications , '' _ applied physics letters _ , vol .77 , no . 11 , pp .1725 1726 , sept .d. brassard , l. malic , c. miville - godin , f. normandin , and t. veres , `` advanced ewod - based digital microfluidic system for multiplexed analysis of biomolecular interactions , '' in _ micro electro mechanical systems ( mems ) _ , jan .2011 , pp . 153156 .r. b. fair , a. khlystov , t. d. tailor , v. ivanov , r. d. evans , v. srinivasan , v. k. 
pamula , m. g. pollack , p. b. griffin , and j. zhou , `` chemical and biological applications of digital - microfluidic devices , '' _ ieee design & test of computers _ , vol .24 , no . 1 ,pp . 1024 , 2007 .s. k. cho , h. moon , and c .- j .kim , `` creating , transporting , cutting , and merging liquid droplets by electrowetting - based actuation for digital microfluidic circuits , '' _ microelectromechanical systems , journal of _ , vol .12 , no . 1 ,7080 , feb .y. fouillet , d. jary , c. chabrol , p. claustre , and c. peponnet , `` digital microfluidic design and optimization of classic and new fluidic functions for lab on a chip systems , '' _ microfluidics and nanofluidics _ , vol . 4 , pp . 159165 , 2008 .s. sugiura , k. hattori , and t. kanamori , `` microfluidic serial dilution cell - based assay for analyzing drug dose response over a wide concentration range , '' _ analytical chemistry _ , vol .82 , no .19 , pp . 82788282 , 2010 .g. v. doern , r. vautour , m. gaudet , and b. levy , `` clinical impact of rapid in vitro susceptibility testing and bacterial identification , '' _ journal of clinical microbiology _ , vol .32 , no . 7 , pp .17571762 , july 1994 .k. lee , c. kim , g. jung , t. kim , j. kang , and k. oh , `` microfluidic network - based combinatorial dilution device for high throughput screening and optimization , '' _ microfluidics and nanofluidics _ , vol . 8 , pp . 677685 , 2010 .s. wang , n. ji , w. wang , and z. li , `` effects of non - ideal fabrication on the dilution performance of serially functioned microfluidic concentration gradient generator , '' in _nano / micro engineered and molecular systems ( nems ) _ , 2010 , pp . 169172 .s. k. w. dertinger , d. t. chiu , n. l. jeon , and g. m. whitesides , `` generation of gradients having complex shapes using microfluidic networks , '' _ analytical chemistry _73 , no . 6 , pp . 12401246 , 2001 .jang , m. j. hancock , s. b. kim , s. selimovic , w. y. sim , h. bae , and a. khademhosseini , `` an integrated microfluidic device for two - dimensional combinatorial dilution , '' _ lab chip _ , vol .11 , pp . 32773286 , 2011 .hsieh , t .- y . ho , and k. chakrabarty , `` a reagent - saving mixing algorithm for preparing multiple - target biochemical samples using digital microfluidics , '' _ ieee trans .on cad of integrated circuits and systems _ , vol . 31 , no . 11 , pp . 16561669 , 2012 .huang , c .- h .liu , and t .- w .chiang , `` reactant minimization during sample preparation on digital microfluidic biochips using skewed mixing trees , '' in _ computer - aided design ( iccad ) , 2012 ieee / acm international conference _ , nov .2012 , pp . 377383 .h. ren , v. srinivasan , and r. fair , `` design and testing of an interpolating mixing architecture for electrowetting - based droplet - on - chip chemical dilution , '' in _ transducers , solid - state sensors , actuators and microsystems _ , vol . 1 , 2003 , pp .619622 .k. lee , c. kim , b. ahn , r. panchapakesan , a. r. full , l. nordee , j. y. kang , and k. w. oh , `` generalized serial dilution module for monotonic and arbitrary microfluidic gradient generators , '' _ lab chip _ , vol . 9 , pp . 709717 , 2009 .s. roy , b. b. bhattacharya , and k. chakrabarty , `` optimization of dilution and mixing of biochemical samples using digital microfluidic biochips , '' _ ieee trans .on cad of integrated circuits and systems _ , vol . 29 , no . 11 , pp .16961708 , 2010 .
digital microfluidic (dmf) biochips are now being extensively used to automate several biochemical laboratory protocols such as clinical analysis, point-of-care diagnostics, and polymerase chain reaction (pcr). in many biological assays, e.g., in bacterial susceptibility tests, samples and reagents are required in multiple concentration (or dilution) factors, satisfying certain gradient patterns such as linear, exponential, or parabolic. dilution gradients are usually prepared with continuous-flow microfluidic devices; however, these suffer from inflexibility, non-programmability, and a large requirement of costly stock solutions. dmf biochips, on the other hand, have been shown to produce a set of random dilution factors more efficiently. however, all existing algorithms fail to optimize the cost or performance when a certain gradient pattern is required. in this work, we present an algorithm to generate any arbitrary linear gradient, on-chip, with minimum wastage, while satisfying a required accuracy in the concentration factor. we present new theoretical results on the number of _mix-split_ operations and _waste_ computation, and prove an upper bound on the storage requirement. the corresponding layout design of the biochip is also proposed. simulation results on different linear gradients show a significant improvement in sample cost over three earlier algorithms used for the generation of multiple concentrations.
the discovery that systems at equilibrium exhibit universality near a phase transition has been a path - breaking achievement of statistical physics in the previous century .however , despite considerable effort , fluctuation behavior in biological and socio - economic systems that are far from equilibrium are not yet well understood .indeed , strong evidence for universality of non - equilibrium transitions is still lacking . the large diversity seen in non - equilibrium critical phenomenaposes a major challenge for those trying to uncover general principles underlying the collective dynamics of complex systems occurring in nature and society .such systems , apart from comprising a large number of interacting components , are often characterized by a large degree of heterogeneity in the properties of individual elements .for example , components of a complex system may exhibit qualitatively distinct dynamics . the local connection density among the elements in different parts may also greatly differ .it is known that such heterogeneity can result in deviation from universal behavior expected near phase transitions .a prototypical example of a complex system with a highly heterogeneous composition is the de - centralized international trade in foreign exchange ( forex ) which constitutes the largest financial market in the world in terms of volume .an advantage of studying its fluctuation behavior over that of other complex systems with many degrees of freedom is the availability of large quantities of high - resolution digital data that are relatively easily accessible for analysis .the different currencies that are traded in the market are each subject to multifarious influences , e.g. , related to geographical , economic , political or commercial factors , which can affect them in many different ways .such a highly heterogeneous system provides a stark contrast to the relatively simpler systems having homogeneous composition that have typically been investigated by physicists .in particular , we can ask whether the components of such a system can be expected to show universal features , i.e. , phenomena independent of microscopic details , which may potentially be explained using tools of statistical physics . for the specific case of the forex market , establishing any robust empirical regularity will be an important contribution towards understanding the underlying self - organizing dynamics in such systems .moreover it would be the first ever identification of an universal signature in macroeconomic processes .in contrast , the domain of microeconomics has seen accumulating evidence for universal phenomena , the most robust being for the nature of the heavy - tailed distributions of fluctuations in individual stock prices , as well as , equity market indices , often referred to as the `` inverse cubic law '' .the analogous distribution in forex , viz ., of fluctuations in the exchange rates of currencies , has been the subject of several earlier investigations .while some of these have indeed reported heavy tails for different currencies , there is little agreement concerning the values of the power - law exponents characterizing such tails , not even whether they lie outside the levy - stable regime .this suggests that the nature of the fluctuation distribution for a currency could be related to some intrinsic properties of the underlying economy . 
in this paperwe show that there is indeed a systematic deviation from a putative universal signature - which we refer to as `` inverse square law '' - for the fluctuation behavior of different currencies depending on two key macroeconomic indicators related to the economic performance and the diversity of exports of the corresponding countries .thus , several underdeveloped ( frontier ) economies exhibit currency fluctuations whose distribution appear to be of a levy - stable nature , while those of most developed economies fall outside this regime .the median value of the exponents quantifying the heavy - tailed nature of the cumulative fluctuation distributions for all the currencies occur close to 2 , i.e. , at the boundary of the levy - stable regime .our study demonstrates how robust empirical regularities in complex systems can be uncovered when they are masked by the intrinsic heterogeneity among the individual components .we have also characterized the distinct nature of the exchange rate dynamics of different currencies by considering their self - similar scaling behavior .our analysis reveals that while currencies of developed economies follow uncorrelated random walks , those of emerging and frontier economies exhibit sub - diffusive ( or mean - reverting ) dynamics . as our results suggest that the nature of the fluctuation distribution is related to the state of the economy , by employing a metric for measuring the distance between pairs of such distributions , we have been able to cluster different economies into similarity groups .this provides an alternative to the approach of grouping components based on dynamical cross - correlations of respective time - series which have limitations .a temporally resolved analysis of the nature of the distributions at different periods shows strong disruption of the otherwise regular pattern of systematic deviation during the severe crisis of 2008 - 09 , indicating its deep - rooted nature affecting the real economy . [ cols="<,<,<,<,<,<,<,<",options="header " , ] [ table1 ]the data - set we have analyzed in this study comprises the daily exchange rates with respect to the us dollar ( usd ) of currencies ( see table [ table1 ] ) for the period october 23 , 1995 to april 30 , 2012 , corresponding to days .the rate we use is the midpoint value , i.e. , the average of the bid and ask rates for 1 usd against a given currency . the data is obtained from a publicly accessible archive of historical rates maintained by the oanda corporation , an online currency conversion site . for each day, the site records an average value that is calculated over all rates collected over a 24 hour period from the global foreign exchange market .the rate used by us is the interbank rate for the currency which is the official rate quoted in the media and that apply to very large transactions ( typically between banks and financial institutions ) with margin close to zero .we have chosen usd as the base currency for the exchange rate as it is the preferred currency for most international transactions and remains the reserve currency of choice for most economies .the choice of currencies used in our study is mainly dictated by the exchange rate regime in which they operate .in particular , we have not considered currencies whose exchange rate with respect to usd is constant over time .most of the currencies in our database are floating , either freely under the influence of market forces or managed to an extent with no pre - determined path . 
among the remaining currencies ,a few are pegged to usd or some other important currency ( such as eur ) , but with some variation within a band ( which may either be fixed or moving in time ) .note that as the eur was introduced in january 1 , 1999 , i.e. , within the time interval considered by us , we have used the exchange rate for the ecu ( european currency unit ) for the period october 23 , 1995 to december 31 , 1998 . in order to explore whether the nature of the fluctuation distribution of a particular currency could be related to the characteristics of the underlying economy , the countries to which these currencies belongare grouped into three categories , viz ., developed , emerging and frontier markets , as per the morgan stanley capital international ( msci ) market classification framework .this is done on the basis of several criteria such as , the sustainability of economic development , number of companies meeting certain size and liquidity criteria , ease of capital flow , as well as , efficiency and stability of the institutional framework . to make the connection between deviation from universality and the heterogeneity of the constituents more explicit ,we have examined in detail certain macro - economic factors characterizing a national economy for the role they may play in determining the nature of the fluctuation dynamics of a currency . in particular , we find that a prominent role is played by ( a ) the gross domestic product ( gdp ) per capita , as well as , ( b ) the theil index of export products , which we define below .the _ gdp per capita _ of a country is obtained by dividing the annual economic output , i.e. , the aggregate value of all final goods and services produced in it during a year , by the total population .it is one of the primary indicators of the economic performance of a country , with higher gdp per capita indicating a higher standard of living for the people living in it .the annual gdp per capita of the countries whose currencies have been included in our study are obtained from publicly accessible data available in the website of the international monetary fund ( imf ) .we have averaged the data over the 18 year period ( 1995 - 2012 ) considered in our study to obtain the mean gdp per capita .the _ theil index _ measures the diversity of the export products of a country and is defined as where is the total value ( in usd ) of the -th export product of a country , is the average value of all export products and is total number of different products that are exported .a high value of indicates large heterogeneity in the values of the different exported products , indicating that a few products dominate the export trade .by contrast , low implies that a country has a highly diversified portfolio of export products and therefore , relatively protected from the vagaries of fluctuations in the demand for any single product . to compute the theil index we have used the annual export product data of different countries available from the observatory of economic complexity at mit .we have used the four digit level of the standard international trade classification for categorizing different products which corresponds to distinct export products in the data set .we have averaged the annual theil indices over the period 1995 - 2012 to obtain the mean theil index for each country. for currencies of developed economies , e.g. , sek ( a ) , shows relatively lower amplitude variations compared to that of currencies of frontier economies , e.g. 
, ttd ( b ) , in general ( note the different scales in the ordinate of the two panels ) .however , the distributions of for all currencies show a heavy - tailed nature , shown in ( c ) for currencies from a developed economy , sek ( circle ) , an emerging economy , inr ( square ) , and a frontier economy , ttd ( triangle ) .the nature of the positive ( right inset ) and negative tails ( left inset ) of the complementary cumulative distributions for these returns are also shown together with the best power - law fits ( broken lines ) obtained using maximum likelihood ( ml ) estimation . ]( b ) and ( c ) obtained by maximum likelihood estimation for the positive and negative tails , respectively , of the individual return distributions for the 75 currencies , show a peak around 3 with median values of 3.11 ( for ) and 3.28 ( for ) .points lying closer to the diagonal ( , indicated by a broken line ) in ( a ) imply a higher degree of symmetry in the distribution of for the corresponding currency , viz ., positive and negative fluctuations of similar magnitude are equally probable .there appears to be a systematic trend towards higher values of the exponent with a more developed state of the underlying economy .the heavy - tailed nature of the distributions characterized by the tail - exponents correspond closely to their peakedness measured using the kurtosis , as shown by the scatter plot between ( d ) and and ( e ) and for the currencies .the best log - linear fits , indicated by broken lines , correspond to ] ( removing the self contribution from the measure of volatility ) , obtaining the normalized return , .we observe that the standard deviations for the different currencies do not show any systematic variation with any of the factors that characterize the economies underlying the currencies which we have considered below , e.g. , gdp per capita or the theil index . as can be seen from fig .[ fig1 ] ( a - b ) , the returns quantifying the fluctuations in the exchange rate of currencies can appear extremely different even though they have been normalized by their volatilities .the temporal variation of for sek [ shown in fig .[ fig1 ] ( a ) ] , the currency of a developed economy , is mostly bounded between a narrow interval around 0 - the fluctuations never exceeding 6 standard deviations from the mean value . by contrast ,[ fig1 ] ( b ) shows that ttd , belonging to a frontier economy , can occasionally exhibit extremely large fluctuations , even exceeding 20 standard deviations - an event extremely unlikely to have been observed had the distribution been of a gaussian nature .these observations suggest that the distributions of the exchange rate fluctuations have long tails and that different currencies may have significantly different nature of heavy - tailed behavior . as shown in fig .[ fig1 ] ( c ) , where the distributions of for sek , ttd and an emerging economy currency , inr , is displayed , this is indeed the case .the complementary cumulative distribution function ( ccdf ) for the positive and negative returns [ see the insets of fig . [ fig1 ] ( c ) ] shows clearly the nature of the heavy tails , where the best fit to a power - law decay for the probability distribution having the functional form , obtained by maximum likelihood estimation , is shown .while both the positive and negative returns show heavy tails , we note that the exponents characterizing them need not be identical for a currency , such that the corresponding return distribution is asymmetric or skewed . 
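To make the above procedure concrete, the following is a minimal Python/numpy sketch of how normalized log-returns and maximum-likelihood tail exponents could be computed for a single exchange-rate series. The normalization convention (dividing each return by the standard deviation of the remaining returns, i.e. removing the self contribution to the volatility) and the tail fraction used by the Hill estimator are illustrative assumptions, not necessarily the exact choices made in the study; the synthetic series stands in for one currency's daily rates.

```python
import numpy as np

def normalized_returns(rate):
    """Log-returns of a daily exchange-rate series, each normalized by the
    standard deviation of the *other* returns (self contribution removed)."""
    r = np.diff(np.log(rate))
    out = np.empty_like(r)
    for t in range(len(r)):
        others = np.delete(r, t)
        out[t] = r[t] / others.std(ddof=1)
    return out

def hill_tail_exponent(x, tail_fraction=0.05):
    """Maximum-likelihood (Hill) estimate of the exponent alpha of the
    complementary cumulative distribution P(X > x) ~ x**(-alpha),
    using the largest `tail_fraction` of the (positive) sample."""
    x = np.sort(np.asarray(x))[::-1]
    k = max(int(tail_fraction * len(x)), 10)
    tail, xmin = x[:k], x[k - 1]
    return k / np.sum(np.log(tail / xmin))

# example with a synthetic heavy-tailed series standing in for one currency
rng = np.random.default_rng(0)
rate = np.exp(np.cumsum(rng.standard_t(df=3, size=6000) * 0.005))
r = normalized_returns(rate)
alpha_pos = hill_tail_exponent(r[r > 0])    # CCDF exponent of the positive tail
alpha_neg = hill_tail_exponent(-r[r < 0])   # CCDF exponent of the negative tail
# the exponent of the probability density function is alpha + 1
```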
the scatter plot in fig .[ fig2 ] ( a ) shows how the positive and negative tail exponents , and respectively , are related to each other for the different currencies . currencies that occur closer to the diagonal line have similar nature of upward and downward exchange rate movements .however , currencies which occur much above the diagonal ( i.e. , ) will tend to have a higher probability of extreme positive returns compared to negative ones , while those below the diagonal are more likely to exhibit very large negative returns .we note in passing that the skewness depends , to some extent , on the state of the economy of the country to which a currency belongs , with return distributions of developed economies being the least asymmetric in general , having mean skewness , while those of emerging and frontier economies are relatively much higher , being and , respectively . the distribution of the exponents characterizing the power - law nature of the exchange - rate returns shown in figs .[ fig2 ] ( b - c ) peaks around 3 for both the positive and negative tails . as a probability distribution function with a power law characterized by exponent value implies that the corresponding ccdf also has a power - law form but with exponent value , this result suggests an `` inverse square law '' governing the nature of fluctuations in the currency market in contrast to the `` inverse cubic law '' that has been proposed as governing the price and index fluctuations in several financial markets .however , as is the case here , such a `` law '' is only manifested on the average , as the return distributions for individual assets can have quite distinct exponents . here , we observe that the different currencies can have exponents as low as 2 and as high as 6. moreover , there appears to be a strong correlation between the nature of the tail and the state of the underlying economy to which the currency belongs .thus , developed economy currencies tend to have the largest exponents , while most of the lowest values of exponents belong to currencies from the frontier economies .this provides evidence of an intriguing relation between currency fluctuations and the state of the underlying economy , that could possibly be quantified by one or more macroeconomic indicators .this theme is explored in detail later in this paper .the character of the heavy tails of the returns is closely related to the peaked nature of the distribution that can be quantified by its kurtosis which is defined as , where is the expectation while and are the mean and standard deviation , respectively , of .[ fig2 ] ( d - e ) shows the relation between the kurtosis and the exponents for the tails of the returns distributions of the different currencies .the fitted curve shown qualitatively follows the theoretical relation between the two which can be derived by assuming that the distribution is pareto , i.e. 
, follows a power law ( although for such a situation , the kurtosis is finite only for exponent values ) .we observe that the relation between the exponents and kurtosis suggested by the scatter plots can be approximately fit by the function $ ] with , for the positive tail and , for the negative tail [ fig .[ fig2 ] ( d ) and ( e),respectively ] .the strong correlation between the peakedness of the distribution and the character of the heavy tails can be quantified by the pearson correlation coefficients between log( ) and log(log( ) ) , viz ., ( ) for the positive returns and ( ) for the negative returns .thus , instead of using two different exponent values ( corresponding to the positive and negative tails ) for each return distribution , we shall henceforth focus on the single kurtosis value that characterizes the distribution .given the variation in the nature of fluctuation distribution of different currencies from a single universal form , we ask whether the deviations are systematic in nature .note that , the currencies belong to countries having very diverse economies , trade in distinct products and services with other countries and may have contrasting economic performances . an intuitive approach would be to relate the differences in the return distributions with metrics which capture important aspects of the economies as a whole .[ fig3 ] shows that there is indeed a significant correlation between the kurtosis of the return distributions for the currencies and the two macroeconomic indicators of the underlying economies , viz ., the gdp per capita , , and the theil index , ( the meanings of the two metrics are explained in the data description ) .[ fig3 ] ( a ) shows that the scatter of kurtosis against can be approximately fit by a power law of the form : . the pearson correlation coefficient between the logarithms of the two quantities is ( . thus , in general , currencies of countries having higher gdp per capita tend to be more stable , in the sense of having low probability of extremely large fluctuations . however , there are exceptions ( e.g. , hkd and isk which are indicated in the figure ) where currencies exhibit high kurtosis even when they belong to countries with high gdp per capita . in these cases ,the peakedness of the distribution may reflect underlying economic crises , e.g. , the icelandic financial crisis in the case of isk .furthermore , we observe that currencies belonging to high gdp per capita economies that are dependent on international trade of a few key resources also exhibit high kurtosis ( e.g. , kwd and bnd [ not shown ] ) .this suggests a dependence of the nature of the fluctuation distribution on the diversity of their exports , which is indeed shown in fig .[ fig3 ] ( b ) .the dependence of the kurtosis on ( which is a measure of the variegated nature of trade ) of the corresponding economy is approximately described by a power - law relation : .the pearson correlation coefficient between the logarithms of the two quantities is ( .this implies that , in general , currencies of countries having low , i.e. 
, having well - diversified export profile , tend to be more stable .note that , the fluctuations of the currencies depend on both of these above macroeconomic factors , and the differences in their nature can not be explained exclusively by any one of them .it is therefore meaningful to perform a multi - linear regression of as a function of both gdp per capita and theil index using an equation of the form : log( ) log() log( ) , where the constants and are the best - fit regression coefficients .the coefficient of determination , which measures how well the data fits the statistical model , is found to be ( ) .this indicates that together the macroeconomic factors of gdp per capita ( related to the overall economic performance ) and theil index ( related to the international trade of the country ) explain over of the variation between the nature of the return distributions of the different currencies .we have also considered the possible dependence of the nature of the fluctuation distribution on other economic factors , such as the foreign direct investment ( fdi ) net inflow , but in most cases these do not appear to be independent of any one of the two factors considered above . of exchange rate fluctuation distributions of different currencies with ( a ) annual gdp per capita , ( in usd ) and ( b ) annual theil index of the export products , , for the corresponding countries , averaged over the period 1995 - 2012 .the pearson correlation coefficient between log( ) and log( ) is ( ) , the best - fit functional relation between the two being .currencies of developed economies that are outliers from this general trend , viz . ,isk and hkd that have high kurtosis despite having high gdp per capita , are explicitly indicated in ( a ) .a similar analysis shows that the pearson correlation coefficient between log( ) and log( ) is ( ) , with the best - fit functional relation being .different symbols are used to indicate currencies from developed ( circles ) , emerging ( squares ) and frontier ( triangles ) economies , while symbol size is proportional to log( ) of the corresponding countries . ]obtained using detrended fluctuation analysis of the exchange rate time series , and ( b ) the variance ratio ( ) of the exchange rate fluctuations calculated using lag ( ) , with the kurtosis of the normalized logarithmic return distributions of different currencies .different symbols are used to indicate currencies from developed ( circles ) , emerging ( squares ) and frontier ( triangles ) economies , while symbol size is proportional to log( ) of the corresponding countries .the broken lines in ( a ) and ( b ) indicate the values of and corresponding to an uncorrelated random walk .currencies of developed economies that are outliers , viz . , isk and hkd that have much higher kurtosis than others in the group , are explicitly indicated . ]to investigate the reason for the strong relation between the kurtosis of the return distribution for a currency and the corresponding underlying macroeconomic factors , we need to delve deeper into the nature of the dynamics of the exchange rate fluctuations .for this we first look into the self - similar scaling behavior of the time - series of exchange rate of a currency using the detrended fluctuation analysis ( dfa ) technique suitable for analyzing non - stationary processes with long - range memory . 
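Before the scaling analysis is detailed next, the regression described above can be sketched in a few lines. The Python/numpy snippet below fits log(kurtosis) against log(GDP per capita) and log(Theil index) by ordinary least squares and reports the coefficient of determination; the arrays `kappa`, `gdp` and `theil` are hypothetical inputs holding one value per currency, and the helper `theil_index` assumes the standard definition of the Theil index consistent with the description given earlier.

```python
import numpy as np

def theil_index(export_values):
    """Theil index of a vector of export values x_i (standard definition assumed):
    T = (1/N) * sum_i (x_i / mean) * log(x_i / mean).
    High T: a few products dominate; low T: a well-diversified export portfolio."""
    x = np.asarray(export_values, dtype=float)
    x = x[x > 0]
    ratio = x / x.mean()
    return np.mean(ratio * np.log(ratio))

def fit_kurtosis_model(kappa, gdp, theil):
    """Ordinary least-squares fit of
    log(kappa) = c0 + c1*log(gdp) + c2*log(theil),
    returning the coefficients and the coefficient of determination R^2."""
    y = np.log(kappa)
    X = np.column_stack([np.ones_like(y), np.log(gdp), np.log(theil)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    r2 = 1.0 - resid.var() / y.var()
    return coef, r2
```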
here, a time - series is de - trended over different temporal windows of sizes using least - square fitting with a linear function .the residual fluctuations of the resulting sequence , measured in terms of the standard deviation , is seen to scale as , where is referred to as the dfa exponent .the numerical value of this exponent lying between 0 and 1 provides information about the nature of the fractional brownian motion undertaken by the system . for ,the process is said to be equivalent to a random walk subject to white noise , while ( ) implies that the time - series is correlated ( anti - correlated ) . as seen from fig .[ fig4 ] ( a ) , the dfa exponents of currencies for most developed economies - which also have the lowest kurtosis - are close to 0.5 , indicating that these currencies are following uncorrelated random walk .in contrast , currencies of the emerging and frontier economies , that have higher values of kurtosis , typically have indicating sub - diffusive dynamics .to understand the reason for this sub - diffusive behavior we have analyzed the exchange rate time - series using the variance ratio ( vr ) test .this technique , based on the ratio of variance estimates for the returns calculated using different temporal lags , is often used to find how close a given time - series is to a random walk . for a sequence of log returns , the variance ratio for a lag is defined as : )},\ ] ] where and are the mean and variance of the sequence .an uncorrelated random walk is characterized by a vr value close to 1 .if , it indicates mean aversion in the time - series , i.e. , the variable has a tendency to follow a trend where successive changes are in the same direction .in contrast , suggests a mean - reverting series where changes in a given direction are likely to be followed by changes in the opposite direction preventing the system from moving very far from its mean value .[ fig4 ] ( b ) shows the vr values for different currencies as a function of their kurtosis .consistent with the dfa results reported above , it is seen that for currencies of developed economies the vr is close to 1 indicating uncorrelated brownian diffusion as the nature of their exchange rate dynamics .however , for most frontier and a few emerging economy currencies , the vr value is substantially smaller than 1 , implying that their trajectories have a mean - reverting nature . 
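As a rough guide to how the two diagnostics described above could be computed, here is a compact Python/numpy sketch of detrended fluctuation analysis (linear detrending in non-overlapping windows, applied to the return series, so that an exponent near 0.5 corresponds to an uncorrelated random walk of the rate itself) and of the variance ratio for a lag q. The window sizes, the lag q = 20 and the omission of the finite-sample bias corrections of the full Lo-MacKinlay test are simplifications assumed for illustration.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Detrended fluctuation analysis of a series x (e.g. daily log-returns).
    Returns the scaling exponent alpha of the rms fluctuation F(s) ~ s**alpha."""
    y = np.cumsum(x - np.mean(x))                       # profile of the series
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(n // 4), 20).astype(int))
    F = []
    for s in scales:
        rms = []
        for i in range(n // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        F.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

def variance_ratio(r, q=20):
    """Variance ratio VR(q): variance of q-period sums of the returns r
    divided by q times the one-period variance (bias corrections omitted)."""
    r = np.asarray(r) - np.mean(r)
    rq = np.convolve(r, np.ones(q), mode='valid')       # overlapping q-period sums
    return rq.var(ddof=1) / (q * r.var(ddof=1))

# example: for an i.i.d. return series the DFA exponent is close to 0.5
# and the variance ratio is close to 1 (uncorrelated random walk of the rate)
r = np.random.default_rng(1).standard_normal(4000)
print(dfa_exponent(r), variance_ratio(r, q=20))
```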
as in fig .[ fig3 ] , we note that hkd and isk appear as outliers in fig .[ fig4 ] in that , although belonging to the group of countries having high gdp per capita , they share the characteristics shown by most emerging and frontier economies .we can now understand the sub - diffusive nature of the dynamics of these currencies as arising from the anti - correlated nature of their successive fluctuations which prevents excursions far from the average value .thus , when we consider the time - series of all currencies after normalizing their variance , the fluctuations of the emerging and frontier economy currencies mostly remain in the neighborhood of the average value with rare , occasional deviations that are very large compared to developed economy currencies .this accounts for the much heavier tails of the return distributions of the former and the corresponding high value of kurtosis .it is intriguing to consider whether the difference in the nature of the movement of exchange rates of the currencies could be possibly related to the role played by speculation in the trading of these currencies .we also note that these results are in broad agreement with the fact that efficient markets follow uncorrelated random walks and the notion that the markets of developed economies are far more efficient than those of emerging and frontier ones. obtained from the jensen - shannon divergence between the corresponding normalized logarithmic return distributions of a pair of currencies has been used as the clustering metric .the currencies have been clustered using complete linkage algorithm and the height of a branch measures the linkage function , i.e. , the distance between two clusters . using a threshold of , the largest number of distinct clusters ( viz ., 20 clusters represented by the different colored branches of the dendrogram , black branches indicating isolated nodes ) can be identified , the largest of which comprises only currencies of developed economies with the exception of huf ( belonging to an emerging economy ) .currencies are distinguished according to the average annual gdp per capita of the corresponding economy ( represented by font size , which scales logarithmically with ) and the geographical region to which they belong ( represented by font color , viz . , black : americas , red : europe , blue : middle east , magenta : asia - pacific , green : africa and brown : asia ) . ] we have investigated the inter - relation between the different currencies by considering how similar they are in terms of the nature of their fluctuations . 
for thiswe have measured the difference between the normalized logarithmic return distributions of each pair of currencies using a probability distance metric , viz ., the similarity distance between a pair of return distributions and .it is defined as the square root of the jensen - shannon ( js ) divergence , which in turn can be defined in terms of the kullback - leibler ( kl ) divergence for a pair of probability distributions and of a discrete random variable : the limitations of kl divergence , viz ., that it is asymmetric and also undefined when either or is zero for any value of , is overcome by the js divergence defined as : where , .as returns are continuous variables , in order to calculate the divergences between their distributions , we have discretized the values using a binning procedure ( involving intervals ) .note that , the related generalized js measure has been used earlier to measure the similarity of tick frequency spectrograms for different currency exchange rates .the matrix of similarity distances between all pair of currencies is used for clustering them in a hierarchical manner .given a set of nodes to be clustered and a matrix specifying the distances between them , the method of hierarchical clustering involves ( i ) initially considering each node as a cluster , ( ii ) merging the pair of clusters which have the shortest distance between them , ( iii ) re - computing the distance between all clusters , and repeating the steps ( ii ) and ( iii ) until all nodes are merged into a single cluster .clustering methods can differ in the way the inter - cluster distance is calculated in step ( iii ) .if this distance is taken as the maximum of the pairwise distances between members of one cluster to members of the other cluster , it is known as _ complete - linkage _ clustering .on the other hand , in the _ single - linkage _ or nearest neighbor clustering , the minimum of the distance between any member of one cluster to any member of the other cluster is chosen .average - linkage clustering , as the name implies , considers the mean of the pairwise distances between members of the two clusters .note that , the hierarchical clustering obtained using the complete - linkage method will be same as one obtained using a threshold distance to define membership of a cluster , while that constructed using the single - linkage method is identical to the minimal spanning tree .we have shown the hierarchical clustering ( using complete - linkage clustering ) of the different currencies considered in this study in fig .[ fig5 ] using a polar dendrogram representation .we note that the technique divides the currencies at the coarsest scale into two groups , the smaller of which is exclusively composed of currencies from frontier economies that are characterized by large fluctuations .although some of these currencies ( e.g. , kwd and ttd ) belong to countries having high gdp per capita , they typically also have a high theil index indicating that their economy is based on export of a few key products ( e.g. , crude oil ) .their currencies are therefore potentially highly susceptible to fluctuations in the worldwide demand .focusing now on the larger group , we observe that it is further divided broadly into two clusters , the larger of which is dominated by relatively stable currencies from developed and emerging economies , with only a small fraction of frontier economies being represented ( viz . 
,bnd , fjd , gtq , hrk , lkr and lvl ) .note that these latter have relatively higher gdp per capita than the other frontier economies . on the other hand ,the smaller cluster is composed of currencies from emerging as well as frontier economies .the largest number of significant clusters into which the currencies can be grouped is obtained for a threshold value of .most of the developed economies are in the same largest significant cluster consisting of 13 currencies , indicating that these economies have a relatively similar high degree of stability for their currency exchange rates . as expected from fig .[ fig3 ] almost all of them have high gdp per capita and low theil index .the members of this highly stable group that are not in the developed category are either members of the european union ( czk and huf ) or , as in the case of morocco ( mad ) , their economy is tightly connected with that of eu through trade , tourism and remittances from migrant workers .other statistically significant clusters that are observed also tend to group together economies that have similar gdp per capita and/or theil index .similar clustering is observed among currencies using the alternative single- and average - linkage clustering methods . of the distributions with annual gdp per capita , ( in usd ) and that of ( d - f ) the variance ratio ( vr ) of the different normalized fluctuations time series with kurtosis , are shown for three different periods , viz ., period i : oct 23 , 1995 - apr 25 , 2001 ( a & d ) , period ii : apr 26 , 2001 - oct 28 , 2006 ( b & e ) and period iii : oct 29 , 2006 - apr 30 , 2012 ( c & f ) , which divide the duration under study in three equal , non - overlapping segments . the gdp per capita of the different countries for each period are obtained by averaging the annual values over the corresponding periods .the pearson correlation coefficients between log( ) and log( ) for the three periods are ( -value ) , ( -value ) and ( -value ) .for the first two periods , the best - fit functional relation between the two is , while for the third period , . comparingthe variance ratio values for the three different periods show a higher degree of mean aversion in the third period .period iii , during which the major economic crisis of 2008 - 09 occurred , is distinguished by large deviation from the trends seen in the other two periods .different symbols are used to indicate currencies from developed ( circles ) , emerging ( squares ) and frontier ( triangles ) economies , while symbol size is proportional to log( ) of the corresponding countries . ] in the analysis presented above we have considered the entire duration which our data - set spans . however , as the world economy underwent significant changes during this period , most notably , the global financial crisis of 2008 , it is of interest to see how the properties we investigate have evolved with time . for this purposewe divide the data - set into three equal non - overlapping periods comprising 2011 days , corresponding to period i : oct 23 , 1995 - apr 25 , 2001 , period ii : apr 26 , 2001 - oct 28 , 2006 and period iii : oct 29 , 2006 - apr 30 , 2012 .note that the last period corresponds to the crisis of the global economy spanning 2007 - 2009 . for each of these, we carry out the same procedures as outlined above for the entire data - set . 
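The clustering procedure outlined above lends itself to a short implementation. The sketch below (Python, assuming numpy and scipy are available) bins the normalized returns of each currency on a common grid, computes the pairwise similarity distance as the square root of the Jensen-Shannon divergence, and applies complete-linkage hierarchical clustering; the bin grid, the small regularization added to empty bins and the cut-off threshold are illustrative values rather than the ones used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def js_distance(p, q):
    """Similarity distance between two discretized return distributions:
    the square root of the Jensen-Shannon divergence (natural logarithm)."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

def cluster_currencies(returns, bins=np.linspace(-10, 10, 201), threshold=0.4):
    """Histogram each currency's normalized returns on a common grid,
    build the pairwise JS-distance matrix and cluster with complete linkage."""
    hists = [np.histogram(r, bins=bins)[0].astype(float) + 1e-12 for r in returns]
    n = len(hists)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = js_distance(hists[i], hists[j])
    Z = linkage(squareform(D), method='complete')       # dendrogram linkage matrix
    return Z, fcluster(Z, t=threshold, criterion='distance')
```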
as seen from fig .[ fig6 ] , the behavior in the first two intervals appear to be quite similar in terms of the various properties that have been measured , but large deviations are seen in the third interval .this is apparent both for the relation between kurtosis and mean gdp per capita [ fig .[ fig6 ] ( a ) ] , as well as that between kurtosis and mean theil index ( figure not shown ) .the dependence of the nature of the fluctuation distribution on the properties of the underlying economy seem to have weakened in period iii .for example , while there is significant strong negative correlation between log( ) and log( ) for the first two intervals , viz ., ( ) and ( ) , respectively , it decreases to only ( ) for the third interval .furthermore , the first two intervals show a dependence of the kurtosis , same as that seen for the entire period that we have reported earlier. however , this is not true for the last interval where the best fit for the dependence is closer to .similarly , we have found significant high correlation between log( ) and log( ) , corresponding to pearson coefficients ( ) and ( ) , respectively , for the first two intervals .in contrast , for the third interval we observe a relatively small correlation ( ) .in addition , the relation between the variance ratio and the kurtosis of the returns [ fig . [ fig6 ] ( b ) ] , as well as that between the dfa exponent and the kurtosis ( figure not shown ) , are seen to be similar in the first two intervals but very different in the third - in part because the vr for the developed and some emerging economies have adopted values ( i.e. , exhibiting mean aversion ) in this last interval , while earlier they were close to 1 ( i.e. , similar to a random walk ) . while periods i and ii had their share of economic booms and busts , it is instructive to note that the 2008 crisis was severe enough to disrupt systemic features that were otherwise maintained over time .the hierarchical clustering of the currencies also show striking changes over time when they are constructed separately for each of the three intervals mentioned above . as for the data - set covering the entire period, the method classifies the currencies broadly into two categories with the larger one comprising the relatively stable currencies of developed and emerging economies . as can be seen from fig .[ fig7 ] many currencies have changed their relative position with respect to other currencies between these intervals , with only the developed economies largely remaining members of the same cluster .however , a prominent exception is the icelandic currency ( isk ) that moved closer to the neighborhood of other developed economies between period i and ii , but moved far away from this group in period iii this is possibly related to the icelandic crisis of 2008 - 2010 that saw a complete collapse of its financial system . the global crisis of 2008 - 09 is also reflected in the fragmentation of the cluster of currencies belonging to developed economies in period iii .the work we report here underscores the importance of studying economic systems , especially financial markets , for gaining an understanding of the collective dynamics of heterogeneous complex systems . 
at the largest scale ,such a system encompasses the entire world where the relevant entities are the different national economies interacting with each other through international trade and the foreign exchange market .the far - from - equilibrium behavior of this highly heterogeneous complex system has been investigated here by focusing on the fluctuations of exchange rates of the respective currencies .understanding the overall features of this dynamics is crucially important in view of the human and social cost associated with large - scale disruptions in the system , as was seen during the recent 2008 world - wide economic crisis .it is with this aim in view that we have examined the occurrence of robust empirical features in the nature of the exchange rate fluctuations .our results indeed show evidence for a universal signature in the dynamics of exchange rates , possibly the first such seen in macroeconomic phenomena .this is in contrast to microeconomic systems like individual financial markets where robust stylized facts such as the ` inverse cubic law ' has been established for some time .the ` inverse square law ' that we report here also has a fundamental distinction in that distributions characterized by ccdf exponents belong to the levy - stable regime .by contrast , the logarithmic return distributions of equities and indices of financial markets that have exponent values around are expected to converge to a gaussian form at longer time scales .it suggests that extreme events corresponding to sudden large changes in exchange rates , in particular for currencies belonging to emerging and frontier economies , should be expected more often in the the forex market compared to that in other financial markets , e.g. , those dealing with equities .the ` inverse square law ' has recently been also reported in at least one other market , viz ., that of bitcoin in the initial period following its inception .we note that agent - based modeling of markets suggest that such a distribution can arise if market players are relatively homogeneous in their risk propensity .possibly the most important observation to be made from the results of our study is that heterogeneity in the intrinsic properties of the components of a complex system can mask universal features in their behavior .however , one can infer the existence of such an empirical regularity by relating these properties with a systematic divergence from an invariant form .thus , for the forex market , the deviation of the different currencies from an universal form in the nature of their fluctuation dynamics is explained by the corresponding countries possessing different macroeconomic features ( viz . , the standard of living indicated by the gdp per capita and the diversity of goods exported as measured by the theil index ) .note that , in financial markets also , stocks exhibit a diversity of exponent values characterizing the heavy tails of their return distribution , even though the majority of them may be clustered around the characteristic value of 3 .it is intriguing to speculate whether in this case too the extent of deviation exhibited by a stock from the inverse cubic law can be related to intrinsic properties , e.g. , the turnover or net profit , of the corresponding company . 
quantifyingthe nature of the return distributions of different currencies in terms of their higher - order statistics leads naturally to a metric for the degree of similarity between their fluctuation behavior .this in turn provides a procedure for clustering the corresponding economies - the resulting groups comprising countries with similar economic performance .while , in principle , it is possible for the exchange rate regime for a currency to also have influenced its fluctuation behavior , we notice that the clustering of currencies do not appear to support such a dependence . in this paperwe have used the jensen - shannon divergence measure for the difference between two distributions . in principle, one can use other definitions for the distance between probability distributions , such as the total variation distance and the bhattacharyya distance . apart from revealing broad features of the dynamics of the forex market, our analysis reveals strikingly anomalous behavior for certain currencies that may be connected to major economic disruptions affecting the corresponding countries .for example , in spite of having gdp per capita and theil index similar to other members of the group of developed economies to which they belong , both hkd and isk are outliers in terms of their kurtosis , dfa scaling exponent and variance ratio ( see figs .[ fig3]-[fig4 ] ) .furthermore , we observe that these currencies have changed their position relative to other currencies in the dendrograms representing hierarchical clustering of the currencies at different eras ( fig .[ fig7 ] ) .isk lies close to the cluster of developed economy currencies in the first two periods considered , but neighbors emerging and frontier economy currencies in the last period .this helps us to connect the atypical characteristics shown by the currency with the effects of the major financial crisis that affected iceland in this era .triggered by the default of all three major privately - owned commercial banks in iceland in 2008 , the crisis resulted in the first systemic collapse in any advanced economy .a sharp drop in the value of isk followed , with exchange transactions halted for weeks and the value of stocks in the financial market collapsing .the crisis led to a severe economic depression lasting from 2008 - 2010 .by contrast , hkd appears close to other developed economy currencies in periods i and iii , but in the neighborhood of emerging and frontier currencies in period ii .this again helps us to link the unusual behavior of hkd with the crisis triggered by the sars epidemic of 2003 affecting mainland china , taiwan and large parts of southeast asia , that caused extensive economic damage to hong kong with unemployment hitting a record high . for the hong kong currency and banking system that had survived the asian financial crisis of 1997 - 98 , the epidemic was an unexpected shock , with a net capital outflow observed during the persistent phase of the disease . 
in addition , the dominance of the service sector in the hong kong economy meant that the reduction in contact following the epidemic outbreak had a large negative impact on the gdp .thus , the deviation in the behavior of specific currencies from that expected because of the macro - economic characteristic can be traced to particular disruptive events that specifically affected them .to conclude , the results of our study help in revealing a hidden universality in a highly heterogeneous complex system , viz ., the forex market .the robust feature that we identify here is a power law characterizing the heavy - tailed nature of the fluctuation distributions of exchange rates for different currencies .the systematic deviation of individual currencies from the universal form ( the `` inverse square law '' ) , which is quantified in terms of their kurtosis measuring the peakedness of the return distributions , can be linked to metrics of the economic performance and degree of diversification of export products of the respective countries . in particular , the currencies of several frontier economies are seen to exhibit fluctuations whose distributions appear to belong to the levy - stable regime , while those of most developed economies seem to be outside it . by doing detrended fluctuation analysis , the distinct behavior of currencies corresponding to developed , emerging and frontier markets can be linked to the different scaling behaviors of the random walks undertaken by these currencies . by considering the degree of similarity of different currencies in the nature of their fluctuations , we have defined a distance metric between them .this allows constructing a hierarchical network relating the currencies in our study which shows clustering of currencies belonging to similar economies .more importantly , the clustering seen in relatively normal periods of the forex market are seen to be disrupted during the 2008 economic crisis . considering the temporal dimension in our analysis allows us to relate particularly strong economic shocks to changes in the relative positions of currencies in their hierarchical clustering .our work shows how robust empirical regularities among the components of a complex system can be uncovered even when the system is characterized by a large number of heterogeneous interacting elements exhibiting distinct local dynamics .it would be of interest to see if a similar approach can be successful in identifying universal features in other biological and socio - economic phenomena .we thank anindya s. chakrabarti , tanmay mitra and v. sasidevan for helpful suggestions .we gratefully acknowledge the assistance of uday kovur in the preliminary stages of this work . this work was supported in part by imsc econophysics ( xii plan )project funded by the department of atomic energy , government of india .90 k. g. wilson , rev .phys . * 55 * , 583 ( 1983 ) .l. henrickson and b. mckelvey , proc .usa * 99 * , 7288 ( 2002 ) .h. hinrichsen , adv . phys .* 49 * , 815 ( 2000 ) .r. cohen , d. ben - avraham and s. havlin , phys .lett . * 66 * , 036113 ( 2002 ) .m. loretan and p. c. b. phillips , j. empirical finance * 1 * , 211 ( 1994 ) . j. w. mcfarland , r. richardson pettit and s. k. sung , j. finance * 37 * , 693 ( 1982 ) .k. g. koedijk , m. a. schafgans and c. g. de vries , j. int .* 29 * , 93 ( 1990 ) .d. m. guillaume , m. m. dacorogna , r. r. dave , u. a. muller , r. b. olsen and o. v. pictet , finance stochast .* 1 * , 95 ( 1997 ) .l. laloux , p. cizeau , j. p. 
bouchaud and m. potters , phys .* 83 * , 1467 ( 1999 ) .v. plerou , p. gopikrishnan , b. rosenow , l. a. nunes amaral and h. e. stanley , phys .lett . * 83 * , 1471 ( 1999 ) .m. mcdonald , o. suleman , s. williams , s. howison and n. f. johnson , phys .e * 72 * , 046106 ( 2005 ) .d. n. reshef , y. a. reshef , h. k. finucane , s. r. grossman , g. mcvean , p. j. turnbaugh , e. s. lander , m. mitzenmacher and p. c. sabeti , science * 334 * , 1518 ( 2011 ) . j. b. kinney and g. s. atwal , proc .usa * 111 * , 3354 ( 2014 ) .a. w. lo and a. c. mackinlay , j. econometrics * 40 * , 203 ( 1989 ) .b. brown , _ what drives global capital flows ? _ ( palgrave macmillan , new york , 2006 ) .d. m. endres and j. e. schindelin , ieee trans .theory * 49 * , 1858 ( 2003 ) .j. lin , ieee trans .inf . theory * 37 * , 145 ( 1991 ) .s. c. johnson , psychometrika * 2 * , 241 ( 1967 ) .j. c. gower and g. j. s. ross , j. roy .c * 18 * , 54 ( 1969 ) .h. malka , _morocco s african future _ , center for strategic and international studies , washington dc ( 2013 ) .d. carey , oecd economics department working paper 725 , 2009 .t. o. sigurjonsson and m. w. mixa , thunderbird international business review * 53 * , 209 ( 2011 ) .s. easwaran , m. dixit and s. sinha , in f. abergel _et al _ ( eds . ) _ econophysics and data driven modelling of market dynamics _( springer , cham , 2015 ) , p. 121 .s. v. vikram and s. sinha , phys .e * 83 * , 016101 ( 2011 ) .l. feng , b. li , b. podobnik , t. preis and h. e. stanley , proc .usa * 109 * , 8388 ( 2012 ) .a. bhattacharyya , bull .35 * , 99 ( 1943 ) .j. danielsson and g. zoega , institute of economic studies discussion paper w09:03 , 2009 .
identifying universal behavior is a challenging task for far - from - equilibrium complex systems . here we investigate the collective dynamics of the international currency exchange market and show the existence of a semi - invariant signature masked by the high degree of heterogeneity in this complex system . the cumulative fluctuation distribution in the exchange rates of different currencies possess heavy tails characterized by exponents varying around a median value of 2 . the systematic deviation of individual currencies from this putative universal form ( the `` inverse square law '' ) can be partly ascribed to the differences in their economic prosperity and diversity of export products . the distinct nature of the fluctuation dynamics for currencies of developed , emerging and frontier economies are characterized in detail by detrended fluctuation analysis and variance - ratio tests , which shows that less developed economies are associated with sub - diffusive random walk processes . we hierarchically cluster the currencies into similarity groups based on differences between their fluctuation distributions as measured by jensen - shannon divergence . these clusters are consistent with the nature of the underlying economies - but also show striking divergences during economic crises . indeed a temporally resolved analysis of the fluctuations indicates significant disruption during the crisis of 2008 - 09 underlining its severity .
the most fundamental element in communications is a task in which for a receiver ( bob ) discriminates among different physical entities sent by a sender ( alice ) . in general , however , different quantum states can not be discriminated with certainty because of non - orthogonality .interestingly , this fundamental limit makes quantum communication more intriguing and more useful in some cases .let us consider the task .if the number of entities is a finite number , the task is normally called quantum state discrimination ( qsd ) .if the number of entities is infinite and continuous , it is called quantum state estimation ( qse ) .qsd maximizes the probability of correct guessing for the entity . in qse , however , the probability of correct guessing is zero because the number of entities is infinite , and thus is a meaningless quantity .hence a function ( a figure of merit ) is introduced which assigns a score for the pair of a state sent by alice and a state guessed by bob .normally the figure of merit is monotonous , that is , as the two states are closer , the score is higher .the average score is optimized .now let us consider the qsd in the context of a figure of merit . in the normal qsd ,only those events in which bob makes a correct guess are counted .qsd does not consider how close the guessed one is to the correct one .here we can generalize the qsd , by introducing a function , the figure of merit , which assigns a score for the pair of a correct state and a guessed one .we maximize the average score as in qse . as we see , normal qsd is a special case of `` qsd with general figures of merit '' .( `` qsd with general figures of merit '' is equivalent to `` qse with discrete set of states '' . )recently it has been shown that the no - signaling ( no superluminal communication ) principle can greatly simplify analysis in quantum information including analysis of qsd and qse . in this paper , we consider the problem of the `` qsd with general figures of merit '' , for an even number , of symmetric states of quantum bits ( qubits ) , with use of the no - signaling principle . here both methods used for qsd and qse are combined to get solutions .it turns out that optimal measurements are the same for all monotonous ( symmetric ) figures of merit in qsd .it is true that the no - signaling principle is not essential in our argument .that is , `` no - signaling principle '' can be replaced by `` impossibility of discriminating two different decompositions of states corresponding to the same density operator '' in refs . and throughout this paper .however , we adopt the no - signaling principle here because it makes the result more concrete .this paper is organized as follows . in sec .ii , we introduce a set of symmetric ( mixed ) qubit states .we show that with use of the no - signaling principle , conditional probability in qsd for the symmetric set always has the form for any ( symmetric ) figure of merit . here are constants and is determined by the bloch vectors of the prepared qubit and the guessed qubit .we also bound the conditional probability with use of the no - signaling principle again .we observe that the function giving the maximal score is realized by a simple symmetric measurement , which is just the optimal measurement in normal qsd .thus the measurement becomes an optimal measurement in qsd with any monotonous figure of merit .let us give a description of qsd . 
in this paper, denotes the situation in which different quantum states are generated with probability by alice , where . is a conditional probability that an output is given for an input by certain optimal measurements of bob . herethe guessing probability denotes the maximal probability of correct guessing : where maximization is done over measurements . in the normal qsd ,the guessing probability is maximized ( or , equivalently , error probability is minimized ) .however , we can generalize qsd by introducing a figure of merit , a function .the average score is maximized : in this paper we consider only symmetric figures of merit .that is , we assume that and ( mod ) for each and .the normal qsd corresponds to a case where .here when and when .then let us describe bloch representation in which any qubit state can be expressed as here , and , where are pauli operators . for pure states and for mixed states .an even number of symmetric qubit states to be discriminated can be parameterized as ( see figs . 1 and 2 ) , denotes which is bloch vector of qubits to be discriminated . s are symmetrically oriented.,width=340 ] is projected on xy plane to be .,width=340 ] here where .we consider only the symmetric case where are all equal , and .let us consider a communication scenario in which qsd is incorporated ._ proposition 1_. if two different decompositions of states corresponding to the same density operator can be discriminated , superluminal communication can be achieved between two parties sharing appropriate entangled states : assume that alice and bob are sharing an ensemble of entangled states .in this case bob s reduced density operator is fixed . consider two different decompositions of states and corresponding to the .( here and . )that is , . by the gisin - hughston - jozsa - wootters theorem ,however , alice can generate any decomposition she wants by performing an appropriate measurement .that is , there exists a measurement ( the other measurement ) such that if alice performs ( ) on her quantum states then a decomposition ( ) is generated at bob s site .they can communicate as follows .if alice wants to send a bit ( ) , she performs ( ) . now let us consider a decomposition , that corresponds to a density operator . here and ( see figs . 1 and 2 ) .by a simple geometric argument , we can find a such that an angle between and in the projected plane is .now we get by eqs .( [ a ] ) and ( [ d ] ) , clearly we have on the other hand , suppose we have a device for qsd , a `` state discriminator '' . here for a moment we neither specify a figure of merit nor require that the state discriminator is an optimal one . 
however , we assume that the state discriminator has symmetry , and ( mod ) .it is not that all state discriminators should be symmetric even if the input states has symmetry as in our case .consider a ( useless ) state discriminator that always gives a fixed output for any input , which is clearly not symmetric .however , as we show below , for any asymmetric state discriminator there exists a symmetric one whose score is the same as the score of the asymmetric one .hence it is sufficient for us to consider only symmetric one because our goal is to find optimal state discriminator .now let us show that , for any asymmetric state discriminator , there exists a symmetric state discriminator whose score is the same as the one by the asymmetric one .let us denote conditional probabilities of the asymmetric one by , which gives a score let us consider a symmetric one with we can see that score by is the same as score by : here symmetry of the figure of merits is used to get second equality .the state discriminator we consider is a `` black box , '' which can include quantum measurement device and anything else helpful for the task . for a certain input state ,the state discriminator will give an output as the optimal guess .now let us consider the left - hand and right - hand sides of eq .( [ e ] ) as the two different decompositions in the context of proposition 1 . by the no - signaling principle and proposition 1, the two decompositions can not be discriminated by any means .therefore the two decompositions can not be discriminated by the state discriminator . for an output 0 , this implies by linearity of quantum mechanics .here and are constants and thus they can be set to be and , respectively . by symmetry , we have . from eqs .( [ d-2 ] ) and ( [ f ] ) , by setting and , we have . by symmetry , we have , for each and , note that a state discriminator with \{ } and another with \{ } can be interconverted to each other by simply re - labeling the output , , so they are equivalent to each other .therefore , we confine ourselves to the case without loss of generality .we can see that as becomes larger , the function becomes narrower . here is the minimal one between and .the case when and gives maximal information .conditional probability of this case also gives maximal score . in the case of pure states ,the optimal conditional probability is achieved by the uniform measurement , which thus becomes an optimal measurement . in our case , however , we can expect that the conditional probability would have a more broad form because the states to be discriminated are mixed states and thus more nonorthogonal . before we discuss how to get the more broad one ,let us refresh qsd by the no - signaling principle .here what we need to do first is to construct a set of states such that for all s and _ proposition-2 _ . from the no - signaling principle , the guessing probability in any state discriminatormust be bounded as assume that alice and bob are sharing an ensemble of an entangled state for which . by the theorem of gisin - hughston - jozsa - wootters , alice can generate any decomposition , that she wants at bob s site .now let be the conditional probability that an output is given by a state discriminator when is generated .note that the state discriminator gives a certain output even when is input .hence we have . by the no - signaling principle, we have for all . herethus we have . by combining eq .( [ k ] ) and the above equations , we have . 
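Before the bound of proposition 2 is applied below, the cosine form obtained above for the conditional probability can be checked numerically. The short Python/numpy sketch builds an even number of symmetric mixed qubit states whose Bloch vectors share a common z-component and whose projections on the xy plane are rotated by 2*pi/M from one another, applies the symmetric POVM discussed in the following paragraph (elements (1/M)(I + n_j . sigma) with n_j a unit vector in the xy plane), and verifies that p(j|i) equals a constant plus a cosine of the angle between the prepared and guessed states. The values M = 6, a = 0.6 and b = 0.5 are arbitrary illustrative choices.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_state(vec):
    """Qubit density operator with Bloch vector vec = (x, y, z), |vec| <= 1."""
    x, y, z = vec
    return 0.5 * (I2 + x * SX + y * SY + z * SZ)

M, a, b = 6, 0.6, 0.5            # number of states, xy-projection and z-component
phis = 2 * np.pi * np.arange(M) / M

# symmetric mixed qubit states: common z-component, xy projections rotated by 2*pi/M
rhos = [bloch_state((a * np.cos(p), a * np.sin(p), b)) for p in phis]

# symmetric POVM: elements (1/M)(I + n_j . sigma), n_j unit vectors in the xy plane
povm = [(I2 + np.cos(p) * SX + np.sin(p) * SY) / M for p in phis]
assert np.allclose(sum(povm), I2)                 # POVM elements sum to the identity

# conditional probabilities p(j|i) = Tr(rho_i E_j)
P = np.array([[np.trace(r @ E).real for E in povm] for r in rhos])

# check the cosine form p(j|i) = 1/M + (a/M) cos(2*pi*(i-j)/M)
i, j = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
assert np.allclose(P, (1 + a * np.cos(2 * np.pi * (i - j) / M)) / M)
```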
now let us derive bounds on the conditional probability by using proposition 2 . in our casein which all s are the same , all s are the same by eq .( [ k ] ) .the bound reduces to when .the in our case is calculated as .thus the bound is note that eq .( [ n ] ) is valid for any state discriminator .one may say that eq .( [ n ] ) is bound for normal qsd . however , the state discriminator made for any figure of merit can also work for normal qsd .now we can see that by some calculations including normalization for and , if the function in eq .( [ h ] ) becomes narrower than then the function violates the bound in eq .( [ n ] ) .hence the conditional probability can not be narrower than .namely , possible ranges are and .thus is the optimal conditional probability in our case .all results so far are valid for any figure of merit since we have not specified a figure of merit yet .remarkably , conditional probability has the same form in eq ( [ h ] ) for any figure of merit .now let us consider a specific figure of merit .we can say that almost ( useful ) figure of merit should have monotonicity : the figure of merit increases with decreasing .provided that the monotonicity is satisfied , the function gives the highest average score .now if we find a measurement that realize the function , the measurement becomes an optimal one .now let us discuss how the function is achieved .let us try the optimal measurement in the normal qsd for our states .the optimal measurement is a symmetric one whose positive operator - valued measure ( povm , ) is where is a unit vector in the plane . after direct calculation, we can see that the measurement gives the function .hence the measurement by eq .( [ p ] ) is an optimal measurement for a state discriminator with monotonous figure of merits , and the function is the corresponding optimal conditional probability .the result applies to any ( symmetric ) black box which gives an outcome for the input states with fixed conditional probability , regardless of whether the black - box is made for state discrimination .that is , conditional probability of any such black box must have the form in eq . ( [ h ] ) . in some cases like in the case of information gain , the score can not be written in the form of eq .( [ b ] ) .however , the result here is applicable . as discussed above, conditional probability has the form in eq .( [ h ] ) .a conditional probability within allowed range of , which maximize the score , is the optimal one . in the case of informationgain the optimal one should be one with maximal .we considered qsd with general ( symmetric ) figure of merit .we first solved the problem for an even number of symmetric qubits with use of the no - signaling principle . here both methods used for qsd and qse are combined to get the solutions .we showed that , with use of the no - signaling principle , conditional probability always has the form .we tightened the range for the conditional probability with use of the no - signaling principle again .remarkably , these results are valid for any ( symmetric ) figure of merit .the optimal conditional probability is achieved by a simple symmetric measurement .therefore , the measurement is the optimal one for our case .this study was supported by basic science research program through the national research foundation of korea ( nrf ) funded by the ministry of education , science and technology ( 2010 - 0007208 ) .this study was financially supported by woosuk university .m. a. nielsen and i. l. 
chuang , _ quantum computation and quantum information _ , ( cambridge university press , cambridge , 2000 ) .a. s. holevo , probl .* 10 * , 317 ( 1974 ) .h. p. yuen , r. s. kennedy , and m. lax , ieee trans . inf .theory * 21 * , 125 ( 1975 ) .a. chefles , contemporary phys . *41 * , 401 ( 2000 ) , and references therein .s. massar and s. popescu , phys .lett . * 74 * , 1259 ( 1995 ) .n. gisin , phys .a * 242 * , 1 ( 1998 ) .s. m. barnett and e. andersson , phys .a * 65 * , 044307 ( 2002 ) .hwang , phys .a * 71 * , 062315 ( 2005 ) .j. bae , j. w. lee , j. kim , and w .- y .hwang , phys .a * 78 * , 022335 ( 2008 ) .hwang and j. bae , j. math .51 * , 022202 ( 2010 ) .j. bae , w .- y .hwang , and y .- d .han , phys .. lett . * 107 * , 170403 ( 2011 ) . y .-han , j. bae , x .- b .wang , and w .- y .hwang , phys .a * 82 * , 062318 ( 2010 ) .n. herbert , found .* 12 * , 1171 ( 1982 ) .n. gisin , helv .acta * 62 * , 363 ( 1989 ) .l. p. hughston , r. jozsa , and w. k. wootters , phys .a * 183 * , 14 ( 1993 ) .r. tarrach and g. vidal , phys .60 * , 3339 ( 1999 ) .
we solve the problem of quantum state discrimination with `` general ( symmetric ) figures of merit '' for an even number of symmetric quantum bit ( qubit ) states , with use of the no - signaling principle . it turns out that the conditional probability has the same form for any figure of merit , and that the optimal measurement and the corresponding conditional probability are the same for any monotonous figure of merit .
increasing incidence of infectious diseases such as sars and the recent a(h1n1 ) pandemic influenza has led the scientific community to build models in order to understand epidemic spreading and to develop efficient strategies to protect society . since one of the goals of the health authorities is to minimize the economic impact of the health policies , many theoretical studies aim to establish how the strategies maintain the functionality of a society at the least economic cost . the simplest model that mimics diseases in which individuals acquire permanent immunity , such as influenza , is the pioneering susceptible - infected - recovered ( sir ) model . in this epidemiological model the individuals can be in one of three states : i ) susceptible , which corresponds to a healthy individual who has no immunity , ii ) infected , a non - healthy individual , and iii ) recovered , which corresponds to an individual who can no longer propagate the disease because he is immune or dead . in this model the infected individuals transmit the disease to the susceptible ones , and recover a certain time after they were infected . the process stops when the disease reaches the steady state , i.e. , when all infected individuals recover . it is known that , in this process , the final fraction of recovered individuals is the order parameter of a second order phase transition . the phase transition is governed by a control parameter which is the effective probability of infection or transmissibility of the disease . above a critical threshold , the disease becomes an epidemic , while below it the disease reaches only a small fraction of the population ( outbreaks ) . the first sir model , called the random mixing model , assumes that all contacts are possible , thus the infection can spread through all of them . however , in realistic epidemic processes individuals have contact only with a limited set of neighbors . as a consequence , in the last two decades the study of epidemic spreading has incorporated a contact network framework , in which nodes are the individuals and the links represent the interactions between them . this approach has been very successful not only in an epidemiological context but also in economics , sociology and informatics . it is well known that the topology of the network , that is , the diverse patterns of connections between individuals , plays an important role in many processes such as epidemic spreading . in particular , the degree distribution , which indicates the fraction of nodes with a given number of links ( or degree ) , is the most used characterization of the network topology . according to their degree distribution , networks are classified into i ) homogeneous , where the nodes ' connectivities are around the average degree , and ii ) heterogeneous , in which there are many nodes with small connectivities but also some nodes , called hubs or super - spreaders , with a huge number of connections . the most popular homogeneous network is the erdős - rényi ( er ) network , characterized by a poisson degree distribution .
on the other hand ,very heterogeneous networks are represented by scale - free ( sf ) distributions with , with , where represents the heterogeneity of the network .historically , processes on top of complex networks were focused on homogeneous networks since they are analytically tractable .however , different researches showed that real social , technological , biological networks , etc , are very heterogeneous .other works showed that the sir model , at its steady state , is related to link percolation . in percolation processes , links are occupied with probability . above a critical threshold ,a giant component ( gc ) emerges , which size is of the order of the system size ; while below there are only finite clusters .the relative size of the gc , , is the order parameter of a geometric second order phase transition at the critical threshold . using a generating function formalism , it was shown that the sir model in its steady state and link percolation belong to the same universality class and that the order parameter of the sir model can be exactly mapped with the order parameter of link percolation .for homogeneous networks the exponents of the transitions have mean field ( mf ) value , although for very heterogeneous network the exponents depend on .almost all the researches on epidemics were concentrated in studying the behavior of the infected individuals .however , an important issue is how the susceptible network behaves when a disease spreads .recently , valdez _ et . studied the behavior of the giant susceptible component ( gsc ) that is the functional network , since the gsc is the one that supports the economy of a society .they found that the susceptible network also overcomes a second order phase transition where the dilution of the gsc during the first epidemic spreading can be described as a `` node void percolation '' process , which belongs to the same universality class that intentional attack process with mf exponents .understanding the behavior of the susceptible individuals allows to find strategies to slow down the epidemic spread , protecting the healthy network .various strategies has been proposed to halt the epidemic spreading .for example , vaccination programs are very efficient in providing immunity to individuals , decreasing the final number of infected people . however , these strategies are usually very expensive and vaccines against new strains are not always available during the epidemic spreading . as a consequence ,non - pharmaceutical interventions are needed to protect the society .one of the most effective and studied strategies to halt an epidemic is quarantine but it has the disadvantage that full isolation has a negative impact on the economy of a region and is difficult to implement in a large population .therefore , other measures , such as social distancing strategies can be implemented in order to reduce the average contact time between individuals .these `` social distancing strategies '' that reduce the average contact time , usually include closing schools , cough etiquette , travel restrictions , etc .these measures may not prevent a pandemic , but could delay its spread . in this review ,we revisit two social distancing strategies named , `` social distancing induced by quenched disorder '' and `` intermittent social distancing '' ( isd ) strategy , which model the behavior of individuals who preserve their contacts during the disease spreading . 
in the former ,links are static but health authorities induce a disorder on the links by recommending people to decrease the duration of their contacts to control the epidemic spreading . in the latter , we consider intermittent connections where the susceptible individuals , using local information , break the links with their infected neighbors with probability during an interval after which they reestablish the connections with their previous contacts .we apply these strategies to the sir model and found that both models still maps with link percolation and that they may halt the epidemic spreading .finally , we show that the transmissibility does not govern the temporal evolution of the epidemic spreading , it still contains information about the velocity of the spreading .one of the most studied version of the sir model is the time continuous kermack - mckendrick formulation , where an infected individual transmits the disease to a susceptible neighbor at a rate and recovers at a rate .while this sir version has been widely studied in the epidemiology literature , it has the drawback to allow some individuals to recover almost instantly after being infected , which is a highly unrealistic situation since any disease has a characteristic recovering average time . in order to overcome this shortcoming, many studies use the discrete reed - frost model , where an infected individual transmits the disease to a susceptible neighbor with probability and recovers time units after he was infected . in this model ,the transmissibility that represents the overall probability at which an individual infects one susceptible neighbor before recover , is given by it is known that the order parameter , which is the final fraction of recovered individuals , overcomes a second order phase transition at a critical threshold , which depends on the network structure .one of the most important features of the reed - frost model ( that we will hereon call sir model ) is that it can be mapped into a link percolation process , which means that is possible to study an epidemiological model using statistical physic tools .heuristically , the relation between sir and link percolation holds because the effective probability that a link is traversed by the disease , is equivalent in a link percolation process to the occupancy probability . as a consequence ,both process have the same threshold and belong to the same universality class .moreover , each realization of the sir model corresponds to a single cluster of link percolation .this feature is particularly relevant for the mapping between the order parameters of link percolation and for epidemics , as we will explain below . for the simulations , in the initial stage all the individuals are in the susceptible state .we choose a node at random from the network and infect it ( patient zero ) .then , the spreading process goes as follows : after all infected individuals try to infect their susceptible neighbor with a probability , and those individuals that has been infected for time steps recover , the time increases in one .the spreading process ends when the last infected individual recovers ( steady state ) . in a sir realization ,only one infected cluster emerges for any value of .in contrast , in a percolation process , for many clusters with a cluster size distribution are generated . 
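As a concrete illustration of the dynamics just described, the following sketch simulates the discrete Reed–Frost rules on a network: a randomly chosen patient zero, infection of susceptible neighbours with probability beta at each step, and recovery t_r steps after infection. It is a minimal reimplementation rather than the authors' code; the ER substrate and the values of beta and t_r are illustrative, and the printed transmissibility uses the relation T = 1 - (1 - beta)^{t_r} quoted above.

```python
# Sketch of the discrete Reed-Frost SIR dynamics described in the text.
# The ER network, beta and t_r are illustrative choices.
import random
import networkx as nx

def reed_frost_sir(G, beta, t_r, rng):
    state = {n: "S" for n in G.nodes()}
    clock = {}                                    # time steps since infection
    patient_zero = rng.choice(list(G.nodes()))
    state[patient_zero] = "I"
    clock[patient_zero] = 0
    while any(s == "I" for s in state.values()):
        infected_now = [n for n, s in state.items() if s == "I"]
        new_cases = set()
        for n in infected_now:
            # infected node tries to infect each susceptible neighbour
            for nb in G.neighbors(n):
                if state[nb] == "S" and rng.random() < beta:
                    new_cases.add(nb)
            clock[n] += 1
            if clock[n] == t_r:                   # recovers t_r steps after infection
                state[n] = "R"
        for n in new_cases:
            state[n] = "I"
            clock[n] = 0
    return sum(s == "R" for s in state.values()) / G.number_of_nodes()

G = nx.gnp_random_graph(5000, 4.0 / 4999, seed=2)
beta, t_r = 0.15, 5
T = 1 - (1 - beta) ** t_r                         # overall transmissibility
rng = random.Random(7)
final = [reed_frost_sir(G, beta, t_r, rng) for _ in range(50)]
print(f"T = {T:.3f}; mean recovered fraction over 50 runs = {sum(final)/len(final):.3f}")
```

Note that the average above mixes outbreaks and epidemics; the cutoff procedure discussed next is what separates the two before comparing with percolation quantities.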
therefore we must use a criteria to distinguish between epidemics ( gc in percolation ) and outbreaks ( finite clusters ) .the cluster size distribution over many realizations of the sir process , close but above criticality , has a gap between small clusters ( outbreaks ) and big clusters ( epidemics ) .thus , defining a cutoff in the cluster size as the minimum value before the gap interval , all the diseases below are considered as outbreaks and the rest as epidemics ( see fig .[ sc11]a ) .note that will depend on .then , averaging only those sir realizations whose size exceeds the cutoff , we found that the fraction of recovered individuals maps exactly with ( see fig .[ sc11]b ) . for our simulations, we use for . vbmbfig01.eps ( 80,50)*(a ) * vbmbfig02.eps ( 85,50)*(b ) * it can be shown that using the appropriate cutoff , close to criticality , all the exponents that characterizes the transition are the same for both processes .thus , above but close to criticality with the exponent of the finite cluster size distribution in percolation close to criticality is given by for the sir model and for a branching process ( see sec . [ mathperc ] ) , there is only one `` epidemic '' cluster , thus near criticality the probability of a cluster of size , , has exponent , where is given by eq .( [ eq.tau ] ) ( see fig .[ sc11]a ) . for sf networks with , in the thermodynamic limit, the critical threshold is zero , and there is not percolation phase transition . on the other hand ,for and er networks , all the exponents take the mean field ( mf ) values .given a network with a degree distribution , the probability to reach a node with a degree by following a randomly chosen link on the graph , is equal to , where is the average degree .this is because the probability of reaching a given node by following a randomly chosen link is proportional to the number of links of that node and is needed for normalization .note that , if we arrive to a node with degree following a random chosen link , the total number of outgoing links or branches of that node is .therefore , the probability to arrive at a node with outgoing branches by following a randomly chosen link is also .this probability is called excess degree probability . in order to obtain the critical threshold of link percolation , let us consider a randomly chosen and occupied link .we want to compute the probability that through this link an infinite cluster _ can not _ be reached . for simplicity , we assume to have a cayley tree . herewe will denote a cayley tree as a _single _ tree with a given degree distribution .notice that link percolation can be thought as many realizations of cayley tree with occupancy probability , which give rise to many clusters . by simplicitywe first consider a cayley tree as a deterministic graph with a fixed number of links per node . assuming that , the probability that starting from an occupied link we can not reach the shell through a path composed by occupied links , is given by ^ 2.\ ] ] here, the exponent takes into account the number of outgoing links or branches , and is the probability that one outgoing link is not occupied plus the probability that the link is occupied ( , at least one shell is reached ) but it can not lead to the following shell . 
in the case of a cayley tree with a degree distribution , we must incorporate the excess degree factor which accounts for the probability that the node under consideration has outgoing links and sum up over all possible values of .therefore , the probability to _ not reach _ the generation can be obtained by applying a recursion relation ^{k-1},\\ & = & g_{1}[(1-p)+pq_{n-1}(p)],\end{aligned}\ ] ] where is the generating function of the excess degree distribution .as increases , and the probability that we can not reach an infinite cluster is .\end{aligned}\ ] ] thus , the probability that the starting link connects to an infinite cluster is . from eq ( [ qinf ] ), is given by .\end{aligned}\ ] ] ) .the straight line represents the left hand side of the equation .the dot - dashed line represents the right hand side ( r.h.s ) for , where the r.h.s .is tangential to at the origin .the dashed curve represents the r.h.s . for .the vertical arrows indicate the points at which the identity function intersects with .both cases are computed for the poisson degree distribution with .[ f : iterations ] ] the solution of equation can be geometrically understood in fig .[ f : iterations ] as the intersection of the identity line and , which has at least one solution at the origin , , for any value of .but if the derivative of the right hand side of eq .( [ finfeq ] ) with respect to , '\vert_{x=0}=pg_1'(1)>1 ] .thus the fraction of nodes that belong to the gc is ^k ] and thus . for pure sf networks , with ,the generating function of the excess degree distribution is proportional to the poly - logarithm function , where is the riemann function . in the current literature ,the epidemic spreading is usually described in terms of compartmental quantities , such as the fraction of infected or susceptible individuals during an epidemic , and very little has been done to describe how the disease affects the topology of the susceptible network that can be considered as the functional network . in the following section ,we explain how an epidemic affects the structure of the functional network in the steady state .we define `` active '' links as those links pairing infected and susceptible individuals . during the epidemic spreading ,the disease is transmitted across active links , leading in the steady state to a cluster composed by recovered individuals and clusters of susceptible individuals .alternatively , the growing process of the infected cluster can also be described as a dilution process from the susceptible point of view . under this approach , as the `` infectious '' cluster grows from a root , the sizes of the void clusters , those clusters composed by susceptible individuals , are reduced as in a node dilution process , since when a link is traversed a void cluster loses a node and all its edges. however , the susceptible nodes are not randomly uniform reached by the disease because they are chosen following a link . as a consequencehigher degree nodes are more likely to be reached than the ones with small degrees .we will call `` node void percolation '' to this kind of percolation process in which the void nodes are not removed at random . in this dilution process, there exists a second critical value of the transmissibility ( with ) , above which the giant susceptible component ( gsc ) is destroyed . 
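The self-consistent construction above can be turned into a short numerical routine. Assuming an Erdős–Rényi (Poisson) network, for which G0(x) = G1(x) = exp(<k>(x-1)), the sketch below iterates f = 1 - G1(1 - p f) to a fixed point and evaluates the giant-component fraction P_inf = 1 - G0(1 - p f); the mean degree is an illustrative choice, and other degree distributions can be handled by swapping in the corresponding generating functions.

```python
# Sketch: fixed-point solution of f = 1 - G1(1 - p f) and the giant-component
# fraction P_inf = 1 - G0(1 - p f) for a Poisson (ER) network. <k> is illustrative.
import numpy as np

mean_k = 4.0
G0 = G1 = lambda x: np.exp(mean_k * (x - 1.0))   # Poisson generating functions

def giant_component_fraction(p, tol=1e-12, max_iter=10_000):
    f = 1.0                                      # start from the fully-connected guess
    for _ in range(max_iter):
        f_new = 1.0 - G1(1.0 - p * f)
        if abs(f_new - f) < tol:
            break
        f = f_new
    return 1.0 - G0(1.0 - p * f)

p_c = 1.0 / mean_k                               # p_c = 1 / G1'(1) for Poisson
print(f"critical occupation probability p_c = {p_c:.3f}")
for p in np.linspace(0.1, 1.0, 10):
    print(f"p = {p:.2f}   P_inf = {giant_component_fraction(p):.4f}")
```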
similarly to link percolation , in a cayley tree ( branching process ) the analytical treatment for the dilution of the susceptible network uses a generating function formalism , that allows to compute the existence of a gsc and its critical threshold . considering the same growing infected cluster process as in the previous section , for large generations can also be interpreted as the probability that starting from a random chosen link , a path or branch leads to the gc .thus , if we can not reach a gc through a link , as we have a single tree , that link leads to a void node .thus the probability to reach a void node through a link is given by \end{aligned}\ ] ] which is also the probability to reach a susceptible individual by following a link at a given transmissibility .it was shown that is a fundamental observable to describe the temporal evolution of an epidemic .as in the usual percolation process , there is a critical threshold at which the susceptible network undergoes a second order phase transition . above gsc exists while at and below susceptible individuals belong only to finite components . as a consequence , the transmissibility needed to reach this point fulfills .\ ] ]therefore , from eq ( [ condition_p ] ) we obtain the self consistent equation ,\ ] ] where is the solution of eq .( [ pestrellafinal ] ) and is given by ] . the solid line that corresponds to that is for an er network with , separates the epidemic phase from the epidemic free phase region shown in dark gray .the dashed line shows that is below which a giant component of susceptible emerges .the light gray region is the phase in which the gsc and the giant recovered cluster coexists . ] in this strategy , there are no restrictions on which individual to get away from . another strategy could be to advise people to cut completely their connection with their infected contacts ( when possible ) for a given period of time .this kind of strategy will be analyzed in the next section .in the previous strategy , individuals set a quenched disorder on the intensity of the interaction with their neighbors in order to protect themselves from the epidemic spreading .an alternative strategy consists of susceptible individuals that inactivate the interactions with their infected neighbors , but reestablish their contacts after some fixed time .this strategy that we call intermittent social distancing ( isd ) strategy mimics a behavioral adaptation of the society to avoid contacts with infected individuals for a time interval , but without losing them permanently .this is an example of adaptive network where the topology coevolves with the dynamical process . specifically , we study an intermittent social distancing strategy ( isd ) in which susceptible individuals , in order to decrease the probability of infection , break ( or inactivate ) with probability their links with infected neighbors for intermittent periods of length .we closely follow the presentation of this model from ref .assuming that the disease spreads with probability through the active links and that the infected individuals recovers after time steps , at each time step the infected individual tries first to transmit the disease to his susceptible neighbors , and then if he fails , susceptible individuals break their links with probability for a period .these dynamic rules generate an intermittent connectivity between susceptible and infected individuals that may halt the disease spreading . 
in the limit case of ,the isd strategy is equivalent to a permanent disconnection , because when the link is restored the infected neighbor is recovered ( or dead ) and can not transmit the disease anymore . in order to compute the transmissibility for this strategy, we first introduce the case and then we generalize for any value of . for the case , let consider that an active link appears and denote the first time step of its existence as . at this time step , the active link tries to transmit the disease with probability , if it fails that link will be broken for the next time steps . after restoring that active link ,the process is periodically repeated with period , until the disease is transmitted or the infected individual recovers . on the other hand ,the time steps at which the link is active are located at times where is an integer number defined in the interval ] is the maximum number of disconnection periods that leaves at the end at least one time step to transmit the disease . in particular ,the probability to transmit the disease at the next time after disconnection periods is given by . then summing over all possible values of , the total transmissibility is given by }(1-\beta)^{u}\right),\nonumber\\ & = & 1-\left(1-\beta\right)^{\left[\frac{t_r-1}{t_b+1}\right]+1}.\end{aligned}\ ] ] for the case , first consider the example with only one disconnection period ( ) , , and the infectious transmission at the time step , that is illustrated in the first line of table [ table1 ] .note that in this case , there are only time units at which the link is active .then , for this example the transmissibility is proportional to four factors : i ) since there are active time steps at which the infected individual can not transmit the disease , and at the last time unit the disease is transmitted , ii ) , because the link is broken one time , iii ) , because during active time steps the infected individual does not break the link except just before each inactive period and the last day , and iv ) that is the total number of configurations in which we can arrange one inactive period in a period of length ( this factor only takes into account the first time units , because the disease is transmitted at time . see the first line of table [ table1 ] ) . in the general case ,for all the values , the disease spreads with a total transmissibility given by , in the first term of eq .( [ ec.trans ] ) , is the probability that an active link is lost due to the infection of the susceptible individual at time step given that the active link has never been broken in the steps since it appears . in the second term of eq .( [ ec.trans ] ) , denotes the probability that an active link is lost due to the infection of the susceptible individual at time given that the link was broken at least once in the first time units .the probability , which is only valid for is given by }\binom{m - u\;t_{b}-1}{u}\sigma^{u}\times \nonumber\\ & & ( 1-\sigma)^{m-1-u(t_{b}+1)}(1-\beta)^{m-1-u\;t_{b}},\end{aligned}\ ] ] where ] , which for er networks . near the critical threshold of the susceptible network , the values of and from eq .( [ eqs112 ] ) are near to , in which we can approximate the function as a parabola .thus , where and are constants . doing some algebra on eq .( [ eqs112 ] ) around , we obtain , is in the middle between and . 
rewriting and as and , with , then near criticality , eq .( [ eqs113 ] ) can be approximated by on the other hand , near criticality we have that therefore , using the relations ( [ plaw1 ] ) and ( [ plaw2 ] ) , we obtain with , that is a mf exponent . note that we have not made any assumption on the form of or .thus , this result is valid for homogeneous and heterogeneous networks .m. bogu , r. pastor - satorras and a. vespignani , epidemic spreading in complex networks with degree correlations , in _ statistical mechanics of complex networks _ , eds .r. pastor - satorras , m. rubi and a. diaz - guilera , lecture notes in physics , vol .625 ( springer berlin heidelberg , 2003 ) pp . 127147 .
Recurrent infectious diseases and their increasing impact on society have promoted the study of strategies to slow down epidemic spreading. In this review we outline the applications of percolation theory to describe strategies against epidemic spreading on complex networks. We give a general outlook of the relation between link percolation and the susceptible-infected-recovered model, and introduce the node void percolation process to describe the dilution of the network composed of healthy individuals, i.e., the network that sustains the functionality of a society. Then we survey two strategies: the quenched disorder strategy, in which a heterogeneous distribution of contact intensities is induced in the society, and the intermittent social distancing strategy, in which healthy individuals are persuaded to avoid contact with their neighbors for intermittent periods of time. Using percolation tools, we show that both strategies may halt the epidemic spreading. Finally, we discuss the role of the transmissibility, i.e., the effective probability of transmitting a disease, in the performance of the strategies to slow down the epidemic spreading.
the residuals carry important information concerning the appropriateness of assumptions that underlie statistical models , and thereby play an important role in checking model adequacy .they are used to identify discrepancies between models and data , so it is natural to base residuals on the contributions made by individual observations to measures of model fit .the use of residuals for assessing the adequacy of fitted regression models is nowadays commonplace due to the widespread availability of statistical software , many of which are capable of displaying residuals and diagnostic plots , at least for the more commonly used models . beyond special models , relatively little is known about asymptotic properties of residuals in general regression models .there is a clear need to study second - order asymptotic properties of appropriate residuals to be used for diagnostic purposes in nonlinear regression models .the unified theory of generalized linear models ( glms ) , including a general algorithm for computing the maximum likelihood estimates ( mles ) is extremely important for analysis of real data . in these models ,the random variables are assumed independent and each has a density function in the linear exponential family ,\ ] ] where and are known appropriate functions .we assume continuous and a probability density function with respect to lebesgue measure and that the precision parameter , is the so - called dispersion parameter , is the same for all observations , although possibly unknown .we do not consider the discrete distributions in the form ( ) such as poisson , binomial and negative binomial . for two - parameter full exponential family distributions with canonical parameters and , the decomposition holds. the mean and variance of are , respectively , and , where is the variance function . for gamma models ,the dispersion parameter is the reciprocal of the index , whereas for normal and inverse gaussian models , is the variance and , respectively .the parameter is a known one - to - one function of .a linear exponential family is characterized by its variance function , which plays a key role in estimation .a glm is defined by the family of distributions ( [ exp ] ) and the systematic component , where is a known one - to - one continuously twice - differentiable function , is a specified model matrix of full rank and is a set of unknown linear parameters to be estimated .let be the mle of .residuals in glms were first discussed by pregibon ( 1981 ) , though ostensibly concerned with logistic regression models , williams ( 1984 , 1987 ) and pierce and schafer ( 1986 ) .mccullagh and nelder ( 1989 ) provided a survey of glms with substantial attention to definition of residuals .pearson residuals are the most commonly used measures of overall fit for glms and are defined by , where and are respectively the fitted mean and fitted variance function of . in this paper we consider only pearson residuals appropriate to our particular asymptotic aims when the sample size . cordeiro ( 2004 ) obtained matrix formulae for the expectations , variances and covariances of these residuals and defined adjusted pearson residuals having zero mean and unit variance to order .pearson residuals defined by cordeiro ( 2004 ) are proportional to , although we are considering here as usual without the precision parameter . 
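To make the definition concrete, the following self-contained sketch (not the authors' code) simulates data from a gamma model with log link, fits it by iteratively reweighted least squares, and computes the Pearson residuals (y_i - mu_i)/sqrt(V(mu_i)) with V(mu) = mu^2 for the gamma family; the sample size, covariates and coefficients are purely illustrative.

```python
# Self-contained sketch: gamma GLM with log link fitted by IRLS, followed by
# Pearson residuals (y - mu)/sqrt(V(mu)), V(mu) = mu^2. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, shape = 30, 4.0                        # gamma shape = 1/phi (the index)
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
beta_true = np.array([0.5, 1.0])
mu_true = np.exp(X @ beta_true)
y = rng.gamma(shape, mu_true / shape)     # E[y] = mu_true

beta = np.zeros(X.shape[1])
for _ in range(50):                       # IRLS: for the log link + gamma, W = 1
    eta = X @ beta
    mu = np.exp(eta)
    z = eta + (y - mu) / mu               # working response
    beta_new, *_ = np.linalg.lstsq(X, z, rcond=None)
    if np.max(np.abs(beta_new - beta)) < 1e-10:
        beta = beta_new
        break
    beta = beta_new

mu_hat = np.exp(X @ beta)
pearson = (y - mu_hat) / mu_hat           # (y - mu)/sqrt(V(mu)) with V(mu) = mu^2
print("beta_hat =", beta)
print("first Pearson residuals:", np.round(pearson[:5], 3))
```

These uncorrected residuals are the starting point; the corrections discussed in the remainder of the paper adjust their distribution to order n^{-1}.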
while cordeiro s adjusted pearson residuals do correct the residuals for equal mean and variance , the distribution of these residuals is not equal to the distribution of the true pearson residuals to order .further , cordeiro and paula ( 1989 ) introduced the class of exponential family nonlinear models ( efnlms ) which extend the glms .later , wei ( 1998 ) gave a comprehensive introduction to these models .recently , simas and cordeiro ( 2008 ) generalized cordeiro s ( 2004 ) results by obtaining matrix formulae of the expectations , variances and covariances of pearson residuals in efnlms . in a general setup ,the distribution of residuals usually differ from the distribution of the true residuals by terms of order .cox and snell ( 1968 ) discussed a general definition of residuals , applicable to a wide range of models , and obtained useful expressions to this order for their first two moments .loynes ( 1969 ) derived , under some regularity conditions , and again to order , the asymptotic expansion for the density function of cox and snell s residuals , and then defined corrected residuals having the same distribution as the random variables which they are effectively estimating . in all but the simplest situations , the use of the results by cox and snell and loynes will require a considerable amount of tedious algebra .our chief goal is to obtain an explicit formula for the density of pearson residuals to order which holds for all continuous glms . in section 2we give a summary of key results from loynes ( 1969 ) applied to pearson residuals in glms .the density of pearson residuals in these models corrected to order is presented in section 3 .we provide in section 4 applications to some common models . in section 5we compare the corrected residuals with the adjusted residuals proposed by cordeiro ( 2004 ) .we present in section 6 simulation studies to assess the adequacy of the approximations for a gamma model with log link .some concluding remarks are given in section 7 .finally , in the appendix , we give a more rigorous proof of the general results discussed by loynes ( 1969 ) .the contribution for the score function from the observation follows from ( [ exp ] ) where is the weight function and from now on the dashes indicate derivatives with respect to .let be the true pearson residual corresponding to the pearson residual .suppose we write the pearson residual as .we can write the following conditional moments given to order ( loynes , 1969 ) where is the element of the inverse information matrix for , is the bias of and is the conditioned score function . the mean and variance of the asymptotic distribution of , given , are to order where , , and .we obtain by simple differentiation and conditioning on leads to and where and for canonical models ( ) , ( [ efunction ] ) and ( [ hfunction ] ) become conditioning the score function on , yields , and then using ( ) we find where , is the diagonal matrix of weights , is a -vector with the element equal to one and all other elements equal to zero and is an -vector with one in the position and zeros elsewhere . 
defining , we can easily verify that cordeiro and mccullagh ( 1991 ) showed that the bias of is given by where , , is a diagonal matrix with the diagonal elements of and is an -vector of ones .the asymptotic covariance matrix of the mle of the linear predictor is simply .we obtain where is the element of the bias of .the bias expression depends on the model matrix , the variance function and the first two derivatives of the link function .also , the conditional mean from ( [ meancond ] ) is then a second - degree polynomial in given by where and are obtained from ( ) and ( ) .we now compute the conditional variance . from ( )it follows hence , is also a second - degree polynomial in .a simple calculation from ( ) gives the probability density function ( pdf ) of the true pearson residual ,\ ] ] where .table 1 gives the densities of the true residuals for the normal , gamma and inverse gaussian distributions , where is the gamma function ..densities of the true residuals for some distributions . [ cols="^,^,^",options="header " , ] we conclude the study providing an application of the corrected residuals to assess the adequacy of the above gamma model .we could expect that under a well - specified model , the distribution of the corrected residuals will follow approximately the distribution of the true residuals .however , even though it is common to compare the distribution of the pearson residuals with the normal distribution , it is not clear that this approximation should be good in small samples .therefore , we compare the empirical distribution of the corrected residuals with the distribution of the true residuals and the distribution of the uncorrected residuals with the normal distribution . for doing this ,we use a qqplot which displays a quantile - quantile plot of the sample quantiles of the corrected and uncorrected residuals versus theoretical quantiles from the estimated distribution of the true residuals and the normal distribution with mean zero and variance , respectively .if the distribution of the corrected residuals is well approximated by the distribution of the true residuals , the plot will be close to linear .therefore , we expect that a qqplot of the studentized corrected residuals versus the estimated distribution of the true residuals should be closer to the diagonal line than that qqplot of the uncorrected residuals against the normal distribution .moreover , we also consider the qqplot of the adjusted residuals suggested by cordeiro ( 2004 ) against the theoretical quantiles of a standard normal distribution .figure 1 gives two qqplots , one for the vector of the ordered uncorrected residuals and other for the vector of the ordered corrected residuals .these figures show that even for a well - specified model , the plot for the uncorrected residuals is very distant from the diagonal line when compared with the plot for the corrected residuals .the adjusted residuals given in figure 2 provides an improvement in regard to the uncorrected residuals , but the plot is also distant from the diagonal line when compared to the corrected residuals .therefore , the corrected residuals have a good behavior that leads to the right conclusion , i.e. 
, that the model is well - specified .we thus recommend the corrected residuals to build up qqplots .using the results given in loynes ( 1969 ) , we calculate the distribution of the pearson residuals in glms ( see , for instance , mccullagh and nelder , 1989 ) .it is important to mention that the distribution of residuals in regression models are typically unknown , and therefore all inference regarding these residuals are done by asymptotic assumptions which may not hold in small or moderate sample sizes .then we can use this knowledge to define corrected pearson residuals in these models in such a way that the corrected residuals will have , to order , the same distribution of the true pearson residuals , which is known .the corrected residuals have practical applicability for all continuous glms .we simulate a gamma model with log link to conclude the superiority of the corrected pearson residuals over the uncorrected residuals and also over the adjusted residuals suggested by cordeiro ( 2004 ) with regard to the approximation to the reference distribution , which for the corrected and uncorrected residuals was the distribution of the true residuals and for the adjusted residuals was the standard normal distribution .the paper is concluded with an application of the corrected residuals to assess the adequacy of the model .suppose we write the residual in terms of the true residual as , where and are absolutely continuous random variables with respect to lebesgue measure and is of order .our goal is to define a corrected residual having the same density of to order .initially , we have expanding in a taylor series around gives let and .thus , where is the density function of . by using formulae ( 25 ) and ( 26 ) from cox and snell ( 1968 ) with , it is possible to conclude that and var ( and thus ) are of order and , in the same way , that the higher moments of are of order . 
in a similar manner , we can show that and var are also of order , and that the higher - order conditional moments are of order .then , we can rewrite equation ( [ expect ] ) as note that we can express the integral on the right side of ( [ expect2 ] ) as a sum of three integrals .then , integration by parts , one time for the integral containing on the integrand and two times for the integral containing on the integrand , yields the following formula + o(n^{-1}).\ ] ] the uniqueness theorem for characteristic functions yields the density of to order equation ( ) is identical to formula ( 5 ) in loynes ( 1969 ) .further , we now define corrected residuals of the form where is a function of order used to recover the distribution of .we may proceed as above , noting that , to obtain the density of to order since the quantities and are all of order , we have that to this order .therefore , the densities of and will be the same to order if integration gives equation ( [ eqloynes ] ) is identical to equation ( 6 ) given by loynes ( 1969 ) and it is clear from the proof that the support of does not need to be the entire line and we can have proper intervals as support .we should note that the assumptions needed can be made weaker if we require that an expansion of the taylor polynomial of order two with a remainder term ( for instance , lagrange remainder ) can be done instead of the complete series .we could also prove loynes ( 1969 ) results by using the equivalence of ( 3c ) and ( 4c ) , together with ( 5 ) and ( 6 ) of cox and reid ( 1987 ) and appropriate regularity conditions .the idea to this approach is as follows : consider in equation ( 3c ) of cox and reid ( 1987 ) , and .this means that we are writing as , where and are of orders and , respectively .then , from ( 4c ) , ( 5 ) and ( 6 ) of cox and reid ( 1987 ) , we can write de cdf of as where and are the cdf and pdf of , respectively . the expression above implies equation ( [ densloynes ] ) . we can also obtain the expansion for from the equivalence of ( 3c ) and ( 4c ) of cox and reid ( 1987 ) by setting and . the rest of the proof is identical to the one given before .note also , that for this proof does not need to have a support in the entire line since this is not an assumption in the usual regularity conditions .20 anderson , t. w. and darling , d. a. , 1952 .asymptotic theory of certain `` goodness - of - fit '' criteria based on stochastic processes ._ annals of mathematical statistics _ , 23 , 193 - 212 .cordeiro , g. m. , 2004 . on pearson s residuals in generalized linear models . _ statistics and probability letters _ , 66 , 213 - 219 .cordeiro , g. m. and ferrari , s. l. p. , 1998 . a note on bartlett - type correction for the first few moments of test statistics . _ journal of statistical planning and inference _ , 71 , 261 - 269 .cordeiro , g. m. and ferrari , s. l. p. , 1998 .generalized bartlett corrections .theory and methods _, 27 , 509 - 527 .cordeiro , g. m. and mccullagh , p. , 1991 .bias correction in generalized linear models . _b _ , 53 , 629 - 643 .cordeiro , g.m . and paula , g.a . , 1989 .improved likelihood ratio statistic for exponential family nonlinear models ._ biometrika _ , 76 , 93 - 100 .cox , d. r. and reid , n. , 1987 .approximations to noncentral distributions . _ the canadian journal of statistics _ , 15 , 105 - 114 .cox , d. r. and snell , e. j. , 1968 . a general definition of residuals ( with discussion ) .b _ , 30 , 248 - 275 .loynes , r. m. , 1969 . 
on cox and snell s general definition of residuals .b _ , 31 , 103 - 106 .mccullagh , p. and nelder , j. a. , 1989 ._ generalized linear models _ , chapman and hall , london .pierce , d. a. and schafer , d. w. , 1986 .residuals in generalized linear models ._ j. amer .assoc . , _ 81 , 977 - 986 .pregibon , d. , 1981 . logistic regression diagnostics ., _ 9 , 705 - 724 .simas , a.b . and cordeiro , g.m . , 2008adjusted pearson residuals in exponential family nonlinear models ._ j. statist ._ , to appear .thode jr . , h. c. 2002 ._ testing for normality ._ new york : marcel dekker .wei , b - c . , 1998 ._ exponential family nonlinear models _ ,singapore : springer .williams , d. a. , 1984 .residuals in generalized linear models .12th international biometrics conference , tokyo , _ 59 - 68 .williams , d. a. , 1987 .generalized linear model diagnostics using the deviance and single case deletions .statist . , _ 36 , 181 - 191 .
In general, the distribution of residuals cannot be obtained explicitly. We give an asymptotic formula, correct to order $n^{-1}$ (where $n$ is the sample size), for the density of Pearson residuals in continuous generalized linear models. We define corrected Pearson residuals for these models that, to this order of approximation, have exactly the same distribution as the true Pearson residuals. Applications to important generalized linear models are provided, and simulation results for a gamma model illustrate the usefulness of the corrected Pearson residuals.
_Keywords:_ exponential family; generalized linear model; Pearson residual; precision parameter
the free - energy landscape of protein molecules represents the key - information for understanding processes of biomolecular self - organization such as folding . the free - energy landscape , indeed , determines all observable properties of the folding process , ranging from protein stability to folding rates . unfortunately , for real proteins , sophisticated all - atom computational methods fail to characterize the free - energy surface , since they are currently limited to explore only few stages of the folding process . as an alternative, one can argue that , taking into account all the complex details of chemical interactions is not necessary to understand how proteins fold into their native state .rather , elementary models incorporating the fundamental physics of folding , while still leaving the calculation and simulations simple , can reproduce the general features of the free - energy landscape and explain a number of experimental results .this attitude , typical of a statistical mechanics approach , agrees with the widely accepted view that `` a surprising simplicity underlies folding '' ( baker ) .in fact several experimental and theoretical studies indicate the topology of protein native state as a determinant factor of folding . as examples, one can mention the fact that even heavy changes in the sequence that preserve the native state , have a little effect on the folding rates. moreover , the latter are found to correlate to the average contact order, which is a topological property of native state .finally , proteins with similar native state but low sequence similarity often have similar transition state ensembles. within this context , elementary models which correctly embody the native state topology and interactions , are believed to be useful in describing the energy landscape of real proteins . in this paper , we study one of such topology - based models proposed by galzitskaya and finkelstein ( gf), which was developed to identify the folding nucleus and the transition state configurations of proteins . the model employs a free - energy function with a reasonable formulation of the conformational entropy , which is certainly the most difficult contribution to describe .the energetic term , instead , takes into account only native state attractive interactions . in the original paper, the modelwas combined with a dynamic programming algorithm to search for transition states of various proteins . to reduce the computational cost of the search , two kinds of approximations were introduced :the protein was regarded as made up of `` chain links '' of 2 - 4 residues , that fold / unfold together ; besides , only configurations with up to three stretches of contiguous native residues were considered in the search ( `` triple - sequence approximation '' ) . as shown in ref ., the effect of such assumptions is a drastic entropy reduction of the unfolded state and possibly of the transition state .this produces free energy profiles very different from the true ones , thus spoiling the evaluation of -values . here ,we apply the model in a more general statistical mechanical philosophy : namely , we develop three different mean - field approaches of increasing complexity , and compare their prediction with the exact results , obtained by exhaustive enumeration of all the configurations , in the case of a 16-residues - long peptide ( c - terminal 41 - 56 fragment of the streptococcal protein g - b1) which is known to fold , in isolation , to a -hairpin structure. 
our main goal here is to test the model against experimental findings and to test the mean - field predictions against the exact results . in the futurewe will use this knowledge to apply the appropriate mean - field approach to the case of real proteins , for which exhaustive enumeration is unfeasible .the paper is organized as follows . in the next section ,we present and describe the main features of the gf model . in sectionii , we introduce and discuss three mean - field approximations : the usual scheme , and two other approaches stemming from the knowledge of the exact solution for the muoz - eaton model. in section iii , we apply the model and its mean - field approximations to study the folding transition of the -hairpin and discuss our results .the gf model assumes a simple description of the polypeptide chain , where residues can stay only in an ordered ( native ) or disordered ( non - native ) state .then , each micro - state of a protein with residues is encoded in a sequence of binary variables , ( ) . when ( ) the -th residue is in its native ( non - native ) conformation .when all variables take the value the protein is correctly folded , whereas the random coil corresponds to all s . since each residue can be in one of the two states , ordered or disordered , the free energy landscape consists of configurations only .this drastic reduction of the number of available configurations represents , of course , a restrictive feature of the model , however , follows the same line of the well known zimm - bragg model widely employed to describe the helix to coil transition in heteropolymers . the effective hamiltonian ( indeed , a free - energy function ) is where is given by : \ , .\label{eq : s}\ ] ] is the gas constant and the absolute temperature . the first term in eq .( [ eq : finkel ] ) is the energy associated to native contact formation .non native interactions are neglected : this assumption , that can be just tested _ a posteriori _ , is expected to hold if , during the folding process , the progress along the reaction coordinate is well depicted on the basis of the native contacts ( that is , the reaction coordinate(s ) must be related to just the native contacts ) . moreover, such progress must be slow with respect to all other motions , so that all non - native interaction can be `` averaged - out '' when considering the folding process. denotes the element , of the contact matrix , whose entries are the number of heavy - atom contacts between residues and in the native state . herewe consider two amino - acids in contact , when there are at least two heavy atoms ( one from aminoacids and one from ) separated by a distance less than .the matrix embodies the geometrical properties of the protein .notice that , in the spirit of considering the geometry more relevant than the sequence details , every ( heavy ) atom - atom contact is treated on equal footing : the chemical nature of the atoms is ignored , together with a correct account for the different kind of interactions . 
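A possible way to assemble the contact matrix from native-state coordinates is sketched below. The routine counts heavy-atom pairs of two residues closer than a cutoff; since the numerical value of the cutoff is not reproduced here, it is left as a parameter, and the coordinates in the example are random placeholders rather than PDB data (any exclusion of sequence-adjacent residues is likewise left to the user).

```python
# Sketch (placeholder data): native contact matrix Delta, whose entry (i, j)
# counts heavy-atom pairs of residues i and j closer than a distance cutoff.
import numpy as np

def contact_matrix(residue_atoms, cutoff):
    """residue_atoms: list where entry i is an (n_atoms_i, 3) array with the
    heavy-atom coordinates of residue i; cutoff: contact distance (not fixed here)."""
    n = len(residue_atoms)
    delta = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(residue_atoms[i][:, None, :] -
                               residue_atoms[j][None, :, :], axis=-1)
            delta[i, j] = delta[j, i] = int(np.sum(d < cutoff))
    return delta

# toy example: 4 "residues" with random heavy-atom positions
rng = np.random.default_rng(1)
toy = [rng.uniform(0, 10, size=(5, 3)) for _ in range(4)]
print(contact_matrix(toy, cutoff=5.0))
```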
the second term in ( [ eq : finkel ] ) is the conformational entropy associated to the presence of unfolded regions along the chain , and vanishes in the native state .more precisely the first term in eq .( [ eq : s ] ) is a sort of `` internal '' entropy of the residues , that can be attributed to the ordering of the main and side - chains degrees of freedom upon moving from the coil to the native state .indeed , represents the entropic difference between the coil and the native state of a single residue , as can be noticed by considering that in the fully unfolded state the first and last term vanish , and the entropy is given by . the quantity in eq .( [ eq : s ] ) , instead , is the entropy pertaining to the disordered closed loops protruding from the globular native state; it reads : according to ref . , we take: in this context a disordered loop is described by a strand of all `` 0 ' 's between two `` 1 ' 's : for instance the configuration contains two loops involving and residues respectively .the product in expression ( [ eq : sloop ] ) warrants that only uninterrupted sequences of `` 0 '' can contribute to the loop entropy .the configuration of a disordered loop going from residues to , with and in their native positions , is assimilated to a gaussian chain of beads ( atoms ) with end - to - end distance , the latter being the distance between c atoms of residues and in the native state .the parameters and are the average distance of consecutive s along the chain and persistence length respectively . other forms for could also be used ( see , e.g. ref . ) ; yet , here we are interested in evaluating the original gf model and devising good mean - field approximations to it , and we will not discuss this subject any further .the interested reader may refer to the original articles for a derivation of eq .( [ eq : j ] ) .mean field approach ( mfa ) is certainly the first attempt to investigate the thermodynamical properties of complex systems , because it provides a qualitative picture of the phase diagram that in many cases is only partially modified by more accurate refinement of the theory . in its variational formulation ,mfa , for a system with hamiltonian and corresponding free - energy , starts from the bogoliubov - feynman inequality where is a solvable trial hamiltonian is the corresponding free - energy , both depending on free parameters ( variational parameters ) .such parameters have to be chosen to minimize the second member of ( [ eq : bogofey ] ) to get the minimal upper bound of and accordingly its better approximation .this method defines a variational free - energy whose minimization leads to the self consistent equations that in their general form read with .we implement different versions of the mfa for the gf model that differ each from the other by the choice of the trial hamiltonian . to implement the standard mfa for the gf model , we regard the free energy function ( [ eq : finkel ] ) as an effective hamiltonian .the trial hamiltonian we choose , corresponds to applying an inhomogeneous external field with strengths along the chain with to be determined by minimizing the variational free - energy where is the free energy associated to , thermal averages , performed through the hamiltonian , factorize .the approximate average site `` magnetization '' depends only on the field , and is given by instead of working with external fields s , it is more intuitive to use the corresponding `` magnetizations '' s , writing as a function of the s . 
due to the choice of , eq .( [ eq : h_0mfa1 ] ) , and to the expression ( [ eq : m_i ] ) , evaluating the thermal average amounts to replacing , in the hamiltonian eq .( [ eq : finkel ] ) , each variable by its thermal average ( [ eq : m_i ] ) . in the endwe get : where and is obtained from eq .( [ eq : s ] ) by substituting .the last term corresponds to in eq .( [ eq : genericfvar ] ) : it is the entropy associated to the system with hamiltonian and is the typical term that stems from this kind of mfa. carrying out the minimization of function ( [ eq : fmvar ] ) with respect to leads to self - consistent equations : equations ( [ eq : self ] ) can be solved numerically by iteration and provide the optimal values of the magnetizations that we denote by . once the set of solutions is available , we can compute the variational free - energy that represents the better estimation of the system free - energy . in a mean - field approach , the ( connected ) correlation function between residues and , can be recovered through a differentiation of : where the subscript indicates that the derivative is evaluated on the solutions .explicitating each term of we obtain the expression the correlation function matrix is given by the inversion of above matrix .the quality of the mfa improves when we make a less naive choice for .one of the possible is suggested by the muoz - eaton model that was proven to be fully solvable in ref . .in fact , even if the two models are not equivalent , there is an interesting formal relationship between that model and the present one . in the muoz - eaton model , the ( effective ) energy of a configuration results from the contributions coming from the stretches of contiguous native residues it presents , plus an entropic contribution from each of the non - native residues. here the effective energy eq .( [ eq : finkel ] ) boils down to the contributions of stretches of contiguous non - native residues ( the loops ) , plus the sum of pairwise interactions of native residues .this latter term makes the model harder to solve than muoz - eaton s one .if we neglect this interaction , and replace it with a residue - dependent contribution , the model can be mapped on the muoz - eaton model .indeed , a trial hamiltonian of the kind : with given by eqs .( [ eq : s],[eq : sloop ] ) , can be recast as upon the substitution , where is a constant , and with \,,\\ \mu_i= & -r t(q+ j_{i-1,i+1 } ) -x_i\,,\end{aligned}\ ] ] ( here of eq .( [ eq : j ] ) ; ) .now the trial hamiltonians reads formally as the muoz - eaton hamiltonian : see eq .( 1 ) of ref . , where the symbol was used instead of .hence , we choose eq .( [ eq : h0easy ] ) as the trial hamiltonian , and write down the mean field equations eq .( [ eq : meanfield ] ) : = 0 \label{eq : mfeasy}\ ] ] for .these equations involve the functions [ eq : ceasy ] where averages are evaluated by the same transfer matrix technique as in ref . 
.using the fact that cvm is exact for the muoz - eaton model , it can also be proven that the three - point functions can be written as a function of the two - point ones : , for .this greatly reduces the computational cost of minimizing the variational free energy and makes the approach particularly suitable for long polypeptide chains .correlations could still be evaluated as in eq .( [ eq : standard_cij ] ) , but now the dependence of upon can not be worked out explicitly , and the derivatives must be evaluated resorting to the dependence on the fields : namely .however , this entails to evaluate the four - point averages , with a consequent relevant computational cost , for this reason , we will not pursue this strategy in the following . in the previous mfa version ,the entropic term was treated exactly while the energy contribution was very roughly approximated .this new version aims to better incorporate the energy contributions and we shall see that results are in excellent agreement with the exact solution obtained by exact enumeration on the -hairpin .we consider the set of configurations of the proteins with native residues ( ) .we then take as the trial hamiltonian where is the kronecker delta , and is the hamiltonian restricted to the configurations with natives : with .each residue , in a generic configuration with native residues , feels an interaction which it would feel in the native state , weakened by a factor ( accounting for the fact that not all the residues are native ) , times the external field , to be fixed by the mean field procedure .this scheme is useful for taking correlations into account in a better way than in the usual mfa , so to gain some insight on the parts of the chain that fold first and to investigate folding pathways . in this frameworkthe partition function is : where the symbol above the sum indicates that the sum is restricted to configurations with native residues .the mean field equations ( [ eq : meanfield ] ) reads =0 \ , , \label{eq : mfdifficult}\ ] ] for each , where [ eq : cdifficult ] are the contributions to the correlation associated to configurations with native residues .the transfer - matrix method applied in ref .allows keeping track separately of the contributions coming from the configurations with a given total number of native residues , therefore it is possible to evaluate exactly the partition functions , and all the averages eq .( [ eq : cdifficult ] ) involved in the mean field equations eq .( [ eq : mfdifficult ] ) .the computational cost is relevant , though : in fact , due to the necessity of evaluating all and some ( the ones actually occurring in eq .( [ eq : mfdifficult ] ) ) , elementary multiplications are required .as far as correlations are concerned , the same discussion of the mfa2 case holds .we compare the mfa results with numerical simulations on the -hairpin , the fragment 41 of the naturally occurring protein gb1 ( 2gb1 in the protein data bank). this peptide has been widely studied experimentally, through all - atom simulations and simplified models. thus it represents a good test for the validity of the model and its approximations .since the -hairpin contains only aminoacids , we can carry out exact enumeration over the possible configurations to compute explicitly the partition function of the model . once the function is known , all the thermal properties are available and it is possible to completely characterize the thermal folding of the hairpin peptide. 
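The enumeration strategy can be illustrated with a stripped-down version of the model. The sketch below sums over all 2^N configurations of the binary variables, keeping only the native-contact energy and the per-residue entropy q and omitting the loop-entropy correction of the full model; the contact map, epsilon and q are placeholders rather than the fitted hairpin parameters, so only the machinery, not the numbers, is meaningful.

```python
# Sketch: exhaustive enumeration of the 2^N binary configurations with a
# simplified free energy (contact energy + per-residue entropy only).
# delta, eps and q are placeholders, not the fitted hairpin parameters.
import itertools
import numpy as np

R = 1.987e-3                      # kcal / (mol K)
N = 16
rng = np.random.default_rng(3)
delta = np.triu(rng.integers(0, 3, size=(N, N)), k=2)   # toy contact map
delta = delta + delta.T
eps, q = -0.2, 2.0                # illustrative contact energy and entropy per residue

def mean_native_fraction(T):
    Z, Q_acc = 0.0, 0.0
    for sigma in itertools.product((0, 1), repeat=N):
        s = np.array(sigma)
        energy = 0.5 * eps * s @ delta @ s               # native-contact energy
        entropy = R * q * np.sum(1 - s)                  # disordered residues
        w = np.exp(-(energy - T * entropy) / (R * T))
        Z += w
        Q_acc += w * s.mean()
    return Q_acc / Z

for T in (280, 320, 360, 400):
    print(f"T = {T} K   <Q> = {mean_native_fraction(T):.3f}")
```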
however , first , we have to adjust the model free parameters and to reproduce experimental data on the hairpin equilibrium folding .experimental results on tryptophan fluorescence, show that , in the folded state , the 99% of molecules contain a well formed hydrophobic cluster made of trp , tyr , phe and val . in the model ,the formation of the hydrophobic cluster is described by the behaviour of the four - points correlation function ( notice that , here and in the following , residues are renumbered from 1 to 16 , instead of 41 ) .fraction of native residues ( see [ eq : q ] ) during thermal folding , according to the gf model .full dots are the exact result obtained by exhaustive enumeration. dashes and full lines indicate mfa1 and mfa3 approximations , respectively .inset : fit of the hydrophobic cluster ( ) population ( solid ) to the experimental data from ( triangles ) . ]the choice of the model parameters and ( kcal / mol ) provides the best fit of to the behavior of the experimental fraction of folded molecules ( cfr .inset of fig .[ fig : order ] with fig . 3 of ref . ) .we can now assess the goodness of the model and its mean - field approximations , by comparing their predictions with the experimental results and simulations .averages and correlations within the mean - field schemes will be evaluated as follows : for mfa1 , the self - consistent mean - field equations ( [ eq : self ] ) are solved by iteration , substituting an arbitrary initial value for at the right - hand side of eq .( [ eq : self ] ) , evaluating from the left - hand side , and substituting again the latter value in the right - hand side , until convergence is achieved . in the present case, this procedure converges quickly to two different solutions ( depending on the starting values of the fields ) , corresponding to different phases : the folded one ( ) at low temperature and the unfolded ( ) at high temperature .starting from the unfolded phase and lowering the temperature the solution of eqs .( [ eq : self ] ) remains trapped into a set of misfolded metastable states . only at temperatures well below the folding temperature the solution collapses into the one representing the folded state .the opposite happens when the temperature is increased starting from the folded phase .this is a typical scenario of first - order like transitions , which is reproduced by the mean field approach .the situation is well illustrated by the behaviour of the mean field free - energy , which exhibits two branches and as shown by the dashed lines in the inset of fig .[ fig : cfr ] .the intersection of the two branches defines the mean - field folding temperature . at a given temperature ,the free - energy of the protein is obtained by selecting the minimum of the two branches comparison between mfa and the exact enumeration .behaviour of specific heat ( kcal mol k ) and free - energy ( kcal mol ) with temperature , as obtained by the exact enumeration of the gf model applied to the hairpin .dots indicate the exact results , while dashed and solid lines correspond to mfa1 and mfa3 , respectively . in the inset ,the mean - field free energies and the exact free energy are plotted against temperature rendering conventions as before .notice the crossing of two branches of mfa1 at the transition temperature . 
] in this approximation other observables present a jump at transition : this reflects the fact that in the thermodynamic limit ( here corresponding to infinitely long proteins ) , only the solution with the lowest free - energy would be physical . to take into account finite - size effect, we decide to introduce an interpolating formula to deal with a continuous quantity : where and are the averages of the observable in the above mentioned branches . in this waywe compute the average magnetization ( i.e. the fraction of correctly folded residues ) of the protein : as well as its energy . in the latter case , are evaluated as .differentiating the energy with respect to the temperature , we get the specific heat , reported in fig . [fig : cfr ] . notice that this is the correct recipe to take into account also the contributions to the specific heat coming from the change of the native fraction of molecules : the alternative one , obtained with the direct application of eq .( [ eq : matching ] ) to the specific heats and , would neglect the change in the number of folded molecules , and account only for the variations of the energy within the pure native or unfolded state . for the same reason , eq .( [ eq : matching ] ) is not useful to match the correlation functions evaluated on the two branches .it would yield only a linear superposition of the s relative to native and unfolded states , while the correct functions should account for the contributions coming from all the configuration space .coming to mfa2 , we observe that it keeps exactly into account the entropic term eq .( [ eq : s ] ) . yet , solving the mean - field equations yields again two different solutions at each temperature .thus , mfa2 presents the same kind of problems in characterizing the folding transition states as mfa1 .this is why in the following we will present results just for mfa1 and mfa3 , that behave in a substantially different way . with mfa3 , in fact ,a unique set of fields is observed , independent of the starting values , for any temperature in the interesting range around the transition , and no empirical connection rule eq .( [ eq : matching ] ) is required .moreover , at odds with mfa1 and mfa2 , the difference between and in eq .( [ eq : genericfvar ] ) happens to be negligible at all the relevant temperatures : is a very good approximation to .this suggests that the correct correlation functions , which would be very hard to evaluate , can be replaced by the ones involving averages with the trial hamiltonian : .thus , within mfa3 it is possible to give a substantially correct characterization both of the native and unfolded states , and of the folding nucleus . in fig .[ fig : order ] we plot of eq . ( [ eq : q ] ) as a function of the temperature , for the original model , for mfa1 ( with the help of eq .( [ eq : matching ] ) ) and mfa3 .at low temperatures , where the protein assumes its native state , , while in coil configurations ( i.e. at high temperatures ) .mean field approximations appear to be slightly more `` cooperative '' than the original model , according to their steeper sigmoidal shape .the temperature at which is an estimate of the folding temperature : we have k for the original model , and k for both mfa1 and mfa3 . 
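As a concrete illustration of the matching recipe described above, the sketch below (Python) takes two hypothetical free-energy branches, assumes that the interpolating formula is the natural two-state Boltzmann-weighted average of the branch observables (our reading, since the equation itself is not reproduced here), and obtains the specific heat by numerically differentiating the matched energy; the resulting peak falls near the branch-crossing temperature.

```python
import numpy as np

R = 0.0019872                                   # kcal / (mol K), assuming these units for F(T)
T = np.linspace(250.0, 400.0, 301)

# Hypothetical free energies (kcal/mol) of the two mean-field branches; in practice they come
# from the MFA1 solutions discussed above.  They cross near the folding temperature.
F_fold = -10.0 + 0.030 * (T - 300.0)            # native branch, destabilised on heating
F_unf  = -10.0 - 0.010 * (T - 300.0)            # unfolded branch

w_fold = np.exp(-F_fold / (R * T))
w_unf  = np.exp(-F_unf  / (R * T))
p_fold = w_fold / (w_fold + w_unf)              # population of the native branch

# Matched average of an observable, here the energy of each branch (toy values):
E_fold, E_unf = -12.0, -2.0
E = p_fold * E_fold + (1.0 - p_fold) * E_unf

# Specific heat from differentiating the matched energy; this captures the folding peak,
# i.e. the contribution from the changing population of the two states.
C = np.gradient(E, T)
print("T of branch crossing   :", T[np.argmin(np.abs(F_fold - F_unf))])
print("T of specific-heat peak:", T[np.argmax(C)])
```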
in fig .[ fig : cfr ] we plot the specific heat : and the free energy .the peak of , which provides another definition for the folding temperature , occurs around k for the exact model and its mean field approximations .notice that mfa1 and mfa3 substantially recover the position of the exact peak , even if the transition appear a little sharper in the mean - field cases .the above estimates of the folding temperatures are somewhat higher than the experimental ones , k and k .interestingly , appears to be higher than the experimental value also for `` united atom '' simulation ( k in the go - model case , k with the full potential introduced in that paper ) , and for all - atoms simulations. free - energy profiles , for various temperatures , are plotted in fig .[ fig : profiles ] versus the number of native residues , that we use as the folding reaction coordinate .free energy landscape for the hairpin , i.e. plot the free - energy ( kcal mol ) of the system vs. the number of native residues .solid lines : exact results for the gf model ; dotted lines : mfa3 .temperatures are 285 k , 300 k , 315 k , 330 k , 345 k , 360 k , from top to bottom . ]profiles suggest that the -hairpin folding is well described by a two state process , i.e. exhibits two minima separated by a barrier that has to be overcome in order to reach the native / unfolded state .notice , though , that this does nt rule out the possibility that folding might not be a two - state process in this case : this could happen if the number of native residues was not a good reaction coordinate. other alternative order parameters should be considered , in addition to , to completely ascertain the nature of the transition .the comparison between exact and mean - field results reveals that the barrier appears to be overestimated in the mf scheme , where it is also shifted towards higher values of the reaction coordinate : again , the mfa appears to be more cooperative than the original model .notice however that the free - energy and position of the native and unfolded minima , and hence the stability gap , are correctly recovered , especially at temperatures close to transition ( i.e. the second and third plots from top down ) .another interesting characterization of the folding pathway comes from the temperature behavior of the pairwise correlation functions between residues that provides an insight on the probability of contact formation during the thermal folding , as shown in refs .in fact , each function develops a peak at a characteristic temperature , which can be regarded as the temperature of formation / disruption of the contact . in fig .[ fig : correla ] , we plot the correlation functions between trp45 and residues to which it is in native interaction . temperature behavior of the correlation function of native contacts involving the tryptophan ( trp45 ) .symbols correspond to the exact solution , while solid lines indicate the mfa3 results for the contacts 1 , 3 , 3 , 3 , 3 , from bottom to top . ]the height of each peak indicates the relevance of the contact from a thermodynamical point of view. thus , each contact turns out to be characterized thermodynamically by the location ( temperature ) and the height of the corresponding peak .this provides a criterion for ranking contacts in order of temperature and relevance ( see refs . ) . 
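The ranking criterion just described is straightforward to automate. The snippet below (Python) uses made-up correlation curves in place of the exact or MFA3 results, locates the peak of each contact's correlation function, and sorts the contacts by characteristic temperature, breaking ties with the peak height.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(270.0, 390.0, 121)
contacts = [(2, 15), (3, 14), (4, 13), (5, 12), (6, 11), (7, 10)]     # hypothetical pairs

# Stand-in correlation curves c_ij(T); in a real calculation these come from the model.
curves = {c: rng.uniform(0.4, 1.0) * np.exp(-((T - rng.uniform(290.0, 350.0)) / 25.0) ** 2)
          for c in contacts}

# Characteristic temperature = peak location; thermodynamic relevance = peak height.
ranking = sorted(((c, T[np.argmax(y)], y.max()) for c, y in curves.items()),
                 key=lambda item: (-item[1], -item[2]))

for c, Tpeak, height in ranking:
    print(f"contact {c}:  T_peak = {Tpeak:5.1f} K   height = {height:.3f}")
```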
for example , at the folding temperature , the contacts that mainly contribute to the folding transition must be searched among those with the characteristic temperature located around and with highest peak of .correlation analysis for the hairpin is summarized in table [ tab : correla ] , where we report the temperature and the height of correlation function peaks , between residues which share a native contact .contacts are sorted in temperature and whenever a tie occurs the sorting runs over the heights of the peaks .in this way , we can have a picture of how contacts are established during the thermal folding ..[tab : correla ] ranking of native contacts according to characteristic temperature and height of the correlation peak. contacts 1 - 16 and 2 - 16 have been neglected : they yield bad results because they are not stable even in the experimental native structure. the first three columns refer to the exact solutions , the others to mfa3 results .[ cols= " > , < , < , > , < , < " , ] assuming that the order of contact stabilization upon decreasing the temperature reflects the order of formation during folding , this is also a qualitative indication of the folding pathway .we see , from the first three columns of table [ tab : correla ] , that gf model predicts that the -hairpin folding begins with the formation of contacts 6 and 6 , 9 and 6 , located in the region between the turn ( 8 ) and the hydrophobic cluster . then , upon lowering the temperature , the folding proceeds with the formation of the other contacts that complete -hairpin structure .this is at odds with the results of more detailed models and simulations predicting that folding starts with the formation of contacts between the side chains of the hydrophobic cluster , and proceeds with the stabilization of the hydrogen bonds in the loop region ( there is no agreement on the order of hydrogen - bonds formation , though ) .gf model predictions are different also from those of the muoz - eaton model, where the hairpin starts folding from the loop region and proceeds outwards in a zipper fashion .experimental results relying on point mutations witness the importance of the hydrophobic residues 3 , 5 , 12 and , to a minor extent , 14 , in stabilizing the hairpin structure .remarkably , contacts between residues 6 , 9 , 11 appear to be partially present also in denaturing conditions. it is interesting to notice , however , that , according to table [ tab : correla ] , contacts 3 , 4 , 3 , 12 , 4 of the hydrophobic cluster are mainly established around the folding temperature , which suggests that also in gf model the hydrophobic cluster plays a central role .this is a nice feature of the model because it is consistent with the experimental evidence ( fluorence signal ) for the formation of the tryptophan hydrophobic environment at the folding .the estimation of correlation functions provided by mfa3 is only in qualitative agreement with exact results ( see fig . [fig : correla ] ) : contacts are formed in a narrower range of temperatures , and a direct comparison would be meaningless . however we can ask what kind of information can be extracted from the mean - field results , wondering , for instance , whether the ranking of contact formation provided by mfa3 is `` statistically equivalent '' to that given by exact solution . thus, we apply the spearman rank - order correlation test. 
this test amounts to computing spearman correlation where , and are the integer indicating the positions of the -th contact in the two ranking respectively .the parameter , is when the order in the two ranks is the same , while , when the order is reverse . for data in table[ tab : correla ] , we obtain the value , that has a probability to take place if the null hypothesis of uncorrelated ranks holds .this indicates that the order between the contacts obtained with exact and approximate methods is extremely significative : the mean - field approach basically recovers the correct order of contact formation and relevance as obtained with the true original model .one of the most important experimental techniques for characterizing the folding nucleus of a protein ( more precisely of a protein with two - state folding ) consists in the evaluation of . measure the effect of `` perturbation '' introduced in a protein by site - directed mutagenesis. a mutation performed on the -th residue may affect the thermodynamics and kinetics , by altering the free - energy difference between the native and unfolded state ( i.e. the stability gap ) or the height of the folding / unfolding barrier .its effect is quantified through the -value , defined as where , , and and are the variations , with respect to the wild type protein , introduced by the mutation in the folding barrier and stability gap .experimentally , is derived from the changes in the kinetic rates induced by different denaturant concentrations , while is extracted from the changes in the equilibrium population .-values are different for different mutations of a residue ; in any case , a -value close to one implies that the mutated residue has a native - like environment in the transition state and hence is involved in the folding nucleus .a value close to zero , instead , indicates that the transition state remains unaffected by the mutation , and hence the mutated residue is still unfolded at transition . in our theoretical description ,a mutation at site is simulated by weakening the strength of the couplings of % between residue and the others .we choose a small perturbation because we can not predict what kind of rearrangements in the local structure , and hence in the contact map , a true residue - to - residue mutation would involve .our choice warrants that the effect of mutation remains local and does not disrupt completely the state . in figure[ fig : varprof ] , we show the effect of a `` mutation '' of the sixth residue ( asp46 ) on the free - energy profiles .variation on the free - energy profile induced by the perturbation on all the interactions involving the sixth residue ( asp46 ) of the hairpin .the variation ( in kcal mol ) is computed for both the exact solution and mfa3 , at the respective temperatures of equal populations of the native and unfolded basins .solid and dashed lines indicate wild - type and mutated profiles respectively for the exact solution ; dot - dashed and points refer to wild - type and mutated profile respectively in the mfa3 . ] to evaluate the -values , we compute the variations in free energy profiles induced by each mutation , for the exact solution and mfa3 . and are evaluated as , where are , respectively , the partition functions restricted to unfolded and native basins in the free - energy profile , i.e. the regions to the left and right of the top of the barrier in fig . [ fig : profiles ] . is the free - energy of the top of the barrier . 
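A minimal sketch of this bookkeeping is given below (Python). The free-energy profiles are invented for illustration; in the actual calculation they come from the exact enumeration or from MFA3, with the mutant obtained by weakening the couplings of one residue by a few percent. We also assume the conventional reading of eq. ( [ eq : phi ] ), namely the ratio of the mutation-induced change of the folding barrier to the change of the stability gap, with basin free energies obtained from the partition functions restricted to either side of the barrier top.

```python
import numpy as np

R, T = 0.0019872, 300.0                      # kcal/(mol K), K

def basin_quantities(F):
    """Unfolded-basin free energy, barrier-top free energy, native-basin free energy."""
    top = 2 + int(np.argmax(F[2:-2]))        # crude barrier locator (away from the ends)
    w = np.exp(-F / (R * T))
    F_U = -R * T * np.log(w[:top].sum())     # restricted partition function of the unfolded basin
    F_N = -R * T * np.log(w[top:].sum())     # restricted partition function of the native basin
    return F_U, F[top], F_N

# Hypothetical wild-type profile F(m) in kcal/mol, and a mutant that is destabilised
# more and more as the number of native residues grows.
F_wt = np.array([0.0, -0.3, 0.2, 1.0, 1.8, 2.4, 2.8, 2.5, 1.8, 0.8, -0.5, -1.5, -1.8])
F_mut = F_wt + 0.02 * np.arange(F_wt.size)

FU_w, Fb_w, FN_w = basin_quantities(F_wt)
FU_m, Fb_m, FN_m = basin_quantities(F_mut)

ddF_barrier = (Fb_m - FU_m) - (Fb_w - FU_w)      # change of the folding barrier
ddF_gap     = (FN_m - FU_m) - (FN_w - FU_w)      # change of the stability gap
print(f"phi = {ddF_barrier / ddF_gap:.2f}")
```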
through expression eq .( [ eq : phi ] ) we obtain the for each residue . in fig .[ fig : phival ] we report the distributions .there is a good overall correlation between the profiles , that increase and decrease together .effects of `` mutations '' as measured by on each residue .full circles : from the exact solution ; open circles : within mfa3 approach .the temperatures are in both cases those whereby .results depend only slightly on temperature , anyway . ]this is a further confirmation that the relevant features of the model are conserved when applying the mf approach .mean - field results yield smoother profiles , as it could be expected .the ends of the hairpin are characterized by low -values , that become negative for mfa3 : this would correspond to mutations that increase the stability gap but decrease the barrier , or vice - versa . according to these results ,the folding nucleus would be made up by residues 6 , 8 , 9 , 11 , which is in contrast with the already mentioned simulations .in this work we developed and discussed three different mean - field schemes for the galzitskaya - finkelstein model , that represent valid ways to deal with the model for characterizing the thermodynamical properties of a protein and its folding pathway as well . these approaches offer viable alternatives both to the procedure proposed by galzitskaya and finkelstein, and to mc simulations , that become computationally demanding for long polymers and usually affected from the sampling problems .we applied the model to the -hairpin fragment 41 - 56 of the gb 1 protein , since , for this simple system , mean field results can be compared with a brute force solution of the model , and both can be checked against experimental data and simulation published by other groups .our results suggest that , as far as specific heat and simple thermodynamic quantities are concerned , the standard mean - field mfa1 is enough to yield correct results , provided that one uses the recipe eq .( [ eq : matching ] ) to connect the two branches of the solution . for more sophisticated quantities like free - energy profiles , correlations and -values ,mfa3 is to be preferred , since it correctly recovers the main features of the exact solution .the hope is that mean - field results are still representative of the exact ones in the case of longer and more complex proteins , where the latter can not be evaluated .gf model itself yields results that are somewhat in contrast with the mc and md simulations on more detailed models for the hairpin .this discrepancy is probably due to the extreme simplicity of the hamiltonian eq .( [ eq : finkel ] ) , where no distinction is made among the different kinds of interactions , such as hydrogen - bonds , side chain hydrophobicity , and so on .indeed , we expected that a model accounting just for the topology of the native state will not score very well when applied to the -hairpin , where detailed sequence information is relevant. predictions of the model could possibly improve if these elements were taken into account .we thank a. maritan , c. micheletti , a. flammini and a. pelizzola for their suggestions and useful discussions about the model . f.c .thanks a. vulpiani and u.m.b .marconi and acknowledges the financial support of cofin murst 2001 on `` fisica statistica di sistemi classici e quantistici ''. p.b . 
acknowledges the financial support of cofin murst 2001 `` approccio meccanico - statistico ai biopolimeri '' .bryngelson and p.g .wolynes , _ proc .usa _ * 84 * , 7524 ( 1999 ) .wolynes , j.n .onuchic and d. thirumalai , _ science _ * 267 * , 1619 ( 1995 ) .onuchic , z. luthey - schulten and p.g .wolynes , _ ann . rev . phys. chem . _ * 48 * , 545 ( 1997 ) .dill and h.s .chan , _ nature struct ._ * 4 * , 10 ( 1997 ) .chan and k.a .dill , _ proteins : struct .funct . and genetics _* 30 * , 2 ( 1998 ) .v. muoz and l. serrano , fold . des .* 1 * , r71 ( 1996 ) .jackson , _ fold . des . _ * 3 * , r81 ( 1998 ) .fersht , _ proc .usa _ * 92 * , 10869 ( 1995 ) .m. karplus , _ j. phys .chem . _ * 104 * , 11 ( 2000 ) .d. baker , _ nature _ , * 405 * 39 ( 2000 ) .riddle , v.p .grantcharova , j.v .santiago , e. alm , i. ruczinski and d. baker _nature struct .biol . _ * 6 * , 1016 ( 1999 ) .a. fersht , _ proc . natl .usa _ * 97 * , 1525 ( 1999 ) .j.c . martinez and l. serrano ,_ nature struct ._ * 6 * , 1010 ( 1999 ) .miller , k.f .fischer and s. marqusee , _ proc .usa _ * 99 * , 10359 ( 2002 ) .plaxco , k.t .simons , i. ruczinski and d. baker , _ biochemistry _ * 39 * , 11177 ( 2000 ) . c. micheletti , j.r .banavar , a. maritan , and f. seno , _ phys ._ * 82 * , 3372 ( 1999 ) . c. clementi , h. nymeyer and j.n .onuchic , _ j. mol .biol . _ * 298 * , 973 ( 2000 ) . c. clementi , p.a .jennings and j.n .onuchic , _ proc .* 97 * , 5871 ( 2000 ) .d.s . riddle , j.v .santiago , s.t .bray - hall , n. doshi , v.p .grantcharova , q. yi and d. baker , _ nature struct .* 4 * , 805 ( 1997 ). f. chiti , n. taddei , p.m. white , m. bucciantini , f. magherini , m. stefani and c.m .nature struct .biol . _ * 6 * , 1005 ( 1999 )plaxco , k.t .simons and d. baker , _ j. mol ._ * 277 * , 985 ( 1998 ) .e. alm and d. baker , _ proc .usa _ * 96 * , 11305 ( 1999 ) .n. go , _ macromolecules _ * 9 * , 535 ( 1976 ) .n. go and h. abe , _ biopolymers _ * 20 * , 991 ( 1981 ) .v. munoz , p.a .thompson , j. hofrichter and w.a .eaton , _ nature _ * 390 * , 196 ( 1997 ) . c. micheletti , j.r .banavar , and a. maritan , _ phys ._ * 87 * , 088102 ( 2001 ) .o. v. galzitskaya and a. v. finkelstein , _ proc .usa _ * 96 * , 11299 ( 1999 ) .p. bruscolini and a. pelizzola , _ phys .lett . _ * 88 * , 258101 ( 2002 ) .the sequence is gewtyddatktftvte .zimm and j.k .bragg , _ j. chem. phys . _ * 31 * , 526 ( 1959 ) .a.v . finkelstein and a.ya .badretdinov , _ mol .biol . _ * 31 * , 391 ( 1997 ) .there is a misprint in eq .( 2 ) of the original paper : should read ( o. galzitskaya , private communication ) .m. plischke and b. bergersen , _ equilibrium statistical physics _ , world scientific , singapore 1989 . v. munoz , e.r .henry , j. hofrichter and w. a. eaton , _ proc .usa _ * 95 * , 5872 ( 1998 ) .v. munoz and w.a .eaton , _ proc .usa _ * 96 * , 11311 ( 1999 ) .we gratefully acknowledge dr .a. pelizzola for this observation .s. honda , n. kobayashi , and e. munekata , _j. mol .biol . _ * 295 * , 269 ( 2000 ) .n. kobayashi , s. honda , h. yoshii , and e. munekata , _ biochemistry _ * 39 * , 6564 ( 2000 ) .a.r . dinner , t. lazaridis , and m. karplus , _ proc .usa _ * 96 * , 9068 ( 1999 ) .v.s . pande and d.s .rokhsar , _ proc .usa _ * 96 * , 9602 ( 1999 ) .d.k . klimov and d. thirumalai ,. acad .usa _ * 97 * , 2544 ( 2000 ) . c. guo , h. levine and d.a .kessler , _ phys .* 84 * , 3490 , ( 1999 ) .indeed , this is what karplus and coworkers suggest in ref . ; yet other results do not support this view .f. cecconi , c. micheletti , p. 
carloni , and a. maritan , _ proteins : struct .and genetics _ * 43 * , 365 ( 2001 ) . c. micheletti , f. cecconi , a. flammini and a. maritan _ protein sci . _* 11 * , 1878 ( 2002 ) .a. scala , n.v .dokholyan , s.v .buldyrev and h.e .stanley , _ phys .e _ , * 63 * , 032901 ( 2001 ) .g. settanni , t.x .hoang , c. micheletti and a. maritan _ biophys .j. _ * 83 * , 3533 ( 2002 ) .w.h . press , b.p .flannery , s.a .teukolsky , and w.t .vetterling , _ numerical recipes _ , ( cambridge university press : 1993 ) .a. fersht , _ enzyme and structure mechanism _ , 2nd edition , w.h freeman and company , new york 1985 .
we study the thermodynamic properties of a topology-based model proposed by galzitskaya and finkelstein for the description of protein folding. we devise and test three different mean-field approaches for the model that simplify the treatment without spoiling the description. the validity of the model and its mean-field approximations is checked by applying them to the -hairpin fragment of the immunoglobulin-binding protein (gb1) and making a comparison with available experimental data and simulation results. our results indicate that this model is a rather simple and reasonably good tool for interpreting experimental folding data, provided the parameters of the model are carefully chosen. the mean-field approaches substantially recover all the relevant exact results and represent reliable alternatives to the monte carlo simulations.
the spectral fluid algorithm ( canuto et .al . 1987 ) uses fast fourier transforms ( ffts ) to compute gradients , the most precise means possible .finite - difference gradients based on a polynomial fit execute faster than ffts but with less accuracy , necessitating more grid zones to achieve the same resolution as the spectral method .the loss of accuracy outweighs the gain in speed and the spectral method has more resolving power than the finite - difference method .we introduce an alternative finite - difference formula not based on a polynomial fit that executes as quickly but with improved accuracy , yielding greater resolving power than the spectral method . in section [ finite ]we derive high - wavenumber finite difference formulas and exhibit their effect on the resolving power of a turbulence simulation in section [ flop ] . in section [ mhd ]we apply finite differences for the purpose of mimicking the spectral algorithm , and then proceed to other applications in section [ applications ] .define a function on a set of grid points with j an integer . then construct a gradient at from sampling a stencil of grid points with radius ( or order ) on each side .the familiar result for the gradient on a radius-1 stencil is which is obtained from fitting a polynomial of degree to for a degree 4 polynomial on a radius-2 stencil , for a stencil of order , where consider the finite - difference error at x=0 for a fourier mode cosine modes can be ignored because they do nt contribute to the derivative at note that the wavenumber is scaled to grid units so that corresponds to the maximum wavenumber expressible on the grid , also known as the nyquist limit .the finite difference formula ( [ stencil ] ) gives whereas the correct value should be define an error function figure shows for stencils of radius 1 through 24 .the first order finite difference formula is quite lame , delivering 1 percent accuracy only for less than brandenburg ( 2001 ) recognized that higher - order finite differences can significantly extend the wavenumber resolution of the polynomial - based finite difference .the 8th order finite difference accuracy is better than 1 percent up to and the 16th order finite difference up to nevertheless these are still far from the nyquist limit of and even higher - order finite - differences yield little further progress .a fourier transform gives the correct gradient for all up to unity .this is why the spectral algorithm delivers the most resolution per grid element .the resolution limit is set by the maximum for which gradients can be captured accurately .although the 8th order finite - difference formula involves considerably fewer floating point operations than an fft , the loss of resolution still renders it less efficient than the spectral method .polynomial - based finite differences have high accuracy at low but fail at large we can instead construct a more practical scheme to improve high- accuracy at the expense of low- accuracy , yet the loss of low- accuracy is negligibly small . 
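The behaviour described above is easy to reproduce numerically. The sketch below (Python) builds the standard centred-difference coefficients for a polynomial fit of any radius, evaluates the error on a sine mode with k = 1 at the Nyquist limit, and reports the largest k resolved to 1 percent; running it for radii 1, 2, 8 and 16 illustrates how slowly the polynomial scheme approaches the Nyquist limit.

```python
import numpy as np
from math import factorial

def poly_coeffs(r):
    """Centred-difference coefficients c_1..c_r from a degree-2r polynomial fit."""
    return np.array([(-1) ** (j + 1) * factorial(r) ** 2
                     / (j * factorial(r - j) * factorial(r + j)) for j in range(1, r + 1)])

def rel_error(k, c):
    """Relative error of the stencil on sin(pi*k*x) at x = 0; k = 1 is the Nyquist limit."""
    kappa = np.pi * k
    estimate = 2.0 * sum(cj * np.sin((j + 1) * kappa) for j, cj in enumerate(c))
    return np.abs(estimate - kappa) / kappa

k = np.linspace(1e-4, 1.0, 4000)
for r in (1, 2, 8, 16):
    err = rel_error(k, poly_coeffs(r))
    first_bad = int(np.argmax(err > 0.01))        # first wavenumber where the error exceeds 1 percent
    print(f"radius {r:2d}: 1-percent accuracy up to k ~ {k[first_bad - 1]:.2f}")
```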
from equation [ eqerror ] , we see that the problem of computing accurate gradients reduces to optimizing, or "tuning", the coefficients to minimize the error over a suitable range of k, or equivalently to constructing a sine series that best mimics a linear function. a set of tuned coefficients appears in table 2, with the associated error functions in figure 1. the error in the radius-8 tuned finite difference is less than 1 percent up to about 80 percent of the nyquist limit, a dramatic improvement over the radius-8 polynomial. an algorithm based on tuned gradients still has a lower maximum wavenumber than the spectral algorithm, but due to its increased speed it has greater resolving power (section [ flop ]). henceforth we denote these tuned gradients as "hypergradients".

minimizing the error function involves a multiparameter optimization of the coefficients, a problem-dependent task with multiple legitimate options. in fact, a high degree of customization is possible for the form of the error function. for this application we proceed as follows. define a target wavenumber and an indicator for the quality of the tuned coefficients, given by the sum of the fourth power of the error over the wavenumbers below the target. then, perform a multi-dimensional optimization of the coefficients against this indicator. the use of a fourth power ensures an approximately uniform error within the target band, although a weight function could be added to further customize the form of the error function. the target wavenumber is then adjusted until the error is 1 percent. the procedure is repeated for each order to yield the coefficients in table 2. it is worth noting that the radius-8 tuned coefficients are similar to the radius-24 polynomial coefficients. this is not surprising because the polynomial coefficients are too small to matter outside of radius 8.

_ table 7 : index of simulations. all simulations have , and , except for z677 , which has . indicates that the fft divergence correction is never applied. all finite-difference simulations utilize a radius-8 stencil and a phase-shift dealiasing correction. _

the most accurate and most expensive interpolation procedure is direct evaluation of the fourier series. tuned finite differences provide a less expensive high-wavenumber interpolation. for example, in 2d, the centered interpolation (equation [ eqinterp ]) provides function values halfway between grid points, and another interpolation along the diagonal yields the values at the grid centers. we have thus doubled the resolution of the grid, which we can do again if we wish. note that we can do this for the entire grid or just for a subsection. after a few doublings, a simple linear interpolation serves to provide the function value at any point in space, yielding a 2d interpolation with the same wavenumber resolution as the component 1d interpolation. this procedure generalizes to arbitrary dimension.

as if it were not enough trouble to run large simulations on thousands of cpus, one is next confronted with analyzing large data cubes that are too big to fit in the memory of one machine. tuned operators allow for the construction of local alternatives to global functions like the fft. these include derivatives, dealiasing, filtering, and grid interpolation. large output files can be stored in small, easy-to-manage segments and operated on successively.
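Returning to the tuning procedure described above, the step can be prototyped with a generic optimizer. The sketch below (Python with scipy) minimises the sum of the fourth power of the relative error over wavenumbers below a target, then reports the worst-case error in that band; sweeping the target upward until the worst error reaches 1 percent mimics the adjustment described in the text. The objective, optimizer and sampling are our own choices, so the resulting coefficients are indicative rather than a reproduction of the published table.

```python
import numpy as np
from scipy.optimize import minimize

def rel_error(c, kappa):
    """Signed relative error of a radius-len(c) stencil on modes of wavenumber kappa."""
    est = 2.0 * sum(cj * np.sin((j + 1) * kappa) for j, cj in enumerate(c))
    return (est - kappa) / kappa

def tune(radius, k_target, n=400):
    """Tune stencil coefficients on the band 0 < k < k_target (Nyquist units)."""
    kappa = np.pi * np.linspace(1e-3, k_target, n)
    objective = lambda c: float(np.sum(rel_error(c, kappa) ** 4))
    c0 = np.zeros(radius)
    c0[0] = 0.5                                   # start from the radius-1 stencil
    res = minimize(objective, c0, method="BFGS")
    return res.x, float(np.max(np.abs(rel_error(res.x, kappa))))

for k_target in (0.6, 0.7, 0.8):
    coeffs, worst = tune(8, k_target)
    print(f"k_target = {k_target:.2f}   worst in-band error = {worst:.3%}")
```

The fourth-power objective is a convex function of the coefficients, so a quasi-Newton optimizer converges to a unique minimum from any starting stencil.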
for the purpose of data analysis, we have provided radius-16 tuned operators in table 3 that are accurate to 0.3 percent.

we thank eric blackman, benjamin chandran, peggy varniere, yoram lithwick, and tim dennis for useful discussions, and yoram lithwick for the fft benchmark results. the simulations were run at the pittsburgh supercomputing center's alpha es45 "lemieux" with the national resource allocation committee grant mca03s010p, and at the national center for supercomputing applications intel itanium "titan" with the grant ast020012n. the author was supported by dr. eric blackman's doe grant de-fg02-00er54600 at the university of rochester, and by dr. benjamin chandran's nsf grant ast-0098086 and doe grants de-fg02-01er54658 and de-fc02-01er54651, at the university of iowa.
we introduce a fluid dynamics algorithm that performs with nearly spectral accuracy, but uses finite differences instead of ffts to compute gradients and thus executes 10 times faster. the finite differencing is not based on a high-order polynomial fit. the polynomial scheme has superb accuracy for low-wavenumber gradients but fails at high wavenumbers. we instead use a scheme tuned to enhance high-wavenumber accuracy at the expense of low wavenumbers, although the loss of low-wavenumber accuracy is negligibly slight. a tuned gradient is capable of capturing all wavenumbers up to 80 percent of the nyquist limit with an error of no worse than 1 percent. the fact that gradients are based on finite differences enables diverse geometries to be considered and eliminates the parallel communications bottleneck.
community detection in complex networks has attracted a lot of attention in the last years ( for a review see ) .the main reason is that complex networks are made of a large number of nodes and that so far most of the quantitative investigations were focusing on statistical properties disregarding the roles played by specific subgraphs . detecting communities ( or _ modules_ ) can then be a way to identify relevant substructures that may also correspond to important functions . in the case of the world wide web , for instance, communities are sets of web pages dealing with the same topic .relevant community structures were also found in social networks , biochemical networks , the internet , food webs , and in networks of sexual contacts . loosely speakinga community is a subgraph of a network whose nodes are more tightly connected with each other than with nodes outside the subgraph .a decisive advance in community detection was made by newman and girvan , who introduced a quantitative measure for the quality of a partition of a network into communities , the so - called _modularity_. this measure essentially compares the number of links inside a given module with the expected value for a randomized graph of the same size and degree sequence .if one takes modularity as the relevant quality function , the problem of community detection becomes equivalent to modularity optimization .the latter is not trivial , as the number of possible partitions of a network in clusters increases exponentially with the size of the network , making exhaustive optimization computationally unreachable even for relatively small graphs .therefore , a number of algorithms have been devised in order to find a good optimization with the least computational cost .the fastest available procedures uses greedy techniques and extremal optimization , and are at present time the only algorithms capable to detect communities on large networks .more accurate results are obtained through simulated annealing , although this method is computationally very expensive .modularity optimization seems thus to be a very effective method to detect communities , both in real and in artificially generated networks .the modularity itself has however not yet been thoroughly investigated and only a few general properties are known .for example , it is known that the modularity value of a partition does not have a meaning by itself , but only if compared with the corresponding modularity expected for a random graph of the same size , as the latter may attain very high values , due to fluctuations . in this paperwe focus on communities defined by modularity .we will show that modularity contains an intrinsic scale which depends on the number of links of the network , and that modules smaller than that scale may not be resolved , even if they were complete graphs connected by single bridges .the resolution limit of modularity actually depends on the degree of interconnectedness between pairs of communities and can reach values of the order of the size of the whole network .it is thus _ a priori _ impossible to tell whether a module ( large or small ) , obtained through modularity optimization , is indeed a single module or a cluster of smaller modules .this result thus introduces some caveats in the use of modularity to detect community structure . in section [ sec2 ]we recall the notion of modularity and discuss some of its properties . 
section [ sec3 ] deals with the problem of finding the most modular network with a given number of nodes and links . in section [ sec4 ]we show how the resolution limit of modularity arises . in section [ sec5 ]we illustrate the problem with some artificially generated networks , and extend the discussion to real networks .our conclusions are presented in section [ sec6 ] .the modularity of a partition of a network in modules can be written as , \label{eq : mod}\ ] ] where the sum is over the modules of the partition , is the number of links inside module , is the total number of links in the network , and is the total degree of the nodes in module . the first term of the summands in eq .( [ eq : mod ] ) is the fraction of links inside module ; the second term instead represents the expected fraction of links in that module if links were located at random in the network ( under the only constraint that the degree sequence coincides with that in the original graph ) .if for a subgraph of a network the first term is much larger than the second , it means that there are many more links inside than one would expect by random chance , so is indeed a module .the comparison with the null model represented by the randomized network leads to the quantitative definition of community embedded in the _ ansatz _ of eq .( [ eq : mod ] ) .we conclude that , in a modularity - based framework , a subgraph with internal links and total degree is a module if let us express the number of links joining nodes of the module to the rest of the network in terms of , i.e. with .so , and the condition ( [ eq2 ] ) becomes ^ 2>0 , \label{eq2bis}\ ] ] from which , rearranging terms , one obtains if , the subgraph is a disconnected part of the network and is a module if which is always true . if is strictly positive , eq . ( [ eq2ter ] ) sets an upper limit to the number of internal links that must have in order to be a module .this is a little odd , because it means that the definition of community implied by modularity depends on the size of the whole network , instead of involving a `` local '' comparison between the number of internal and external links of the module . for onehas , which means that the total degree internal to the subgraph is larger than its external degree , i.e. . the attributes `` internal '' and `` external '' here mean that the degree is calculated considering only the internal or the external links , respectively . in this case , the subgraph would be a community according to the `` weak '' definition given by radicchi et al . . for the right - hand - side of inequality ( [ eq2ter ] )is in the interval ] .sufficient conditions for which these constraints are always met are then in section [ sec4 ] we shall only consider modules of this kind . according to eq .( [ eq2 ] ) , a partition of a network into actual modules would have a positive modularity , as all summands in eq .( [ eq : mod ] ) are positive . on the other hand , for particular partitions, one could bump into values of which are negative .the network itself , meant as a partition with a single module , has modularity zero : in this case , in fact , , , and the only two terms of the unique module in cancel each other .usually , a value of larger than is a clear indication that the subgraphs of the corresponding partition are modules .however , the maximal modularity differs from a network to another and depends on the number of links of the network . 
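For readers who want to experiment with these definitions, a direct implementation of eq. ( [ eq : mod ] ) is short. The snippet below (Python; the helper names and the toy graph are ours) computes Q from an undirected edge list and a node-to-community map, and checks that the partition into a single module gives Q = 0 while a sensible split of two bridged triangles gives Q > 0.

```python
from collections import defaultdict

def modularity(edges, community_of):
    """Q = sum_s [ l_s/L - (d_s/2L)^2 ] for an undirected edge list and node->community map."""
    L = len(edges)
    l_s = defaultdict(int)        # links internal to each community
    d_s = defaultdict(int)        # total degree of each community
    for u, v in edges:
        d_s[community_of[u]] += 1
        d_s[community_of[v]] += 1
        if community_of[u] == community_of[v]:
            l_s[community_of[u]] += 1
    return sum(l_s[s] / L - (d_s[s] / (2.0 * L)) ** 2 for s in d_s)

# Two triangles joined by a single bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}))   # ~0.357
print(modularity(edges, {n: 'all' for n in range(6)}))                        # 0 by construction
```

A production implementation would update Q incrementally as nodes are moved, but the direct sum keeps the correspondence with the formula transparent.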
in the next section we shall derive the expression of the maximal possible value can attain on a network with links .we will prove that the upper limit for the value of modularity for any network is and we will see why the modularity is not scale independent .in this section we discuss of the most modular network which will introduce naturally the problem of scales in modularity optimization . in ref . , the authors consider the interesting example of a network made of identical complete graphs ( or ` cliques ' ) , disjoint from each other . in this case , the modularity is maximal for the partition of the network in the cliques and is given by the sum of equal terms . in each cliquethere are links , and the total degree is , as there are no links connecting nodes of the clique to the other cliques .we thus obtain = m\left(\frac{1}{m}-\frac{1}{m^2}\right)=1-\frac{1}{m } , \label{eq3}\ ] ] which converges to when the number of cliques goes to infinity .we remark that for this result to hold it is not necessary that the connected components be cliques .the number of nodes of the network and within the modules does not affect modularity .if we have modules , we just need to have links inside the modules , as long as this is compatible with topological constraints , like connectedness . in this way ,a network composed by identical trees ( in graph theory , a forest ) has the same maximal modularity reported in eq .( [ eq3 ] ) , although it has a far smaller number of links as compared with the case of the densely connected cliques ( for a given number of nodes ) .a further interesting question is how to design a _ connected _ network with nodes and links which maximizes modularity . to address this issue, we proceed in two steps : first , we consider the maximal value for a partition into a fixed number of modules ; after that , we look for the number that maximizes .let us first consider a partition into modules .ideally , to maximize the contribution to modularity of each module , we should reduce as much as possible the number of links connecting modules .if we want to keep the network connected , the smallest number of inter - community links must be .for the sake of clarity , and to simplify the mathematical expressions ( without affecting the final result ) , we assume instead that there are links between the modules , so that we can arrange the latter in the simple ring - like configuration illustrated in fig . [ fig1 ] .design of a connected network with maximal modularity .the modules ( circles ) must be connected to each other by the minimal number of links.,width=188 ] the modularity of such a network is , \label{eq4}\ ] ] where it is easy to see that the expression of eq .( [ eq4 ] ) reaches its maximum when all modules contain the same number of links , i.e. . the maximum is then given by =1-\frac{m}{l}-\frac{1}{m}. \label{eq6}\ ] ] we have now to find the maximum of when the number of modules is variable . 
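Before carrying out the calculus, a quick numerical scan of eq. ( [ eq6 ] ) already locates the optimum: with equal-sized modules arranged in the ring, the modularity 1 - m/l - 1/m peaks when the number of modules is close to the square root of the number of links. A minimal check in Python:

```python
import numpy as np

L = 10000                                  # total number of links
m = np.arange(2, 2000)
Q = 1.0 - m / L - 1.0 / m                  # eq. (eq6) for the ring of equal modules
print("optimal number of modules:", m[np.argmax(Q)], "  sqrt(L) =", int(np.sqrt(L)))
print("maximal modularity       :", round(float(Q.max()), 4), "  1 - 2/sqrt(L) =", 1 - 2 / np.sqrt(L))
```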
for this purposewe treat as a real variable and take the derivative of with respect to which vanishes when .this point indeed corresponds to the absolute maximum of the function .this result coincides with the one found by the authors of for a one - dimensional lattice , but our proof is completely general and does not require preliminary assumptions on the type of network and modules .since is not a real number , the actual maximum is reached when equals one of the two integers closest to , but that is not important for our purpose , so from now on we shall stick to the real - valued expressions , their meaning being clear .the maximal modularity is then and approaches if the total number of links goes to infinity .the corresponding number of links in each module is .the fact that all modules have the same number of links does not imply that they have as well the same number of nodes .again , modularity does not depend on the distribution of the nodes among the modules as long as the topological constraints are satisfied .for instance , if we assume that the modules are connected graphs , there must be at most nodes in each module .the crucial point here is that modularity seems to have some intrinsic scale of order , which constrains the number and the size of the modules . for a given total number of nodes and linkswe could build many more than modules , but the corresponding network would be less `` modular '' , namely with a value of the modularity lower than the maximum of eq .( [ eq8 ] ) .this fact is the basic reason why small modules may not be resolved through modularity optimization , as it will be clear in the next section .we analyze a network with links and with at least three modules , in the sense of the definition of formula ( [ eq2quater ] ) ( fig .[ fig2 ] ) .we focus on a pair of modules , and , and distinguish three types of links : those internal to each of the two communities ( and , respectively ) , between and ( ) and between the two communities and the rest of the network ( and ) . in order to simplify the calculations we express the numbers of external links in terms of and , so , and , with . since and are modules by construction , we also have , and ( see section [ sec2 ] ) . scheme of a network partition into three or more modules .the two circles on the left picture two modules , the oval on the right reprensents the rest of the network , whose structure is arbitrary.,width=302 ] now we consider two partitions and of the network . in partition , and are taken as separate modules , and in partition they are considered as a single community .the split of the rest of the network is arbitrary but identical in both partitions .we want to compare the modularity values and of the two partitions .since the modularity is a sum over the modules , the contribution of is the same in both partitions and is denoted by . from eq .( [ eq : mod ] ) we obtain ^ 2+\frac{l_2}{l}+\nonumber\\ & & -\left[\frac{(a_2+b_2 + 2)l_2}{2l}\right]^2 ; \label{eq9}\end{aligned}\ ] ] ^ 2 .\label{eq10}\end{aligned}\ ] ] the difference is /(2l^2 ) .\label{eq11}\ ] ] as and are both modules by construction , we would expect that the modularity should be larger for the partition where the two modules are separated , i.e. , which in turn implies . from eq .( [ eq11 ] ) we see that is negative if we see that if , i.e. if there are no links between and , the above condition is trivially satisfied. 
instead , if the two modules are connected to each other , something interesting happens .each of the coefficients , , , can not exceed and and are both smaller than by construction but can be taken as small as we wish with respect to . in this way , it is possible to choose and such that the inequality of eq .( [ eq12 ] ) is not satisfied .in such a situation we can have and the modularity of the configuration where the two modules are considered as a single community is larger than the partition where the two modules are clearly identified .this implies that by looking for the maximal modularity , there is the risk to miss important structures at smaller scales .to give an idea of the size of and at which modularity optimization could fail , we consider for simplicity the case in which and have the same number of links , i.e. . the condition on for the modularity to miss the two modules also depends on the fuzziness of the modules , as expressed by the values of the parameters , , , . in order to find the range of potentially `` dangerous '' values of , we consider the two extreme cases in which * the two modules have a perfect balance between internal and external degree ( , ) , so they are on the edge between being or not being communities , in the weak sense defined in ; * the two modules have the smallest possible external degree , which means that there is a single link connecting them to the rest of the network and only one link connecting each other ( ) . in the first case , the maximum value that the coefficient of can take in eq .( [ eq12 ] ) is , when and , , so we obtain that eq .( [ eq12 ] ) may not be satisfied for which is a scale of the order of the size of the whole network . in this way, even a pair of large communities may not be resolved if they share enough links with the nodes outside them ( in this case we speak of `` fuzzy '' communities ) .a more striking result emerges when we consider the other limit , i.e. when . in this caseit is easy to check that eq .( [ eq12 ] ) is not satisfied for values of the number of links inside the modules satisfying if we now assume that we have two ( interconnected ) modules with the same number of internal links , the discussion above implies that the modules can not be resolved through modularity optimization , not even if they were complete graphs connected by a single link . as we have seen from eq .( [ eq14 ] ) , it is possible to miss modules of larger size , if they share more links with the rest of the network ( and with each other ) . for conclusion is similar but the scales are modified by simple factors .we begin with a very schematic example , for illustrative purposes . in fig .[ fig3](a ) we show a network consisting of a ring of cliques , connected through single links .each clique is a complete graph with nodes and has links .if we assume that there are cliques , with even , the network has a total of nodes and links .( * a * ) a network made out of identical cliques ( which are here complete graphs with nodes ) connected by single links .if the number of cliques is larger than about , modularity optimization would lead to a partition where the cliques are combined into groups of two or more ( represented by a dotted line ) .( * b * ) a network with four pairwise identical cliques ( complete graphs with and nodes , respectively ) ; if is large enough with respect to ( e.g. 
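The limiting case just discussed, two modules with the same number of internal links, joined by a single link and attached to the rest of the network by one link each, can be checked directly from eq. ( [ eq : mod ] ). The sketch below (Python; the counting is our reconstruction of that limiting case) compares the modularity of the separated and merged configurations and reports the internal link count below which merging is favoured, which indeed tracks the square root of half the total number of links.

```python
import numpy as np

def delta_Q(l, L):
    """Q(separated) - Q(merged) for two modules with l internal links, joined by one link,
    each attached to the rest of the network by one more link."""
    q_sep = 2 * (l / L - ((2 * l + 2) / (2 * L)) ** 2)
    q_mer = (2 * l + 1) / L - ((4 * l + 4) / (2 * L)) ** 2
    return q_sep - q_mer

for L in (1000, 10000, 100000):
    l = np.arange(1, int(np.sqrt(L)) + 50)
    l_crit = l[np.argmax(delta_Q(l, L) > 0)]       # smallest l for which separation wins
    print(f"L = {L:6d}:  modules with l < {l_crit} are merged   (sqrt(L/2) = {np.sqrt(L / 2):.0f})")
```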
, ) , modularity optimization merges the two smallest modules into one ( shown with a dotted line).,width=188 ] the network has a clear modular structure where the communities correspond to single cliques and we expect that any detection algorithm should be able to detect these communities .the modularity of this natural partition can be easily calculated and equals on the other hand , the modularity of the partition in which pairs of consecutive cliques are considered as single communities ( as shown by the dotted lines in fig .[ fig3](a ) ) is the condition is satisfied if and only if in this example , and are independent variables and we can choose them such that the inequality of formula ( [ eq18 ] ) is not satistied .for instance , for and , and . an efficient algorithm lookingfor the maximum of the modularity would find the configuration with pairs of cliques and not the actual modules .the difference would be even larger if increases , for fixed .the example we considered was particularly simple and hardly represents situations found in real networks .however , the initial configuration that we considered in the previous section ( fig . [ fig2 ] ) is absolutely general , and the results make us free to design arbitrarily many networks with obvious community structures in which modularity optimization does not recognize ( some of ) the real modules .another example is shown in fig .[ fig3](b ) .the circles represent again cliques , i.e. complete graphs : the two on the left have nodes each , the other two nodes .if we take and , the maximal modularity of the network corresponds to the partition in which the two smaller cliques are merged [ as shown by the dotted line in fig .[ fig3](b ) ] .this trend of the optimal modularity to group small modules has already been remarked in , but as a result of empirical studies on special networks , without any complete explanation . in general, we can not make any definite statement about modules found through modularity optimization without a method which verifies whether the modules are indeed single communities or a combination of communities .it is then necessary to inspect the structure of each of the modules found . as an example , we take the network of fig . [ fig3](a ) , with identical cliques , where each clique is a with .as already said above , modularity optimization would find modules which are pairs of connected cliques . by inspecting each of the modules of the ` first generation ' ( by optimizing modularity , for example ), we would ultimately find that each module is actually a set of two cliques .we thus have seen that modules identified through modularity optimization may actually be combinations of smaller modules . during the process of modularity optimization, it is favorable to merge connected modules if they are sufficiently small .we have seen in the previous section that any two interconnected modules , fuzzy or not , are merged if the number of links inside each of them does not exceed .this means that the largest structure one can form by merging a pair of modules of any type ( including cliques ) has at least internal links . by reversing the argument, we conclude that if modularity optimization finds a module with internal links , it may be that the latter is a combination of two or more smaller communities if this example is an extreme case , in which the internal partition of can be arbitrary , as long as the pieces are modules in the sense discussed in section [ sec2 ] . 
under the condition ( [ eq19 ] ), the module could in principle be a cluster of loosely interconnected complete graphs .on the other hand , the upper limit of can be much larger than , if the substructures are on average more interconnected with each other , as we have seen in section [ sec4 ] .in fact , fuzzy modules can be combined with each other even if they contain many more than links .the more interconnected the modules , the larger will be the resulting supermodule . in the extreme case in which all submodules are very fuzzy, the size of the supermodule could be in principle as large as that of the whole network , i.e. .this result comes from the extreme case where the network is split in two very fuzzy communities , with internal links each and between them . by virtue of eq .( [ eq14 ] ) , it is favorable ( or just as good ) to merge the two modules and the resulting structure is the whole network .this limit is of course always satisfied but suggests here that it is important to carefully analyze all modules found through modularity optimization , regardless of their size .the probability that a very large module conceals substructures is however small , because that could only happen if all hidden submodules are very fuzzy communities , which is unlikely .instead , modules with a size or smaller can result from an arbitrary merge of smaller structures , which may go from loosely interconnected cliques to very fuzzy communities .modularity optimization is most likely to fail in these cases . in order to illustrate this theoretical discussion ,we analyze five examples of real networks : 1 . the transcriptional regulation network of _ saccharomyces cerevisiae _( yeast ) ; 2 .the transcriptional regulation network of _ escherichia coli _ ; 3 .a network of electronic circuits ; 4 . a social network ; 5 .the neural network of _ caenorhabditis elegans_. we downloaded the lists of edges of the first four networks from uri alon s website , whereas the last one was downloaded from the website of the collective dynamics group at columbia university . in the transcriptional regulation networks ,nodes represent operons , i.e. groups of genes that are transcribed on to the same mrna and an edge is set between two nodes a and b if a activates b. these systems have been previously studied to identify motifs in complex networks .there are nodes , links for yeast , nodes and links for _e. coli_. electronic circuits can be viewed as networks in which vertices are electronic components ( like capacitors , diodes , etc . ) and connections are wires .our network maps one of the benchmark circuits of the so - called iscas89 set ; it has nodes , links . in the social network we considered , nodes are people of a group and links represent positive sentiments directed from one person to another , based on questionnaires : it has nodes and links .finally , the neural network of _ c. elegans _ is made of nodes ( neurons ) , connected through links ( synapsis , gap junctions ) .we remark that most of these networks are directed , here we considered them as undirected .first , we look for the modularity maximum by using simulated annealing .we adopt the same recipe introduced in ref . 
, which makes the optimization procedure very effective .there are two types of moves to pass from a network partition to the next : individual moves , where a single node is passed from a community to another , and collective moves , where a pair of communities is merged into a single one or , vice versa , a community is split into two parts .each iteration at the same temperature consists of a succession of individual and collective moves , where is the total number of nodes of the network .the initial temperature and the temperature reduction factor are arbitrarily tuned to find the highest possible modularity : in most cases we took and between and .we found that all networks are characterized by high modularity peaks , with ranging from ( _ c . elegans _ ) to ( _ e .the corresponding optimal partitions consist of ( yeast ) , ( _ e .coli _ ) , ( electronic ) , ( social ) and ( _ c . elegans _ ) modules ( for _ e. coli _ our results differ but are not inconsistent with those obtained in for a slighly different database ; these differences however do not affect our conclusions ) .in order to check if the communities have a substructure , we used again modularity optimization , by constraining it to each of the modules found . in all cases, we found that most modules displayed themselves a clear community structure , with very high values of .the total number of submodules is ( yeast ) , ( _ e .coli _ ) , ( electronic ) , ( social ) and ( _ c . elegans _ ) , and is far larger than the corresponding number at the modularity peaks . the analysis of course is necessarily biased by the fact that we neglect all links between the original communities , and it may be that the submodules we found are not real modules for the original network . in order to verify that , we need to check whether the condition of eq .( [ eq2 ] ) is satisfied or not for each submodule and we found that it is the case .a further inspection of the communities found through modularity optimization thus reveals that they are , in fact , clusters of smaller modules .the modularity values corresponding to the partitions of the networks in the submodules are clearly smaller than the peak modularities that we originally found through simulated annealing ( see table [ tab1 ] ) . by restricting modularity optimization to a modulewe have no guarantee that we accurately detect its substructure and that this is a safe way to proceed .nevertheless , we have verified that all substructures we detected are indeed modules , so our results show that the search for the modularity optimum is not equivalent to the detection of communities defined through eq .( [ eq2 ] ) ..[tab1 ] results of the modularity analysis on real networks . in the second column ,we report the number of modules detected in the partition obtained for the maximal modularity .these modules however contain submodules and in the third column we report the total number of submodules we found and the corresponding value of the modularity of the partition , which is _ lower _ than the peak modularity initially found . [cols="^,^,^,^",options="header " , ] the networks we have examined are fairly small but the problem we pointed out can only get worse if we increase the network size , especially when small communities coexist with large ones and the module size distribution is broad , which happens in many cases . as an example , we take the recommendation network of the online seller amazon.com . 
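Before turning to the larger amazon example, a heavily simplified sketch of this kind of annealing run is given below (Python). It recomputes Q from scratch after every tentative move, uses an arbitrary temperature schedule, and implements only single-node moves plus occasional pairwise merges, so it conveys the structure of the procedure rather than the exact recipe or the efficiency of the reference implementation; on networks of realistic size one would cache degrees and update Q incrementally.

```python
import math
import random
from collections import defaultdict

def modularity(edges, part):
    L = len(edges)
    l_s, d_s = defaultdict(int), defaultdict(int)
    for u, v in edges:
        d_s[part[u]] += 1
        d_s[part[v]] += 1
        if part[u] == part[v]:
            l_s[part[u]] += 1
    return sum(l_s[s] / L - (d_s[s] / (2 * L)) ** 2 for s in d_s)

def anneal(edges, nodes, T0=0.05, cooling=0.95, sweeps=60):
    part = {n: n for n in nodes}                       # start from singleton communities
    Q, T = modularity(edges, part), T0
    for _ in range(sweeps):
        for _ in range(len(nodes) ** 2):               # individual (single-node) moves
            n = random.choice(nodes)
            old = part[n]
            part[n] = part[random.choice(nodes)]
            dQ = modularity(edges, part) - Q
            if dQ >= 0 or random.random() < math.exp(dQ / T):
                Q += dQ
            else:
                part[n] = old
        labels = sorted(set(part.values()))            # one collective move: merge two communities
        if len(labels) > 1:
            a, b = random.sample(labels, 2)
            trial = {n: (a if c == b else c) for n, c in part.items()}
            dQ = modularity(edges, trial) - Q
            if dQ >= 0 or random.random() < math.exp(dQ / T):
                part, Q = trial, Q + dQ
        T *= cooling
    return part, Q

# Tiny test: two triangles joined into a ring; the optimum puts each triangle in its own module.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3), (5, 0)]
best_part, best_Q = anneal(edges, list(range(6)))
print(best_Q, best_part)
```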
while buying a product , amazon recommends items which have been purchased by people who bought the same product . in this way it is possible to build a network in which the nodes are the items ( books , music ) , and there is an edge between two items and if was frequently purchased by buyers of .such a network was examined in ref . and is very large , with nodes and edges .the authors analyzed the community structure by greedy modularity optimization which is not necessarily accurate but represents the only strategy currently available for large networks .they identified communities whose size distribution is well approximated by a power law with exponent . from the size distribution , we estimated that over of the modules have sizes below the limit of eq .( [ eq19 ] ) , which implies that basically all modules need to be further investigated .in this article we have analyzed in detail modularity and its applicability to community detection .we have found that the definition of community implied by modularity is actually not consistent with its optimization which may favour network partitions with groups of modules combined into larger communities .we could say that , by enforcing modularity optimization , the possible partitions of the system are explored at a coarse level , so that modules smaller than some scale may not be resolved .the resolution limit of modularity does not rely on particular network structures , but only on the comparison between the sizes of interconnected communities and that of the whole network , where the sizes are measured by the number of links .the origin of the resolution scale lies in the fact that modularity is a sum of terms , where each term corresponds to a module . finding the maximal modularity is then equivalent to look for the ideal tradeoff between the number of terms in the sum , i.e. the number of modules , and the value of each term .an increase of the number of modules does not necessarily correspond to an increase in modularity because the modules would be smaller and so would be each term of the sum .this is why for some characteristic number of terms the modularity has a peak .the problem is that this `` optimal '' partition , imposed by mathematics , is not necessarily correlated with the actual community structure of the network , where communities may be very heterogeneous in size , especially if the network is large .our result implies that modularity optimization might miss important substructures of a network , as we have confirmed in real world examples . from our discussionwe deduce that it is not possible to exclude that modules of virtually any size may be clusters of modules , although the problem is most likely to occur for modules with a number of internal links of the order of or smaller .for this reason , it is crucial to check the structure of all detected modules , for instance by constraining modularity optimization on each single module , a procedure which is not safe but may give useful indications .the fact that quality functions such as the modularity have an intrinsic resolution limit calls for a new theoretical framework which focuses on a local definition of community , regardless of its size .quality functions are still helpful , but their role should be probably limited to the comparison of partitions with the same number of modules .acknowledgments. we thank a. barrat , c. castellano , v. colizza , a. flammini , j. kertsz and a. vespignani for enlightening discussions and suggestions .we also thank u. 
alon for providing the network data .m. e. j. newman , eur .j. b * 38 * , 321 - 330 ( 2004 ) .l. danon , a. daz - guilera , j. duch and a. arenas , j. stat .. , p. p09008 , ( 2005 ) .barabsi and r. albert , rev .phys . * 74 * , 47 - 97 ( 2002 ) .m. e. j. newman , siam review * 45 * , 167 - 256 ( 2003 ) .r. pastor - satorras and a. vespignani , _ evolution and structure of the internet : a statistical physics approach _( cambridge university press , cambridge , 2004 ) . m. girvan and m.e. j. newman , proc .sci . * 99 * , 7821 - 7826 ( 2002 ) .d. lusseau and m. e. j. newman , proc .london b * 271 * , s477-s481 ( 2004 ) .l. adamic and n. glance , proc . int .workshop on link discovery , 36 - 43 ( 2005 ) .p. holme , m. huss and h. jeong , bioinformatics * 19 * , 532 ( 2003 ) .s. l. pimm , theor .* 16 * , 144 ( 1979 ) ; a. e. krause , k. a. frank , d. m. mason , r. e. ulanowicz and w. w. taylor , nature * 426 * , 282 ( 2003 ) .garnett , j. p. hughes , r. m. anderson , b. p. stoner , s. o. aral , w. l. whittington , h. h. handsfield and k. k. holmes , sexually transmitted diseases * 23 * , 248 - 257 ( 1996 ) ; s. o. aral , j. p. hughes , b. p. stoner , w. l. whittington , h. h. handsfield , r. m. anderson and k. k. holmes , american journal of public health * 89 * , 825 - 833 ( 1999 ) .m. e. j. newman and m. girvan , phys .e * 69 * , 026113 ( 2004 ) .m. e. j. newman , physical review e * 69 * , 066133 ( 2004 ) .a. clauset , m. e. j. newman and c. moore , phys .e * 70 * , 066111 ( 2004 ) .j. duch and a. arenas , phys .e * 72 * , 027104 ( 2005 ) .r. guimer , m. sales - pardo and l. a. n. amaral , phys .e * 70 * , 025101(r ) ( 2004 ) .j. reichardt and s. bornholdt , preprint cond - mat/0603718 ( 2006 ) .
detecting community structure is fundamental to clarify the link between structure and function in complex networks and is used for practical applications in many disciplines . a successful method relies on the optimization of a quantity called modularity [ newman and girvan , phys . rev . e * 69 * , 026113 ( 2004 ) ] , which is a quality index of a partition of a network into communities . we find that modularity optimization may fail to identify modules smaller than a scale which depends on the total number of links of the network and on the degree of interconnectedness of the modules , even in cases where modules are unambiguously defined . the probability that a module conceals well - defined substructures is the highest if the number of links internal to the module is of the order of or smaller . we discuss the practical consequences of this result by analyzing partitions obtained through modularity optimization in artificial and real networks .
to accommodate the exponential growth of data traffic over the last few years , the space - division multiplexing ( sdm ) based on multi - core optical fiber ( mcf ) or multi - mode optical fiber ( mmf ) is expected to overcome the barrier from capacity limit of single - core fiber .the main challenge in sdm occurs due to in - band crosstalk between multiple parallel transmission channels ( cores / modes ) .this non - negligible crosstalk can be dealt with using multiple - input multiple - output ( mimo ) signal processing techniques .those techniques are widely used for wireless communication systems and they helped to drastically increase channel capacity . assuming important crosstalk between cores and/or modes , negligible backscattering and near - lossless propagation , we can model the transmission optical channel as a random complex unitary matrix .. in , authors appealed to the jacobi unitary ensemble ( jue ) to establish the propagation channel model for mimo communications over multi - mode / multi - core optical fibers .the jue is a matrix - variate analogue of the beta random variable and consists of complex hermitian random matrices which can be realized at least in two different ways .one of them mimics the construction of the beta random variable as a ratio of two independent gamma random variables : the latter are replaced by two independent complex hermitian wishart matrices whose sum is invertible .otherwise , one draws a haar - distributed unitary matrix then takes the square of the radial part of an upper left corner . by a known fact for unitarily invariant - random matrices , the average of any symmetric function with respect to the eigenvalues density can be expressed through the one - point correlation function , also known as the single - particle density . in particular , the ergodic capacity of a matrix drawn from the jue can be represented by an integral where the integrand involves the christoffel - darboux kernel associated with jacobi polynomials ( , p.384 ) .the drawback of this representation is the dependence of this kernel on the size of the matrix . indeed , its diagonal is written either as a sum of squares of jacobi polynomials and the number of terms in this sum equals the size of the matrix least one , or by means of the christoffel - darboux formula as a difference of the product of two jacobi polynomials whose degrees depend on the size of the matrix . to the best of our knowledge ,this is the first study that derives exact expression of the ergodic capacity as a double integral over a suitable region . in this paper, we provide a new expression for the ergodic capacity of the jacobi mimo channel relying this time on the formula derived in for the moments of the eigenvalues density of the jacobi random matrix .the obtained expression shows that the ergodic capacity is an average of some function over the signal - to - noise ratio , and it has the merit to have a simple dependence on the size of the matrix which allows for easier and more precise numerical simulations . by a limiting transition between jacobi and laguerre polynomials , we derive a similar expression for the ergodic capacity of the gaussian mimo channel .the paper is organized as follows . 
in section [ sec : notation ] , we settle down some notations and recall the definitions of random matrices and special functions occurring in the remainder of the paper .section [ sec : model ] presents the system model .the main results of this paper are presented in section [ sec : ec ] and are illustrated in section [ sec : numerical ] by numerical simulations followed by several comments .finally , the proofs of these results are provided in appendices .throughout this paper , the following notations and definitions are used .we start with those concerned with special functions for which the reader is referred to the standard book .the pochhammer symbol with and is defined by for , it is clear that where is the gamma function . note that if is a non positive integer then next , the gauss hypergeometric function is defined for complex by the convergent power series where denotes the pochhammer symbol defined in and are real parameters with .the function has an analytic continuation to the complex plane cut along the half - line .in particular , the jacobi polynomials of degree and parameters , can also be expressed in terms of the gauss hypergeometric function as follows an important asymptotic property of the jacobi polynomial is that it can be reduced to the -th laguerre polynomial of parameter through the following limit now , we come to the notations and the definitions related with random matrices , and refer the reader to .firstly , the hermitian transpose and the determinant of a complex matrix are denoted by and respectively .secondly , the laguerre unitary ensemble ( lue ) is formed out of non negative definite matrices where is a rectangular matrix , with , whose entries are complex independent gaussian random variables . a matrix from the lue is often referred to as a complex wishart matrix and are its degrees of freedom and its size respectively .finally , let and be two independent and complex wishart matrices .assume , then is positive definite and belongs to the jue .the matrix is unitarily - invariant and satisfies where stand for the null and the identity matrices respectively and , we write when is a non negative matrix . ] .if then and are positive definite and the joint distribution of the ordered eigenvalues of has a density given by ^ 2 { \bf 1}_{0 < \lambda_1 < \dots < \lambda_n < 1 } \label{eq : tmpconc}\ ] ] with respect to lebesgue measure . here , , is a normalization constant read off from the selberg integral : and is the vandermonde polynomial .another construction of matrices from the jue is as follows : let be an haar - distributed unitary matrix .let and be two positive integers such that and .let also be the upper - left corner of , then the joint distribution of the ordered eigenvalues of is given by with parameters , , and .we consider an optical space - division multiplexing where the multiple channels correspond to the number of excited modes / cores within the optical fiber .the coupling between different modes and/or cores can be described by scattering matrix formalism . 
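before specializing to the optical channel in the next paragraph , the two constructions of the jue recalled above , and the ergodic capacity studied in the remainder of the paper , can be illustrated by a small monte carlo sketch . it is only a sketch under stated assumptions : the channel is taken as the upper - left corner of a haar - distributed unitary matrix , the capacity per channel use as E [ log2 det ( I + ( rho / m_t ) H H^dagger ) ] with equal power over the transmitting channels ( the exact power normalization is a convention fixed by the elided formulas in the text ) , and the sizes , snr and sample counts below are arbitrary .

```python
# monte carlo sketch (illustrative only) of the jacobi fading model and its
# ergodic capacity. assumptions: h is the m_r x m_t corner of an m x m haar
# unitary, and the capacity is E[ log2 det( I + (rho/m_t) h h^H ) ].
import numpy as np
from scipy.linalg import qr, eigh

rng = np.random.default_rng(0)

def haar_unitary(m):
    """haar-distributed m x m unitary (qr of a complex ginibre matrix, phases fixed)."""
    z = (rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def jacobi_eigs_corner(m, m_t, m_r):
    """squared singular values of the m_r x m_t corner of a haar unitary."""
    h = haar_unitary(m)[:m_r, :m_t]
    return np.linalg.eigvalsh(h @ h.conj().T)

def jacobi_eigs_wishart(n, p1, p2):
    """eigenvalues of (w1 + w2)^(-1) w1 for independent complex wishart matrices."""
    a = (rng.standard_normal((n, p1)) + 1j * rng.standard_normal((n, p1))) / np.sqrt(2)
    b = (rng.standard_normal((n, p2)) + 1j * rng.standard_normal((n, p2))) / np.sqrt(2)
    w1, w2 = a @ a.conj().T, b @ b.conj().T
    return eigh(w1, w1 + w2, eigvals_only=True)

m, m_t, m_r, rho, trials = 8, 4, 3, 10.0, 10000
cap = np.mean([np.sum(np.log2(1.0 + rho / m_t * jacobi_eigs_corner(m, m_t, m_r)))
               for _ in range(trials)])
print(f"monte carlo ergodic capacity  C(rho={rho}) ~ {cap:.3f} bit/s/Hz")

# rough consistency check of the two constructions (here m_r <= m_t and
# m_t + m_r <= m, matched to wishart parameters n = m_r, p1 = m_t, p2 = m - m_t)
mc = np.mean([jacobi_eigs_corner(m, m_t, m_r).mean() for _ in range(2000)])
mw = np.mean([jacobi_eigs_wishart(m_r, m_t, m - m_t).mean() for _ in range(2000)])
print(f"mean eigenvalue: corner construction {mc:.3f}   wishart construction {mw:.3f}")
```

both mean eigenvalues come out close to one half , as expected for the symmetric parameter choice used here , and the same samples can be reused to check the double - integral expression derived below against simulation .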
in this paper , we consider -channel lossless optical fiber with transmitting excited channels and receiving channels .the scattering matrix formalism can describe very simply the propagation through the fiber using scattering matrix given as where the block matrices and describe the reflection from left to left and from right to right of the fiber , respectively , and and describe the transmission through the fiber from left to right and from right to left , respectively .since the fiber is assumed to be lossless and time - reversal , the scattering matrix must be a complex unitary symmetric matrix , ( ) .therefore , the four hermitian matrices , , , and have the same set of eigenvalues .each of these transmission eigenvalues is a real number belong to the interval ] and use the taylor expansion to get consequently , changing the summation order and performing the index change in , we get now , observe that the product displayed in the right hand side of the last equality vanishes whenever due to the presence of the factor .thus , the first series terminates at and together with the index change in the product lead to next , we compute for each and similarly altogether , the ergodic capacity reads but the series as well as its derivatives with respect to converge uniformly in any closed sub - interval in ,1[ ] . from ( * ? ? ?* eq . ( 4.4.6 ) ) , we readily deduce that the hypergeometric function coincides up to a multiplicative factor with the jacobi function of the second kind in the variable related to by consequently , c(\rho ) = 2 b_{a , b , n } \frac{(1+\rho)^{b-2}}{\rho^{a+b-1}}p_{n-1}^{a-1,b}\left(\frac{\rho+2}{\rho}\right)q_n^{a-1,b-2}\left(\frac{\rho+2}{\rho}\right)\ ] ] where moreover , recall from ( * ? ? ?* eq . ( 4.4.2 ) ) , that ( note that ) as a result c(\rho ) & = b_{a , b , n } \frac{1}{2^{a+b-3}\rho^{2}}p_{n-1}^{a-1,b}\left(\frac{\rho+2}{\rho}\right)\int_{-1}^1(1-u)^{a-1}(1+u)^{b-2 } \\ & \frac{p_n^{a-1,b-2}(u)}{((\rho+2)/\rho ) - u } du \\ & = b_{a , b , n } \frac{1}{2^{a+b-3}\rho^{2}}\int_{-1}^1(1-u)^{a-1}(1+u)^{b-2}\left(p_{n-1}^{a-1,b}\left(\frac{\rho+2}{\rho}\right ) - p_{n-1}^{a-1,b}(u)\right ) \\ & \frac{p_n^{a-1,b-2}(u)}{((\rho+2)/\rho ) - u } du \\ & + b_{a , b , n } \frac{1}{2^{a+b-3}\rho^{2}}\int_{-1}^1(1-u)^{a-1}(1+u)^{b-2}p_{n-1}^{a-1,b}(u)\frac{p_n^{a-1,b-2}(u)}{((\rho+2)/\rho ) - u } du.\end{aligned}\ ] ] since is a polynomial of degree , then the orthogonality of the jacobi polynomials entails c(\rho ) & = b_{a , b , n } \frac{1}{2^{a+b-3}\rho^{2}}\int_{-1}^1(1-u)^{a-1}(1+u)^{b-2}p_{n-1}^{a-1,b}(u)\frac{p_n^{a-1,b-2}(u)}{((\rho+2)/\rho ) - u } du \\ & = b_{a , b , n } \frac{1}{2^{a+b-3}}\int_{-1}^1(1-u)^{a-1}(1+u)^{b-2}p_{n-1}^{a-1,b}(u)\frac{p_n^{a-1,b-2}(u)}{\rho(\rho+2- \rho u ) } du .\end{aligned}\ ] ] writing , \ , u \in [ -1,1],\ ] ] and using again the orthogonality of jacobi polynomials , we get c(\rho ) = -\frac{b_{a , b , n}}{2^{a+b-2}}\int_{-1}^1(1-u)^{a}(1+u)^{b-2}\frac{p_{n-1}^{a-1,b}(u)p_n^{a-1,b-2}(u)}{(\rho(1-u)+2 ) } du\end{aligned}\ ] ] which makes sense for .a first integration with respect to gives (\rho ) & = -\frac{b_{a , b , n}}{2^{a+b-2}}\int_{-1}^1(1-u)^{a-1}(1+u)^{b-2}p_{n-1}^{a-1,b}(u)p_n^{a-1,b-2}(u ) \\ & [ \ln(\rho(1-u)+2 ) - \ln 2 ] du\end{aligned}\ ] ] and a second one leads to }{v } dv\right\ } du.\end{aligned}\ ] ] performing the variable changes in the last expression , we end up with }{v } dv\right\ } du\end{aligned}\ ] ] for any . 
by analytic continuation, this formula extends to the cut plane and is in particular is valid for .specializing it to , and completes the proof of the theorem .perform the variable change in the definition of : on the other hand , our obtained expression for the ergodic capacity together with the variable change entail : now and similarly moreover , the limiting transition yields as a result , }{v } dv\right\ } du .\end{aligned}\ ] ] finally , is the normalization constant of the density of the joint distribution of the ordered eigenvalues of a complex wishart matrix .the theorem is proved .d. j. richardson , j. m. fini , and l. e. nelson , `` space - division multiplexing in optical fibres , '' in _ nat .7(5 ) , pp.354 - 362 , 2013 .r. ryf , s. randel , a. h. gnauck , c. bolle , a. sierra , s. mumtaz , m. esmaeelpour , e. c. burrows , r. essiambre , p. j. winzer , d. w. peckham , a. h. mccurdy , r. lingle , r. , `` mode - division multiplexing over 96 km of few - mode fiber using coherent mimo processing , '' in _ lightwave technology , journal of _ , vol.30 , no.4 , pp.521 - 531 , 2012 . w. klaus , j. sakaguchi , b. j. puttnam , y. awaji , n. wada , `` optical technologies for space division multiplexing , '' in information optics ( wio ) , 2014 13th workshop on , vol ., no . , pp.1 - 3 , 2014 .v. tarokh , n. seshadri , a. r. calderbank , `` space - time codes for high data rate wireless communication : performance criterion and code construction , '' in _ information theory , ieee transactions on _ , vol.44 , no.2 , pp.744 - 765 , 1998 . l. zheng , d. n. c. tse , `` diversity and multiplexing : a fundamental tradeoff in multiple - antenna channels , '' in _ information theory , ieee transactions on _ , vol.49 , no.5 , pp.1073 - 1096 , 2003 . m. taherzadeh , a. mobasher , a. k. khandani , `` lll reduction achieves the receive diversity in mimo decoding , '' in _ information theory , ieee transactions on _ , vol.53 , no.12 , pp.4801 - 4805 , 2007 .a. ghaderipoor , c. tellambura , a. paulraj , `` on the application of character expansions for mimo capacity analysis , '' in _ information theory , ieee transactions on _ , vol .5 , pp . 2950 - 2962 , 2012 .r. dar , m. feder , m. shtaif , `` the jacobi mimo channel , '' in _ information theory , ieee transaction on _ , vol .2426 - 2441 , 2013 .p. j. winzer , g. j. foschini , `` mimo capacities and outage probabilities in spatially multiplexed optical transport systems , '' in _ opt . express _ 19 , 16680 - 16696 , 2011 .a. karadimitrakis , a. l. moustakas , p. vivo , `` outage capacity for the optical mimo channel , '' in _ information theory , ieee transactions on _ , vol.60 , no.7 , pp.4370 - 4382 , 2014 . s. h. simon , a. l. moustakas , `` crossover from conserving to lossy in circular random matrix ensembles , '' in _ physical review letters _96 , no . 13 , pp .136 - 805 , 2006 .b. collins , `` product of random projections , jacobi ensembles and universality problems arising from free probability , '' in _ probab . theory related fields _ , 133(3 ) , pp .315 - 344 , 2005 .m. l. mehta , _ random matrices _ , academic press inc .boston , ma , second edition , 1991 .j. forrester , _ log - gases and random matrices _ , london mathematical society monographs , princeton university , princeton , nj , 2007 .c. carr , m. deneufchatel , j. g. luque , p. vivo , `` asymptotics of selberg - like integrals : the unitary case and newton s interpolation formula , '' in _ journal of mathematical physics _ , 51(12 ) , 123516 , 2010 . m. e. h. 
ismail , _ classical and quantum orthogonal polynomials in one variable _ , cambridge univ . press . 2005 .e. telatar , `` capacity of multi - antenna gaussian channels , '' in _ europ .585 - 596 , 1999 . c. w. j. beenakker , `` random - matrix theory of quantum transport , '' in _ rev ._ , vol 69,no 3 , pp . 731 - 808 , 1997 .j. forrester , `` quantum conductance problems and the jacobi ensemble , '' in _ j. phys . a : math .39 , pp . 6861 - 6870 , 2006 .t. jiang , `` approximation of haar distributed matrices and limiting distributions of eigenvalues of jacobi ensembles , '' in _ probab . theory related fields _ , no 1 - 2 , pp .221 - 246 , 2009 .
multimode / multicore fibers are expected to provide an attractive solution to overcome the capacity limit of current optical communication systems . in the presence of high crosstalk between modes / cores , the squared singular values of the input / output transfer matrix follow the law of the jacobi ensemble of random matrices . assuming that the channel state information is only available at the receiver , we derive in this paper a new expression for the ergodic capacity of the jacobi mimo channel . this expression involves double integrals which can be evaluated easily and efficiently . moreover , the method used in deriving this expression does not appeal to the classical one - point correlation function of the random matrix model . using a limiting transition between jacobi and laguerre polynomials , we derive a similar formula for the ergodic capacity of the gaussian mimo channel . the analytical results are compared with monte carlo simulations and related results available in the literature . a perfect agreement is obtained . jacobi mimo channel , gaussian mimo channel , jacobi polynomials , laguerre polynomials , ergodic capacity .
molecular dynamics ( md ) generates the time evolution of classical mechanical particles by discrete time propagation . almost allthe md are obtained by the `` verlet '' algorithm ( va ) where a new position of the particle with mass at time is obtained from the force and the two last discrete positions the algorithm is the central - difference expression for the mass times the acceleration of the particle which equals the force , and it appears in the literature with different names ( verlet , leap - frog , velocity verlet , .. ) .the algorithm is time reversible and symplectic , and the different reformulations of the algorithm do not change the discrete time evolution and the physics obtained by the va dynamics .mathematical investigations have proved the existence of a shadow hamiltonian for symplectic algorithms . the proof is obtained by an asymptotic expansion , but the series for the shadow hamiltonian does not converge in the general case . for a review of the asymptotic expansion , its convergence and optimal truncation see . only the harmonic approximation , of the first term in this expansion is known explicitly . but inclusion of in the traditional obtained zero order energy for md systems with lennard - jones ( lj ) particles reduces the fluctuation in the energy by a factor of hundred for traditional values of and makes it possible to obtain the shadow energy , of the analytic dynamics with high precision .the va algorithm deviates , however , from all other algorithms for classical dynamics by that .furthermore , the discrete va dynamics for a harmonic oscillator ( ddho ) , which can be solved exactly , reveals that the ddho not only has an asymptotic expansion with an underlying analytic shadow hamiltonian .but the ddho dynamics also has a which , independent of an existence of an analytic shadow hamiltonian , is conserved step by step during the discrete time evolution . belowwe show that this hidden invariance is a general quality of the discrete va dynamics , independent of the existence of a shadow hamiltonian , and that the discrete va dynamics has the same qualities and conserved invariances as analytic newtonian dynamics . in order to prove the existence of a hidden energy invariance we must not make use of any analytic tools .this might seem to be a hopeless agenda , but on the other hand the exact solution for a discrete harmonic oscillator makes no use of analyticity and the exact solution has an energy invariance which , in the analytic limit is equal to the energy of analytic newtonian dynamics .the kinetic energy in analytic dynamics is obtained from the momenta . the positions in eq .( 1 ) are the only dynamic variables in the discrete va dynamics , i.e. , the momenta , , are not .consequently , an expression for the total momentum of the system requires a choice of an expression for the momentum of the particle in terms of its positions .the sentences `` momenta '' , `` energy '' , `` potential energy '' , `` kinetic energy '' , and `` work '' should be given by quotations in discrete va dynamics to underline the fact , that the objects in the discrete dynamics only exercise mutual `` irritations '' , or forces at their positions at the discrete time . 
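the step - by - step conservation mentioned above for the discrete harmonic oscillator is easy to verify numerically . the sketch below is only an illustration : it propagates the central - difference algorithm , eq . ( 1 ) , for a one - dimensional oscillator with unit mass and monitors two quantities , the usual energy built from the on - step central - difference velocity , which fluctuates at order ( time increment ) squared , and one convenient step - wise invariant of the discrete map , ( 1/2 ) ( ( x_{n+1} - x_n ) / dt )^2 + ( 1/2 ) omega^2 x_n x_{n+1} , which is conserved to machine precision ; we do not claim that this is the particular expression appearing in the elided equations , only that such an exactly conserved combination exists .

```python
# verlet ("position verlet") propagation of a harmonic oscillator, unit mass.
# illustration only: the step-wise invariant written here is one convenient
# choice of exactly conserved quantity for the discrete map, not necessarily
# the expression used in the paper.
import numpy as np

omega, dt, nsteps = 1.0, 0.1, 2000
x0, v0 = 1.0, 0.0
x_prev = x0 - v0 * dt - 0.5 * (omega * dt) ** 2 * x0   # backward taylor step for x(-dt)
x = x0

naive, invariant = [], []
for _ in range(nsteps):
    x_next = 2.0 * x - x_prev - (omega * dt) ** 2 * x  # eq. (1) with f = -omega**2 x, m = 1
    v_cen = (x_next - x_prev) / (2.0 * dt)             # on-step central-difference velocity
    v_half = (x_next - x) / dt                         # half-step ("leap-frog") velocity
    naive.append(0.5 * v_cen ** 2 + 0.5 * (omega * x) ** 2)
    invariant.append(0.5 * v_half ** 2 + 0.5 * omega ** 2 * x * x_next)
    x_prev, x = x, x_next

naive, invariant = np.array(naive), np.array(invariant)
print("relative spread of the naive energy    :", np.ptp(naive) / naive.mean())
print("relative spread of the step invariant  :", np.ptp(invariant) / invariant.mean())
```

with dt = 0.1 the naive energy wanders at the 10^-3 level while the step invariant is flat to rounding error , which is the discrete analogue of the energy conservation discussed in the following paragraphs .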
with the definition of the momenta it follows immediately from the algorithm that the total momentum and angular momentum are conserved for conservative systems with .but the momenta and thereby the `` kinetic energies '' appear with the discrete positions and they are not a function of a single set of the discrete positions .the proof of an invariance , equivalent to the conserved energy in the analytic dynamics is more difficult , but it can be obtained by proving that there exists a hidden `` energy '' invariance , , of the objects dynamics with the change by the discrete step which brings the positions with the forces at time to at . for simplicity consider particles with equal masses , and with the mass included in the discrete time increment , i.e. with .a step with discrete dynamics changes the `` kinetic energy '' of the system by and its ability , , to perform a `` work '' , . since the momenta andthereby the kinetic energy is given by two sets of positions , a change in kinetic energy is given by three consecutive sets of positions .the proof is obtained by consider two consecutive time steps .a new set of positions , is obtained at the time step from the two previous sets , , and the forces , by which the change in the `` kinetic energy '' can be defined as the definition of the change in the kinetic energy for discrete va dynamics is consistent with the definition of the momenta , eq .the forces bring the particles to the positions and with a change in the ability , , to perform a `` work '' , .for the two steps we define the total change and the discrete dynamics obeys the relation the proof starts by noticing that if one instead of the ( nve ) dynamics , obtained by eq .( 1 ) with a constant time increment , adjust the time increment so , one obtains a geodesic step ( nvu ) to the positions which differs from . if is inserted in the verlet algorithm ; eq . ( 1 ) one obtains an expression for at the nvu step at time .e. instead of propagating the system the step with the constant time increment , the increment is adjusted to ensure that the system ability to perform a work is unchanged .the nvu step at time updates the position to and with the geodesic invariance : the constant length of the steps which is obtained by rearranging and squaring eq .( 7 ) i.e. with the change in `` kinetic energy '' the nvu step to obeys we are now able to proof the existence of an `` energy '' invariance ( eq .( 5 ) ) by the va dynamics , eq . ( 1 ) .the proof can e.g. be obtained by deriving the difference between the nvu and the nve step at . the new positions and both obtained from and , but with different time increments . with nve difference can be obtained from the verlet algorithm , eq .( 1 ) and the nvu algorithm , eqn . 
( 7 ) and ( 8) , and gives the change in the `` kinetic energy '' is obtained from the nve algorithm eq .( 1 ) and eq .( 5 ) for the discrete dynamics is obtained from eqn .( 15 ) and ( 16 ) .the change in kinetic energy in discrete dynamics must necessarily be obtained from two consecutive steps and the change in the systems ability to perform a work is consistently obtained from the same sets of positions .but , by eliminating in eq .( 4 ) , one obtains an expression for the change in the ability per time step , in the newtonian dynamics the existence of a potential energy state function , , is ensured by that the total work done around any closed circuit from is zero .the va dynamics is started from two sets of positions and , and the time increment .the ability to perform a discrete work is also a state function and it plays the same role as the potential energy for analytic newtonian dynamics . consider any discrete closed sequence of positions generated with va dynamics with the time increment and which starts and ends with the same two configuration and . the total change in the kinetic energy is the start ability is , and since all the terms in are zero accordingly to eq . ( 5 ) it implies that the energy invariance is given by the start condition for the discrete dynamics and it differs from the energy invariance of newtonian dynamics .it is a state function , and due to the discrete dynamics it depends on two consecutive sets of the positions , , instead of the energy invariance in newtonian dynamics which depends on the positions and the momenta at the same time .the two invariances are , however , equal in the analytic limit where the term is the from the newtonian dynamics . the invariance , eq .( 20 ) does not depend on a convergence of an asymptotic expansion , and it differs also from the shadow energy for the shadow hamiltonian by that , although the change contains two terms , the expressions for their changes do not make use of a potential , but only of the forces and the discrete positions .it is obtained by noticing that , with a suitable definition of the `` work '' and kinetic energy , , and by formulating the requirement that is a state function .the derivation is a copy of the derivation of the energy invariance for newtonian dynamics . in thermodynamics the first law of thermodynamicsis formulated exactly in the same manner , but as a basic assumption of that the energy function is a state function consisting of two terms which change by work and kinetic energy exchanges , and the present formulations is the corresponding formulation of the energy conservation in dynamics and thermodynamics for discrete va dynamics .the formulation of energy in discrete va dynamics by the ability to perform a discrete work instead of the potential energy works equally well as the traditional formulation .molecular dynamics simulations with va for particles are obtained from two consecutive start sets of positions , and , and these positions define not only the total dynamics evolution , but also the mean value of traditional zero order energy , , the accurate first order estimate of the shadow energy , of the underlying analytic dynamics and the exact energy invariance .since the change in is given in the same manner as the energy conservation by the first law of thermodynamics , we need to define a `` start ability '' , for the discrete dynamics .but the discrete va dynamics differs in fact neither from the analytic counterpart at this point . 
in principle we could obtain the ability at the start of the simulation by determining the discrete work performed by bringing the particles from infinite separations via to the positions . in the thermodynamicsone defines , however , a standard state of energy ( enthalpy ) , and here we will use the potential energy at the positions and the accurate estimate , at the start of the dynamics , and obtain and the energy invariance \ ] ] with the energy evolution by md in double precision arithmetic with va was determined for two systems . in the first a liquid system of lj particles at the density was calibrated at the temperature .the thermostat was switched off and the energy evolution in the next ten thousand time steps with was obtained .figure 1 shows the energy evolution and for the first hundred time steps .the first order estimate of ( green dashes ) improves the accuracy of the energy determination with a factor of hundred , the energy invariance is exact ( see inset ) .the constant va dynamics is obtained from two sets of positions , and and these start values can not contain information about whether the system is in equilibrium or not . in order to obtain the evolution of the kinetic energy and the ability in a non - equilibrium system with va dynamicsa system was started with two sets of positions which correspond to a non - equilibrium state .the non - equilibrium state was obtained for a system of lj particles in a fcc solid at and density ( density of coexisting solid at ) by spontaneously expanding the positions to the density by a scaling of all the positions .a lj systems equilibrium state at the density is a liquid .the fcc ordered system melted spontaneously , and the conservative systems temperature decreased according to the second law of thermodynamics .the change in the temperature at the spontaneous melting is shown with green dashes in figure 2 .the temperature decreased from within 20 - 40 time steps to .the differences between and the temperature , obtained for the shadow hamiltonian are of the order , and they are not visible on the figure .the decrease in the spontaneous temperature at the melting was balanced by a corresponding increase in the ability ( red line ) , and the energy invariance ( blue small dashes ) was constant in the conservative system .the two md simulations ( figure 1 and figure 2 ) demonstrate that the traditional and the present ( discrete ) energy concept work equally well .the discrete va dynamics has the same invariances as newtonian dynamics and it raises the question : which of these formulations that are correct , or alternatively , the most appropriate formulation of classical dynamics ? in this context t. d. lee in 1983 wrote a paper entitled , `` can time be a discrete dynamical variable ? 
'' ; which led to a series of publications by lee and collaborators on the formulation of fundamental dynamics in terms of difference equations , but with exact invariance under continuous groups of translational and rotational transformations .quoting lee , he `` wish to explore an alternative point of view : that physics should be formulated in terms of difference equations and that these difference equations could exhibit all the desirable symmetry properties and conservation laws '' .lee s analysis covers not only classical mechanics , but also non relativistic quantum mechanics and relativistic quantum field theory , and gauge theory and lattice gravity .the discrete dynamics is obtained by treating positions and time , , as a discrete dynamical variables , and he obtained a conserved ( mean ) `` energy '' over consecutive time intervals of different lengths . but according to lee in his formulation of discrete mechanics , `` there is a or time ( in natural units ) .given any time interval , the total number of discrete points that define the trajectory is given by the integer nearest . ''the analogy between lee s formulation of discrete dynamics and va dynamics is striking . for the va dynamicsone uses a , and the momenta are not dynamical variables and they have no impact on the discrete dynamics .the fundamental length and time in quantum electrodynamics are the planck length m and planck time s , and they are immensely smaller than the length unit ( given by the floating point precision ) and time increment used in md to generate the classical discrete dynamics .but the analogy implies that the discrete va dynamics obtained by md is the `` continuation '' of the lee s discrete quantum dynamics for a fundamental length of time , as is the analytic classical dynamics of the traditional quantum mechanics , given by the wigner expansion .the discrete non relativistic quantum mechanics is obtained by lee using feynman s path integration formalism , but for discrete positions and a corresponding discrete action , \ ] ] where is the end - positions at time and the minimum of determines the classical path .the action is a sum over products of time increments and `` kinetic energies '' , and lee has used the symbol , for the average of `` potential energy '' in the time intervals $ ] .the momenta for all the paths , given by the discrete nodes are obtained from differences , , so the classical va discrete trajectory is the classical limit path for discrete quantum mechanics with , as the classical newtonian trajectory is for the traditional quantum mechanics .there is , however , one important difference between the analytic and the discrete dynamics .the momenta in the discrete quantum dynamics are obtained by a difference between discrete sets of positions and they are all with the positions . 
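the statement that the minimum of the discrete action singles out the classical verlet path can also be checked directly . the following sketch assumes the simplest node - based discretization of the action , s = sum_k [ ( m/2 ) ( ( x_{k+1} - x_k ) / dt )^2 - v ( x_k ) ] dt ( the text above uses an interval - averaged potential , so this choice is an assumption ) , fixes the two end positions , and minimizes s over the interior points ; the stationarity condition ds / dx_k = 0 is literally the verlet recursion , and the minimizer agrees with the trajectory generated by eq . ( 1 ) to the optimizer tolerance . the potential and all numbers are illustrative .

```python
# numerical check that the verlet trajectory is the stationary (here minimal)
# path of a discrete action. illustrative sketch; the action uses the potential
# evaluated at the nodes, which is an assumption about the elided discretization.
import numpy as np
from scipy.optimize import minimize

m, dt, nsteps = 1.0, 0.05, 20
V  = lambda x: 0.5 * x**2 + 0.25 * x**4          # anharmonic test potential
dV = lambda x: x + x**3

# reference trajectory from the verlet recursion, eq. (1)
x = np.empty(nsteps + 1)
x[0], x[1] = 1.0, 1.0                            # two start positions define the path
for k in range(1, nsteps):
    x[k + 1] = 2.0 * x[k] - x[k - 1] - dt**2 * dV(x[k]) / m

def action(interior, x0, xN):
    path = np.concatenate(([x0], interior, [xN]))
    kinetic = 0.5 * m * np.sum((np.diff(path) / dt) ** 2)
    potential = np.sum(V(path[:-1]))
    return (kinetic - potential) * dt

def action_grad(interior, x0, xN):
    path = np.concatenate(([x0], interior, [xN]))
    # dS/dx_k = m (2 x_k - x_{k-1} - x_{k+1}) / dt - dt V'(x_k):
    # setting this to zero is exactly the verlet recursion above.
    return m * (2 * path[1:-1] - path[:-2] - path[2:]) / dt - dt * dV(path[1:-1])

guess = np.linspace(x[0], x[-1], nsteps + 1)[1:-1]      # straight line between the end points
res = minimize(action, guess, args=(x[0], x[-1]), jac=action_grad, method="BFGS")
print("max |verlet path - minimal-action path| =", np.max(np.abs(res.x - x[1:-1])))
```

we now return to the point made above , that the discrete momenta are obtained as differences between sets of positions .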
so the heisenberg uncertainty is a trivial consequence of a discrete quantum electrodynamics with a fundamental length of time .lee motivates his reformulation of the analytic dynamics in the introduction in by the difficulties of formulating a general theoretical model for dynamics and with the concluding remarks in that ( he tries to explore the opposite viewpoint ) : `` difference equations are more fundamental , and differential equations are regarded as approximations '' .the difference in the energy between the analytic energy and the energy obtained by eq .( 21 ) for discrete electrodynamics with a unit time increment is of the order , and it is absolute marginal .the heisenberg uncertainty between positions and momenta is of the order and this uncertainty is an inherent quality of discrete dynamics with a fundamental length of time .the discrete classical va dynamics is fundamentally different from analytic newtonian dynamics , but has the same invariances and the dynamics is obtained equally well by both methods .but , on the other hand the traditional quantum mechanics is in all manner fully appropriate and justifies no revision of the formulation , and an eventual revision of the dynamics must be justified by other facts than conservation of the energy by classical molecular dynamics simulation with the va algorithm .the author acknowledges useful discussions with ole j heilmann and jeppe c dyre .the centre for viscous liquid dynamics `` glass and time '' is sponsored by the danish national research foundation ( dnrf ) grant no . + 99 l. verlet , phys . rev . *159 * , 98 ( 1967 ) .s. toxvaerd , o. j. heilmann and j. c. dyre , j. chem . , 224106 ( 2012 ) .j. m. sanz - serna , acta numer . * 1 * , 243 ( 1992 ) .e. hairer , ann .numer . math .* 1 * , 107 ( 1994 ) . s. reich , siam j. numer . anal . * 36 * , 1549 ( 1999 ) .s. toxvaerd , phys rev .e , * 50 * , 2271 ( 1994 ) .( the word was introduced in this paper , inspired by the terms [ h. yoshida , phys .a * 150 * , 262 ( 1990 ) ] and [ c. grebogi , s. m. hammel , j. a : yorke and t. saur , phys .lett . * 65 * , 1527 ( 1990 ) ] ) .e. hairer , c. lubich and g. wanner _geometrical numerical integration _( springer books archives , 2006 ) s. toxvaerd , j. chem . , 224106 ( 2013 ) .t. s. ingebrigtsen , s. toxvaerd , o. j. heilmann , t. b. schrder , and j. c. dyre , j. chem . phys .* 135 * , 104101 ( 2011 ) .h. goldstein , c. p. poole , and j. safko _ classical mechanics _ third edition .( pearson , 2011 ) .chapter 1 .m. a. barroso and a. l. ferreira , j. chem . phys . *116 * , 7145 ( 2002 ) .t. d. lee , phys . lett . *122 b * , 217 ( 1983 ) . t. d. lee , j. stat . phys . * 46 * , 843 ( 1987 ) .r. friedberg and t. d. lee , nucl .* b 225 [ fs9 ] * , 1 ( 1983 ) .see e.g. l. j. garay , j. mod .phys . a * 10 * , 145 ( 1995 ) .e. wigner , phys . rev . * 40 * , 749 ( 1932 ) .
for discrete classical molecular dynamics ( md ) obtained by the `` verlet '' algorithm ( va ) with the time increment there exists a shadow hamiltonian with energy , for which the discrete particle positions lie on the analytic trajectories for . here we prove that , independent of such an analytic analogy , there exists an exact hidden energy invariance for va dynamics . the fact that the discrete va dynamics has the same invariances as newtonian dynamics raises the question of which of the formulations is correct , or alternatively , which is the most appropriate formulation of classical dynamics . in this context the relation between the discrete va dynamics and the ( general ) discrete dynamics investigated by t. d. lee [ phys . lett . , 217 ( 1983 ) ] is presented and discussed .
thermal fluctuation is one of the fundamental noise sources in precise measurements .for example , the sensitivity of interferometric gravitational wave detectors is limited by the thermal noise of the mechanical components. the calculated thermal fluctuations of rigid cavities have coincided with the highest laser frequency stabilization results ever obtained .it is important to evaluate the thermal motion for studying the noise property .the ( traditional ) modal expansion has been commonly used to calculate the thermal noise of elastic systems .however , recent experiments have revealed that modal expansion is not correct when the mechanical dissipation is distributed inhomogeneously . in some theoretical studies , calculation methods that are completely different from modal expansion have been developed .these methods are supported by the experimental results of inhomogeneous loss . however , even when these method were used , the physics of the discrepancy between the actual thermal noise and the traditional modal expansion was not fully understood . in this paper , another method to calculate the thermal noiseis introduced .this method , advanced modal expansion , is a modification of the traditional modal expansion ( this improvement is a general extension of a discussion in ref .the thermal noise spectra estimated by this method are consistent with the results of experiments concerning inhomogeneous loss .it provides information about the disagreement between the thermal noise and the traditional modal expansion .we present the details of these topics in the following sections .the thermal fluctuation of the observed coordinate , , of a linear mechanical system is derived from the fluctuation - dissipation theorem , , \label{fdt}\\ h_{x}(\omega)&=&\frac{\tilde{x}(\omega)}{\tilde{f}(\omega ) } , \label{transfer function}\\ \tilde{x}(\omega)&=&\frac1{2\pi } \int^{\infty}_{-\infty}x(t)\exp(-{\rm i}\omega t)dt , \label{fourier transform}\end{aligned}\ ] ] where , , and , are the frequency , time , boltzmann constant and temperature , respectively .the functions ( , , and ) are the ( single - sided ) power spectrum density of the thermal fluctuation of , the transfer function , and the generalized force , which corresponds to . 
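as a minimal illustration of the fluctuation - dissipation recipe just quoted , before any modal machinery is introduced , the spectrum of a single velocity - damped oscillator can be written down in a few lines . the conventions below ( mechanical susceptibility h_x = 1 / ( m ( omega_0^2 - omega^2 ) + i gamma omega ) and single - sided spectrum g_x = - ( 4 k_b t / omega ) im h_x ) are the standard ones and are assumed to match the elided formulas ; the oscillator parameters are arbitrary .

```python
# minimal evaluation of the fluctuation-dissipation theorem for a single
# velocity-damped oscillator. sketch only: the sign/factor conventions
# (single-sided spectrum, angular frequency) are assumptions here.
import numpy as np

kB, T = 1.380649e-23, 300.0
m, w0, Q = 1.0e-3, 2 * np.pi * 100.0, 1.0e4      # 1 g oscillator, 100 hz, quality factor 1e4
gamma = m * w0 / Q                                # viscous damping coefficient

def H(w):
    """mechanical susceptibility x/F of the damped oscillator."""
    return 1.0 / (m * (w0**2 - w**2) + 1j * gamma * w)

def Gx(w):
    """single-sided displacement power spectrum from the fluctuation-dissipation theorem."""
    return -4.0 * kB * T / w * np.imag(H(w))

for f in np.logspace(0, 4, 5):                    # a few sample frequencies in hz
    print(f"f = {f:8.1f} hz   sqrt(Gx) = {np.sqrt(Gx(2 * np.pi * f)):.3e} m/sqrt(hz)")
```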
in the traditional modal expansion , in order to evaluate this transfer function , the equation of motion of the mechanical system without any loss is decomposed into those of the resonant modes .the details are as follows : example of the definition of the observed coordinate , , in eq .( [ observed coordinate ] ) .the mirror motion is observed using a michelson interferometer .the coordinate is the output of the interferometer .the vector represents the displacement of the mirror surface .the field is parallel to the beam axis .its norm is the beam - intensity profile .,width=325 ] the definition of the observed coordinate , , is described as where is the displacement of the system and is a weighting function that describes where the displacement is measured .for example , when mirror motion is observed using a michelson interferometer , as in fig .[ defx ] , and represent the interferometer output and the displacement of the mirror surface , respectively .the vector is parallel to the beam axis .its norm is the beam - intensity profile .the equation of motion of the mechanical system without dissipation is expressed as =f(t)\boldsymbol{p}(\boldsymbol{r } ) , \label{eq_mo_continuous}\ ] ] where is the density and is a linear operator .the first and second terms on the left - hand side of eq .( [ eq_mo_continuous ] ) represent the inertia and the restoring force of the small elements in the mechanical oscillator , respectively .the solution of eq .( [ eq_mo_continuous ] ) is the superposition of the basis functions , the functions , and , represent the displacement and time development of the -th resonant mode , respectively .the basis functions , , are solutions of the eigenvalue problem , written as = -\rho { \omega_n}^2 \boldsymbol{w}_n(\boldsymbol{r } ) , \label{eigenvalue problem}\ ] ] where is the angular resonant frequency of the -th mode .the displacement , , is the component of an orthogonal complete system , and is normalized to satisfy the condition the formula of the orthonormality is described as the parameter is called the effective mass of the mode .the tensor is the kronecker s -symbol .putting eq .( [ mode decomposition ] ) into eq .( [ observed coordinate ] ) , we obtain a relationship between and using eq .( [ normalized condition ] ) , in short , coordinate is a superposition of those of the modes , . in order to decompose the equation of motion , eq .( [ eq_mo_continuous ] ) , eq .( [ mode decomposition ] ) is substituted for in eq .( [ eq_mo_continuous ] ) .equation ( [ eq_mo_continuous ] ) is multiplied by and then integrated over all of the volume using eqs .( [ eigenvalue problem ] ) , ( [ normalized condition ] ) and ( [ effective mass ] ) .the result is that the equation of motion of the -th mode , , is the same as that of a harmonic oscillator on which force is applied .after modal decomposition , the dissipation term is added to the equation of each mode .the equation of the -th mode is written as \tilde{q}_n=\tilde{f } , \label{traditional1}\ ] ] in the frequency domain .the function is the loss angle , which represents the dissipation of the -th mode .the transfer function , , derived from eqs .( [ transfer function ] ) , ( [ observed coordinate decomposition ] ) and ( [ traditional1 ] ) is the summation of those of the modes , , }. 
\label{traditional3}\end{aligned}\ ] ] according to eqs .( [ fdt ] ) and ( [ traditional3 ] ) , the power spectrum density , , is the summation of the power spectrum , , of , in the traditional modal expansion , the dissipation term is introduced after decomposition of the equation of motion without any loss . on the contrary , in an advanced modal expansion , the equation with the loss is decomposed .if the loss is sufficiently small , the expansion process is similar to that in the perturbation theory of quantum mechanics .the equation of is expressed as \tilde{q}_n\nonumber\\ & + & \sum_{k \neq n } { \rm i } \alpha_{nk}(\omega ) \tilde{q}_k = \tilde{f},\label{advanced1}\\ \phi_n(\omega ) & = & \frac{\alpha_{nn}}{m_n { \omega_n}^2}\label{phi}. \label{phi_n}\end{aligned}\ ] ] the third term in eq .( [ advanced1 ] ) is the difference between the advanced , eq . ( [ advanced1 ] ) , and traditional , eq . ( [ traditional1 ] ) , modal expansions . since this term is a linear combination of the motions of the other modes , it represents the couplings between the modes .the magnitude of the coupling , , depends on the property and the distribution of the loss ( described below ) .let us consider the formulae of the couplings caused by the typical inhomogeneous losses , the origins of which exist outside and inside the material ( viscous damping and structure damping , respectively ) .regarding most of the external losses , for example , the eddy - current damping and residual gas damping are of the viscous type .the friction force of this damping is proportional to the velocity .inhomogeneous viscous damping introduces a friction force , , into the left - hand side of the equation of motion , eq .( [ eq_mo_continuous ] ) , in the frequency domain .the function represents the strength of the damping .the equation of motion with the dissipation term , , is decomposed .since the loss is small , the basis functions of the equation without loss are available .equation ( [ mode decomposition ] ) is put into the equation of motion along with the inhomogeneous viscous damping .this equation multiplied by is integrated .the coupling of this dissipation is written in the form in most cases , the internal loss in the material is expressed using the phase lag , , between the strain and the stress .the magnitude of the dissipation is proportional to this lag .the phase lag is almost constant against the frequency in many kinds of materials ( structure damping ) . in the frequency domain ,the relationship between the strain and the stress ( the generalized hooke s law ) in an isotropic elastic body is written as }{1+\sigma } \left(\tilde{u}_{ij } + \frac{\sigma}{1 - 2\sigma}\sum_{l}\tilde{u}_{ll } \delta_{ij}\right)\nonumber\\ & = & [ 1+{\rm i}\phi(\boldsymbol{r})]\tilde{\sigma}'_{ij } , \label{structure_stress}\\ u_{ij } & = & \frac1{2 } \left(\frac{\partial u_i}{\partial x_j } + \frac{\partial u_j}{\partial x_i}\right ) , \label{strain}\end{aligned}\ ] ] where is young s modulus and is the poisson ratio ; and are the stress and strain tensors , respectively .the tensor , , is the real part of the stress , .it represents the stress when the structure damping vanishes .the value , , is the -th component of .the equation of motion of an elastic body in the frequency domain is expressed as where is the -th component of . 
from eqs .( [ structure_stress ] ) and ( [ eq_mo_elastic_withloss ] ) , an inhomogeneous structure damping term is obtained , .the equation of motion with the inhomogeneous structure damping is decomposed in the same manner as that of the inhomogeneous viscous damping .the coupling is calculated using integration by parts and gauss theorem , dv\nonumber\\ & = & \int \frac{e_0 \phi(\boldsymbol{r})}{1+\sigma } \left(\sum_{i , j}w_{n , ij}w_{k , ij } + \frac{\sigma}{1 - 2\sigma}\sum_{l}w_{n , ll}\sum_{l}w_{k , ll } \right ) dv = \alpha_{kn } , \label{coupling_structure}\end{aligned}\ ] ] where and are the -th components of and the normal unit vector on the surface .the tensors , and , are the strain and stress tensors of the -th mode , respectively . in order to calculate these tensors , is substituted for in eqs .( [ structure_stress ] ) and ( [ strain ] ) with .equation ( [ coupling_structure ] ) is valid when the integral of the function , , on the surface of the elastic body vanishes . for example , the surface is fixed ( ) or free ( ) .the equation of motion in the advanced modal expansion coincides with that in the traditional modal expansion when all of the couplings vanish . a comparison between eqs .( [ effective mass ] ) and ( [ coupling_viscous ] ) shows that in viscous damping all are zero when the dissipation strength , , does not depend on the position , . in the case of structure damping , from eqs .( [ eq_mo_continuous ] ) , ( [ mode decomposition ] ) , ( [ eigenvalue problem ] ) and ( [ eq_mo_elastic_withloss ] ) , the stress , , without dissipation satisfies according to eq .( [ effective mass ] ) , eq .( [ stress decomposition ] ) is decomposed without any couplings . from eq .( [ stress decomposition ] ) and the structure damping term , , the conclusion is derived ; all of the couplings in the structure damping vanish when the loss amplitude , , is homogeneous . in summary ,the inhomogeneous viscous and structure dampings produce mode couplings and destroy the traditional modal expansion . the reason why the inhomogeneity of the loss causes the couplings is as follows .let us consider the decay motion after only one resonant mode is excited .if the loss is uniform , the shape of the displacement of the system does not change while the resonant motion decays . on the other hand ,if the dissipation is inhomogeneous , the motion near the concentrated loss decays more rapidly than the other parts .the shape of the displacement becomes different from that of the original resonant mode .this implies that the other modes are excited , i.e. the energy of the original mode is leaked to the other modes .this energy leakage represents the couplings in the equation of motion .it must be noticed that some kinds of `` homogeneous '' loss cause the couplings .for example , in thermoelastic damping , which is a kind of internal loss , the energy components of the shear strains , , are not dissipated .the couplings , , do not have any terms that consist of the shear strain tensors .the coupling formula of the homogeneous thermoelastic damping is different from eq .( [ coupling_structure ] ) with the constant .the couplings are not generally zero , even if the thermoelastic damping is uniform .the advanced , not traditional , modal expansion provides a correct evaluation of the `` homogeneous '' thermoelastic damping . in this paper , however , only coupling caused by inhomogeneous loss is discussed . 
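the statement that uniform viscous damping produces no couplings while localized damping does can be checked on a discrete stand - in for the elastic body . the sketch below ( an illustration , not the continuum calculation above ) takes a fixed - fixed chain of equal masses and springs , computes its normal modes , and evaluates alpha_nk = w_n^T c w_k , the discrete analogue of the coupling integral , for a damping matrix c that is either proportional to the mass matrix or concentrated on three neighbouring masses carrying the same total damping .

```python
# mode-coupling coefficients of a discrete fixed-fixed mass-spring chain with
# viscous damping; discrete stand-in (an assumption) for the continuous
# formulas above, with alpha_nk = w_n^T C w_k playing the role of the integral.
import numpy as np

N, mass, k_spring = 30, 1.0, 1.0
K = k_spring * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))   # fixed-ends stiffness matrix
M = mass * np.eye(N)

w2, W = np.linalg.eigh(K)              # eigenvectors are M-orthonormal here since M = identity

# homogeneous damping: C proportional to M  ->  couplings vanish by orthonormality
C_hom = 0.05 * M
# inhomogeneous damping: same total damping, concentrated on three neighbouring masses
C_inh = np.zeros((N, N))
C_inh[10:13, 10:13] = np.diag([0.05 * N / 3] * 3)

for label, C in [("homogeneous", C_hom), ("localized", C_inh)]:
    alpha = W.T @ C @ W                # alpha_nk, the analogue of the coupling integral
    off = alpha - np.diag(np.diag(alpha))
    print(f"{label:12s}  max |alpha_nk| (n != k) = {np.max(np.abs(off)):.3e}")
```

the off - diagonal couplings vanish ( to rounding error ) in the homogeneous case , by the same orthonormality argument as in the text , and are comparable to the diagonal ones when the damping is localized .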
in the advanced modal expansion ,the transfer function , , is derived from eqs .( [ transfer function ] ) , ( [ observed coordinate decomposition ] ) , and ( [ advanced1 ] ) ( since the dissipation is small , only the first - order of is considered ) , [ -m_k \omega^2 + m_k { \omega_k}^2 ( 1+{\rm i}\phi_k)]}. \label{advanced3}\ ] ] putting eq .( [ advanced3 ] ) into eq .( [ fdt ] ) , the formula for the thermal noise is obtained . in the off - resonance region ,where for all , this formula approximates the expression the first term is the same as the formula of the traditional modal expansion , eq .( [ traditional2 ] ) . the interpretation of eq .( [ advanced2 ] ) is as follows .the power spectrum density of the thermal fluctuation force of the -th mode , , and the cross - spectrum density between and , , are evaluated from eq .( [ advanced1 ] ) and the fluctuation - dissipation theorem , the power spectrum density , , is independent of . on the other hand , depends on .having the correlations between the fluctuation forces of the modes , correlations between the motion of the modes must also exist . the power spectrum density of the fluctuation of , , and the cross - spectrum density between the fluctuations of and , ,are described as under the same approximation of eq .( [ advanced2 ] ) .the first and second terms in eq .( [ advanced2 ] ) are summations of the fluctuation motion of each mode , eq . ( [ g_q_n ] ) , and the correlations , eq .( [ g_q_n_q_k ] ) , respectively . in conclusion ,inhomogeneous mechanical dissipation causes mode couplings and correlations of the thermal motion between the modes . in order to check wheatherthe formula of the thermal motion in the advanced modal expansion is consistent with the the equipartition principle , the mean square of the thermal fluctuation , , which is an integral of the power spectrum density over the whole frequency region , is evaluated .this mean square is derived from eq .( [ fdt ] ) using the kramers - kronig relation , = -\frac1{\pi } \int_{-\infty}^{\infty } \frac{{\rm im}[h_x(\xi)]}{\xi-\omega}d\xi .\label{kramers - kronig}\ ] ] the calculation used to evaluate the mean square is written as }{\omega } d\omega \nonumber\\ & = & k_{\rm b}t { \rm re}[h_{x}(0 ) ] .\label{x2}\end{aligned}\ ] ] since the transfer function , , is the ratio of the fourier components of the real functions , the value is a real number . the functions and , which cause the imaginary part of , must vanish when is zero .the correlations do not affect the mean square of the thermal fluctuation .equation ( [ x2 ] ) is rewritten using eq .( [ advanced3 ] ) as equation ( [ x2_2 ] ) is equivalent to the prediction of the equipartition principle . the calculation of the formula of the advanced modal expansion , eq .( [ advanced2 ] ) , is more troublesome than that of the other methods , which are completely different from the modal expansion , when many modes contribute to the thermal motion .however , the advanced modal expansion gives clear physical insight about the discrepancy between the thermal motion and the traditional modal expansion , as shown in sec .[ new insight ] .it is difficult to find this insight using other methods .in order to test the advanced modal expansion experimentally , our previous experimental results concerning oscillators with inhomogeneous losses are compared with an evaluation of the advanced modal expansion . 
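before turning to the measurements , the difference between the direct evaluation of the thermal noise and the traditional sum over modes can be reproduced on the same kind of discrete chain used in the previous sketch . again this is only an illustration under stated assumptions : the exact spectrum comes from inverting the full damped dynamical matrix , the 'traditional' estimate keeps only the diagonal losses alpha_nn of each mode , the readout point and the location of the loss are chosen arbitrarily , and the size of the discrepancy depends on both .

```python
# exact thermal-noise spectrum (direct inversion of the damped dynamics) versus
# the traditional modal-expansion estimate, for a damped fixed-fixed chain.
# sketch under the stated assumptions; G_x(w) = -(4 kB T / w) Im[H_xx(w)].
import numpy as np

kB, T = 1.380649e-23, 300.0
N = 30
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # unit masses and springs, fixed ends
M = np.eye(N)
w2, W = np.linalg.eigh(K)

p = 2                                                   # readout degree of freedom (near one end)
e = np.zeros(N); e[p] = 1.0

def spectra(C, w):
    """(exact, traditional) single-sided spectra at angular frequency w."""
    H_exact = e @ np.linalg.solve(K - w**2 * M + 1j * w * C, e)
    c_diag = np.einsum("in,ij,jn->n", W, C, W)          # alpha_nn = w_n^T C w_n
    H_trad = np.sum(W[p, :] ** 2 / (w2 - w**2 + 1j * w * c_diag))
    return (-4 * kB * T / w * H_exact.imag,
            -4 * kB * T / w * H_trad.imag)

C_hom = 0.05 * M
C_loc = np.zeros((N, N)); C_loc[0:3, 0:3] = np.diag([0.05 * N / 3] * 3)   # loss near the readout

w_lowest = np.sqrt(w2[0])
for w in [0.2 * w_lowest, 0.5 * w_lowest]:              # off resonance, below the first mode
    for label, C in [("homogeneous", C_hom), ("localized", C_loc)]:
        g_ex, g_tr = spectra(C, w)
        print(f"w/w_1 = {w / w_lowest:.1f}  {label:12s}  exact/traditional = {g_ex / g_tr:.3f}")
```

with damping proportional to the mass matrix the two spectra coincide exactly , as they must ; with the loss concentrated near the readout the off - resonance ratio departs from unity , which is the kind of discrepancy measured in the experiments described next .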
in an experiment involving a drum ( a hollow cylinder made from aluminum alloy as the prototype of the mirror in the interferometer ) with inhomogeneous eddy - current damping by magnets , the measured values agreed with the formula of the direct approach , eq .( 6 ) in ref .this expression is the same as that of the advanced modal expansion .figure [ experiment ] presents the measured spectra of an aluminum alloy leaf spring with inhomogeneous eddy - current damping .the position of the magnets for the eddy - current damping and the observation point are indicated above each graph . in the figures above each graph ,the left side of the leaf spring is fixed .the right side is free .the open circles in the graphs represent the power spectra of the thermal motion derived from the measured transfer functions using the fluctuation - dissipation theorem .these values coincide with the directly measured thermal - motion spectra .the solid lines are estimations using the advanced modal expansion ( the correlations derived from eqs .( [ coupling_viscous ] ) and ( [ g_q_n_q_k ] ) are almost perfect ) . as a reference , an evaluation of the traditional modal expansion is also given ( dashed lines ) .the results of a leaf - spring experiment are consistent with the advanced modal expansion .therefore , our two experiments support the advanced modal expansion .the advanced modal expansion provides physical insight about the disagreement between the real thermal motion and the traditional modal expansion . here, let us discuss the three factors that affect this discrepancy : the number of the modes , the absolute value and the sign of the correlation . since the difference between the advanced and traditional modal expansions is the correlations between the multiple modes , the number of the modes affects the magnitude of the discrepancy . if the thermal fluctuation is dominated by the contribution of only one mode , this difference is negligible , even when there are strong correlations . on the other hand ,if the thermal motion consists of many modes , the difference is larger when the correlations are stronger .examples of the one - mode oscillator are given in fig .[ experiment ] .the measured thermal motion spectra of the leaf spring with inhomogeneous losses below 100 hz were the same as the estimated values of the `` traditional '' modal expansion .this is because these fluctuations were dominated by only the first mode ( about 60 hz ) . as another example , let us consider a single - stage suspension for a mirror in an interferometric gravitational - wave detector .the sensitivity of the interferometer is limited by the thermal noise of the suspensions between 10 hz and 100 hz . 
since , in this frequency region ,this thermal noise is dominated by only the pendulum mode , the thermal noise generated by the inhomogeneous loss agrees with the traditional modal expansion .it must be noticed that the above discussion is valid only when the other suspension modes are negligible .for example , when the laser beam spot on the mirror surface is shifted , the two modes ( pendulum mode and mirror rotation mode ) must be taken into account .in such cases , the inhomogeneous loss causes a disagreement between the real thermal noise of the single - stage suspension and the traditional modal expansion .the discrepancy between the actual thermal motion and the traditional modal expansion in the elastic modes of the mirror is larger than that of the drum , the prototype of the real mirror in our previous experiment .one of the reasons is that the thermal motion of the mirror ( rigid cylinder ) consists of many modes .the drum ( hollow cylinder ) had only two modes .since the number of modes that contribute to the thermal noise of the mirror in the interferometer increases when the laser beam radius becomes smaller , the discrepancy is larger with a narrower beam .this consideration is consistent with our previous calculation .example for considering the absolute value of the coupling .there are the -th and -th modes , and , of a bar with both free ends .the vertical axis is the displacement .the dashed horizontal lines show the bar that does not vibrate . when only the grey part ( a ) , which is narrower than the wavelengths on the left - hand side , has viscous damping , the absolute value of the coupling , eq .( [ coupling_viscous ] ) , is large . because the signs of and do not change in this region .if viscous damping exits only in the hatching part ( b ) , which is wider than the wavelengths on the right - hand side , the coupling is about zero , because , in this wide region , the sign of the integrated function in eq . ( [ coupling_viscous ] ) , which is proportional to the product of and , changes.,width=325 ] in eq .( [ g_q_n_q_k ] ) , the absolute value of the cross - spectrum density , , is proportional to that of the coupling , . equations ( [ coupling_viscous ] ) and ( [ coupling_structure ] ) show that the coupling depends on the scale of the dissipation distribution .a simple example of viscous damping is shown in fig .[ abs ex ] .let us consider the absolute value of when the viscous damping is concentrated ( at around ) in a smaller volume ( ) than the wavelengths of the -th and -th modes .an example of this case is ( a ) in fig .[ abs ex ] .it is assumed that the vector is nearly parallel to .the absolute value of the coupling is derived from eqs .( [ phi_n ] ) and ( [ coupling_viscous ] ) as the absolute value of the cross - spectrum is derived from eqs .( [ g_q_n ] ) , ( [ g_q_n_q_k ] ) , and ( [ coupling_narrow ] ) as in short , the correlation is almost perfect . on the other hand ,if the loss is distributed more broadly than the wavelengths , the coupling , i.e. 
the correlation , is about zero , the dissipation in the case where the size is larger than the wavelengths is equivalent to the homogeneous loss .an example of this case is ( b ) in fig .[ abs ex ] .although the above discussion is for the case of viscous damping , the conclusion is also valid for other kinds of dissipation .when the loss is localized in a small region , the correlations among many modes are strong .the loss in a narrower volume causes a larger discrepancy between the actual thermal motion and the traditional modal expansion .this conclusion coincides with our previous calculation of a mirror with inhomogeneous loss .the sign of the correlation depends on the frequency , the loss distribution , and the position of the observation area .the position dependence provides a solution to the inverse problem : an evaluation of the distribution and frequency dependence of the loss from measurements of the thermal motion . according to eq .( [ g_q_n_q_k ] ) , the sign of the correlation reverses at the resonant frequencies .for example , in calculating the double pendulum , experiments involving the drum and a resonant gravitational wave detector with optomechanical readout , this change of the sign was found . in some cases , the thermal - fluctuation spectrum changes drastically around the resonant frequencies .a careful evaluation is necessary when the observation band includes the resonant frequencies .examples are when using wide - band resonant gravitational - wave detectors , and thermal - noise interferometers .the reason for the reverse at the resonance is that the sign of the transfer function of the mode with a small loss from the force ( ) to the motion ( ) , in eq .( [ traditional3 ] ) [ , below the resonance is opposite to that above it . since the sign of the correlation changes at the resonant frequencies , the cross - spectrum densities , the second term of eq .( [ advanced2 ] ) , make no contribution to the integral of the power spectrum density over the whole frequency region , i.e. the mean square of the thermal fluctuation , , as shown in sec .[ thermal noise of advanced ] .therefore , the consideration in sec .[ thermal noise of advanced ] indicates that a reverse of the sign of the correlation conserves the equipartition principle , a fundamental principle in statistical mechanics .example for considering the sign of the coupling .there are the lowest three modes , , of a bar with both free ends .the vertical axis is the displacement .the dashed horizontal lines show the bar that does not vibrate .the observation point is at the right - hand side end .the normalization condition is eq .( [ normalized condition ] ) .the sign and shape of the displacement of all the modes around the observation point are positive and similar , respectively . on the contrary , at the left - hand side end ,the sign and shape of the -th mode are different from each other in many cases . from eqs .( [ normalized condition ] ) , ( [ coupling_viscous ] ) , ( [ coupling_structure ] ) , when the loss is concentrated near to the observation area , most of the couplings are positive . on the other hand , when the loss is localized far from the observation area , the number of the negative couplings is about the same as the positive one . 
in such a case , most of the couplings between the -th and -th modes are negative.,width=325 ] according to eqs .( [ coupling_viscous ] ) and ( [ coupling_structure ] ) , and the normalization condition , eq .( [ normalized condition ] ) , the sign of the coupling , , depends on the loss distribution and the position of the observation area .a simple example is shown in fig .[ signex ] .owing to this normalization condition , near the observation area , the basis functions , , are similar in most cases . on the contrary , in a volume far from the observation area , is different from each other in many cases . from eqs .( [ normalized condition ] ) , ( [ coupling_viscous ] ) , ( [ coupling_structure ] ) and ( [ g_f_n_f_k ] ) , when the loss is concentrated near to the observation area , most of the couplings ( and the correlations between the fluctuation forces of the modes , ) are positive . on the other hand , when the loss is localized far from the observation area , the numbers of the negative couplings and are about the same as the positive ones . in such a case ,most of the couplings between the -th and -th modes ( and ) are negative .these are because the localized loss tends to apply to the fluctuation force on all of the modes to the same direction around itself .equation ( [ g_q_n_q_k ] ) indicates that the sign of the correlation , , is the same as that of the coupling , , below the first resonance . in this frequency band ,the thermal motion is larger and smaller than the evaluation of the traditional modal expansion if the dissipation is near and far from the observation area , respectively .this conclusion is consistent with the qualitative discussion of levin , our previous calculation of the mirror , and the drum experiment .the above consideration about the sign of the coupling gives a clue to solving the inverse problem : estimations of the distribution and frequency dependences of the loss from the measurement of the thermal motion .since the sign of the coupling depends on the position of the observation area and the loss distribution , a measurement of the thermal vibrations at multiple points provides information about the couplings , i.e. the loss distribution . moreover , multiple - point measurements reveal the loss frequency dependence .even if the loss is uniform , the difference between the actual thermal motion and the traditional modal expansion exits when the expected frequency dependence of the loss angles , , is not correct .the measurement at the multiple points shows whether the observed difference is due to an inhomogeneous loss or an invalid loss angle .this is because the sign of the difference is independent of the position of the observation area if the expected loss angles are not valid .as an example , our leaf - spring experiment is discussed .the two graphs on the right ( or left ) side of fig .[ experiment ] show thermal fluctuations at different positions in the same mechanical system . the spectrum is smaller than the traditional modal expansion .the other one is larger .thus , the disagreement in the leaf - spring experiment was due to inhomogeneous loss , not invalid loss angles .when the power spectrum had a dip between the first ( 60 hz ) and second ( 360 hz ) modes , the sign of the correlation , , was negative . 
according to eq .( [ g_q_n_q_k ] ) , the sign of the coupling , , was positive .the loss was concentrated near to the observation point when a spectrum dip was found .the above conclusion agrees with the actual loss shown in fig .[ experiment ] .the traditional modal expansion has frequently been used to evaluate the thermal noise of mechanical systems .however , recent experimental research has proved that this method is invalid when the mechanical dissipation is distributed inhomogeneously . in this paper, we introduced a modification of the modal expansion . according to this method ( the advanced modal expansion ) , inhomogeneous loss causes correlations between the thermal fluctuations of the modes .the fault of the traditional modal expansion is that these correlations are not taken into account .our previous experiments concerning the thermal noise of the inhomogeneous loss support the advanced modal expansion .the advanced modal expansion gives interesting physical insight about the difference between the actual thermal noise and the traditional modal expansion .when the thermal noise consists of the contributions of many modes , the loss is localized in a narrower area , which makes a larger difference . when the thermal noise is dominated by only one mode ,this difference is small , even if the loss is extremely inhomogeneous .the sign of this difference depends on the frequency , the distribution of the loss , and the position of the observation area .it is possible to derive the distribution and frequency dependence of the loss from measurements of the thermal vibrations at multiple points .there were many problems concerning the thermal noise caused by inhomogeneous loss .our previous work and this research solved almost all of these problems : a modification of the traditional estimation method ( in this paper ) , experimental checks of the new and traditional estimation methods and a confirmation of the new methods ( and this paper ) , an evaluation of the thermal noise of the gravitational wave detector using the new method , and a consideration of the physical properties of the discrepancy between the actual thermal noise and the traditional estimation method ( in this paper ) .this research was supported in part by research fellowships of the japan society for the promotion of science for young scientists , and by a grant - in - aid for creative basic research of the ministry of education . within the optics community ,another revision of the modal expansion , the quasinormal modal expansion , is being discussed : a.m. van den brink , k. young , and m.h .yung , j. phys .a : math . gen . * 39 * , 3725 ( 2006 ) and its references . in the off - resonance region , where for all , this approximation is always appropriate because the maximum of is . in the calculation to derive eq .( [ advanced3 ] ) , cramer s rule is useful .the sign on the right - hand side of the kramers - kronig relation in ref . is positive .the definition of the fourier transformation in ref . is conjugate to that of this paper , eq .( [ fourier transform ] ) .the results of the discussion presented in this section are valid under arbitrary normalization conditions . if a normalization condition other than eq .( [ normalized condition ] ) is adopted , the signs of some couplings change .the signs of the right - hand sides of eqs .( [ observed coordinate decomposition ] ) and ( [ advanced1 ] ) also change. 
Equations ([observed coordinate]), ([eq_mo_continuous]), and ([mode decomposition]) show that the right-hand sides of eqs. ([observed coordinate decomposition]) and ([advanced1]) in the general form are and , respectively. These changes in sign cancel each other in the calculation of the transfer function, , and of the power spectrum of the thermal motion, .
We modified the modal expansion, the traditional method used to calculate thermal noise. This advanced modal expansion provides physical insight into the discrepancy between the actual thermal noise caused by inhomogeneously distributed loss and the traditional modal expansion: the discrepancy comes from correlations between the thermal fluctuations of the resonant modes. The thermal noise spectra estimated by the advanced modal expansion are consistent with measurements of thermal fluctuations caused by inhomogeneous losses.
the last few years have witnessed a tremendous activity devoted to the understanding of complex networks .a particular class of networks is the _ bipartite networks _ ,whose nodes are divided into two sets , and , and only the connection between two nodes in different sets is allowed ( as illustrated in fig .many systems are naturally modeled as bipartite networks : human sexual network is consisted of men and women , metabolic network is consisted of chemical substances and chemical reactions , etc .two kinds of bipartite networks should be paid more attention for their particular significance in social , economic and information systems .one is the so - called _ collaboration network _ , which is generally defined as a networks of actors connected by a common collaboration act .examples are numerous , including scientists connected by coauthoring a scientific paper , movie actors connected by costarring the same movie , and so on . moreover , the concept of collaboration network is not necessarily restricted within social systems ( see , for example , recent reports on technological collaboration of software and urban traffic systems ) .although the collaboration network is usually displayed by the one - mode projection on actors ( see later the definition ) , its fully representation is a bipartite network .the other one is named _ opinion network _ , where each node in the _ user - set _ is connected with its collected objects in the _ object - set_. for example , listeners are connected with the music groups they collected from music - sharing library ( e.g. _ audioscrobbler.com_ ) , web - users are connected with the webs they collected in a bookmark site ( e.g. _ delicious _ ) , customers are connected with the books they bought ( e.g. _ amazon.com_ ) .recently , a large amount of attention is addressed to analyzing and modeling bipartite network .however , for the convenience of directly showing the relations among a particular set of nodes , the bipartite network is usually compressed by one - mode projecting .the one - mode projection onto ( -projection for short ) means a network containing only -nodes , where two -nodes are connected when they have at least one common neighboring -node .1b and fig .1c show the resulting networks of -projection and -projection , respectively .the simplest way is to project the bipartite network onto an unweighted network , without taking into account of the frequency that a collaboration has been repeated .although some topological properties can be qualitatively obtained from this unweighted version , the loss of information is obvious .for example , if two listeners has collected more than 100 music groups each ( it is a typical number of collections , like in _ audioscrobbler.com_ , the average number of collected music groups per listener is 140 ) , and only one music group is selected by both listeners , one may conclude that those two listeners probably have different music taste . 
on the contrary , if nearly 100 music groups belong to the overlap , those two listeners are likely to have very similar habits .however , in the unweighted listener - projection , this two cases have exactly the same graph representation .since the one - mode projection is always less informative than the original bipartite network , to better reflect structure of the network , one has to use the bipartite graph to quantify the weights in the projection graph .a straightforward way is to weight an edge directly by the number of times the corresponding partnership repeated .this simple rule is used to obtain the weights in fig .1b and fig .1c for -projection and -projection , respectively .this weighted network is much more informative than the unweighted one , and can be analyzed by standard techniques for unweighted graphs since its weights are all integers .however , this method is also quantitatively biased . empirically studied the scientific collaboration networks , and pointed out that the impact of one additional collaboration paper should depend on the original weight between the two scientists .for example , one more co - authorized paper for the two authors having only co - authorized one paper before should have higher impact than for the two authors having already co - authorized 100 papers .this saturation effect can be taken into account by introducing a hyperbolic tangent function onto the simple count of collaborated times . as stated by newman that two scientists whose names appear on a paper together with many other coauthors know one another less well on average than two who were the sole authors of a paper , to consider this effect, he introduced the factor to weaken the contribution of collaborations involving many participants , where is the number of participants ( e.g. the number of authors of a paper ) .[ 0.8]-projection ( b ) and -projection ( c ) . the edge - weight in ( b ) and ( c ) is set as the number of common neighbors in and , respectively.,title="fig : " ] how to weight the edges is the key question of the one - mode projections and their use .however , we lack a systematic exploration of this problem , and no solid base of any weighting methods have been reported thus far .for example , one may ask the physical reason why using the hyperbolic tangent function to address the saturation effect rather than other infinite possible candidates .in addition , for simplicity , the weighted adjacent matrix is always set to be symmetrical , that is , .however , as in scientific collaboration networks , different authors may assign different weights to the same co - authorized paper , and it is probably the case that the author having less publications may give a higher weight , vice versa .therefore , a more natural weighting method may be not symmetrical .another blemish in the prior methods is that the information contained by the edge whose adjacent -node ( -node ) is of degree one will be lost in -projection ( -projection ). this information loss may be serious in some real opinion networks .for example , in the user - web network of_ delicious _( http://del.icio.us ) , a remarkable fraction of webs have been collected only once , as well as a remarkable fraction of users have collected only one web .therefore , both the user - projection and web - projection will squander a lot of information .since more than half publications in _ mathematical reviews _ have only one author , the situation is even worse in mathematical collaboration network . 
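To make the simple count weighting concrete, here is a minimal numpy sketch; the small biadjacency matrix is hypothetical and only illustrates that, in this scheme, the weight of an edge in the X-projection is the number of Y-neighbors the two X-nodes share (Newman's saturation and 1/(n-1) corrections would then rescale these counts edge by edge).

```python
import numpy as np

# Hypothetical biadjacency matrix B: rows are X-nodes (e.g. listeners),
# columns are Y-nodes (e.g. music groups); B[i, l] = 1 if i collected l.
B = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 1],
              [1, 0, 0, 1, 0]])

# Simple count weighting of the X-projection: the weight of the edge (i, j)
# is the number of Y-nodes shared by nodes i and j.
W = B @ B.T
np.fill_diagonal(W, 0)   # the plain count projection keeps no self-loops
print(W)
```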
in this article, we propose a weighting method , with asymmetrical weights ( i.e. , ) and allowed self - connection ( i.e. , ) .this method can be directly applied as a personal recommendation algorithm , which performs remarkably better than the widely used _ global ranking method _ ( grm ) and _ collaborative filtering _ ( cf ) .without loss of generality , we discuss how to determine the edge - weight in -projection , where the weight can be considered as the importance of node in s sense , and it is generally not equal to .for example , in the book - projection of a customer - book opinion network , the weight between two books and contributes to the strength of book recommendation to a customer provided he has brought book . in the scientific collaboration network , reflects how likely is to choose as a contributor for a new research project .more generally , we assume a certain amount of a resource ( e.g. recommendation power , research fund , etc . )is associated with each -node , and the weight represents the proportion of the resource would like to distribute to .[ 1.2]-nodes , and the lower four are -nodes .the whole process consists of two steps : first , the resource flows from to ( a ) , and then returns to ( b ) .different from the prior network - based resource - allocation dynamics , the resource here can only flow from one node - set to another node - set , without consideration of asymptotical stable flow among one node - set.,title="fig : " ] to derive the analytical expression of , we go back to the bipartite representation .since the bipartite network itself is unweighted , the resource in an arbitrary -node should be equally distributed to its neighbors in .analogously , the resource in any -node should be equally distributed to its -neighbors . as shown in fig .2a , the three -nodes are initially assigned weights , and .the resource - allocation process consists of two steps ; first from to , then back to .the amount of resource after each step is marked in fig .2b and fig .2c , respectively . merging these two steps into one ,the final resource located in those three -nodes , denoted by , and , can be obtained as : note that , this matrix are column normalized , and the element in the row and column represents the fraction of resource the -node transferred to the -node . according to the above description , this matrix is the very weighted adjacent matrix we want .now , consider a general bipartite network , where is the set of edges .the nodes in and are denoted by and , respectively .the initial resource located on the -node is .after the first step , all the resource in flows to , and the resource located on the -node reads , where is the degree of , and is an adjacent matrix as in the next step , all the resource flows back to , and the final resource located on reads , this can be rewritten as where which sums the contribution from all 2-step paths between and .the matrix represents the weighted -projection we were looking for .the resource - allocation process can be written in the matrix form as .it is worthwhile to emphasize the particular characters of this weighting method . for convenience ,we take the scientific collaboration network as an example , but our statements are not restricted to the collaboration networks .firstly , the weighted matrix is not symmetrical as this is in accordance with our daily experience - the weight of a single collaboration paper is relatively small if the scientist has already published many papers ( i.e. 
, he has large degree ) , vice versa .secondly , the diagonal elements in are nonzero , thus the information contained by the connections incident to one - degree -node will not be lost .actually , the diagonal element is the maximal element in each column . only if all s -neighbors belongs to s neighbors set , .it is usually found in scientific collaboration networks , since some students coauthorize every paper with their supervisors .therefore , the ratio can be considered as s researching independence to , the smaller the ratio , the more independent the researcher is , vice versa . the independence of can be approximately measured as generally , the author who often publishes papers solely , or often publishes many papers with different coauthors is more independent .note that , introducing the measure here is just to show an example how to use the information contained by self - weight , without any comments whether to be more independent is better , or contrary .the exponential growth of the internet and world - wide - web confronts people with an information overload : they are facing too many data and sources to be able to find out those most relevant for him .one landmark for information filtering is the use of search engines , however , it can not solve this _ overload problem _ since it does not take into account of personalization thus returns the same results for people with far different habits .so , if user s habits are different from the mainstream , it is hard for him to find out what he likes in the countless searching results . thus far ,the most potential way to efficiently filter out the information overload is to recommend personally .that is to say , using the personal information of a user ( i.e. , the historical track of this user s activities ) to uncover his habits and to consider them in the recommendation . for instances ,amazon.com uses one s purchase history to provide individual suggestions .if you have bought a textbook on statistical physics , amazon may recommend you some other statistical physics books .based on the well - developed _ web 2.0 _ technology , the recommendation systems are frequently used in web - based movie - sharing ( music - sharing , book - sharing , etc . ) systems , web - based selling systems , bookmark web - sites , and so on .motivated by the significance in economy and society , recently , the design of an efficient recommendation algorithm becomes a joint focus from marketing practice to mathematical analysis , from engineering science to physics community .[ 0.8 ] basically , a recommendation system consists of users and objects , and each user has collected some objects .denote the object - set as and user - set as .if users are only allowed to collect objects ( they do not rate them ) , the recommendation system can be fully described by an adjacent matrix , where if has already collected , and otherwise .a reasonable assumption is that the objects you have collected are what you like , and a recommendation algorithm aims at predicting your personal opinions ( to what extent you like or hate them ) on those objects you have not yet collected .a more complicated case is the voting system , where each user can give ratings to objects ( e.g. 
, in the _ yahoo music _ , the users can vote each song with 5 discrete ratings representing _ never play again _ , _ it is ok _ , _ like it _ , _ love it _ , and _ ca nt get enough _ ) , and the recommendation algorithm concentrates on estimating unknown ratings for objects .these two problems are closely related , however , in this article , we focus on the former case . denote the degree of object . the _ global ranking method _ ( grm ) sorts all the objects in the descending order of degree and recommends those with highest degrees .although the lack of personalization leads to an unsatisfying performance of grm ( see numerical comparison in the next section ) , it is widely used since it is simple and spares computational resources . for example , the well - known _ yahoo top 100 mtvs _ ,_ amazon list of top sellers _ , as well as the board of most downloaded articles in many scientific journals , can be all considered as results of grm .thus far , the widest applied personal recommendation algorithm is _ collaborative filtering _ ( cf ) , based on a similarity measure between users .consequently , the prediction for a particular user is made mainly using the similar users . the similarity between users and be measured in the pearson - like form where is the degree of user . for any user - object pair ,if has not yet collected ( i.e. , ) , by cf , the predicted score , ( to what extent likes ) , is given as two factors give rise to a high value of .firstly , if the degree of is larger , it will , generally , have more nonzero items in the numerator of eq . ( 10 ) . secondly, if is frequently collected by users very similar to , the corresponding items will be significant .the former pays respect to the global information , and the latter reflects the personalization . for any user , all the nonzero with are sorted in descending order , and those objects in the top are recommended .we propose a recommendation algorithm , which is a direct application of the weighting method for bipartite networks presented above .the layout is simple : first compress the bipartite user - object network by object - projection , the resulting weighted network we label .then , for a given user , put some resource on those objects already been collected by . for simplicity ,we set the initial resource located on each node of as that is to say , if the object has been collected by , then its initial resource is unit , otherwise it is zero .note that , the initial configuration , which captures personal preferences , is different for different users .the initial resource can be understood as giving a unit recommending capacity to each collected object . 
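The final resource vector obtained in the next paragraph is just the weighted matrix applied to this initial configuration. A minimal sketch of the whole procedure, under the two-step resource-allocation rule described above (each node splits its resource equally among its neighbors at every step), is given below; the small user-object matrix and the choice of target user are hypothetical.

```python
import numpy as np

# Hypothetical adjacency: A[i, l] = 1 if user l has collected object i.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)   # 5 objects x 4 users

k_obj  = A.sum(axis=1)    # object degrees k(o_j)
k_user = A.sum(axis=0)    # user degrees   k(u_l)

# Two-step resource allocation (objects -> users -> objects):
# w[i, j] = (1 / k(o_j)) * sum_l A[i, l] * A[j, l] / k(u_l).
# The columns of W sum to one, as stated in the text.
W = (A / k_user) @ (A / k_obj[:, None]).T

# Network-based inference for one target user: unit resource on each
# collected object, one application of W, then rank the uncollected objects.
user   = 0
f0     = A[:, user]
f      = W @ f0
scores = np.where(f0 == 0, f, -np.inf)
print(np.argsort(scores)[::-1])   # recommendation list, best first
```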
according to the weighted resource - allocation process discussed in the prior section ,the final resource , denoted by the vector , is .thus components of are for any user , all his uncollected objects ( , ) are sorted in the descending order of , and those objects with highest value of final resource are recommended .we call this method _ network - based inference _( nbi ) , since it is based on the weighted network .note that , the calculation of eq .( 12 ) should be repeated times , since the initial configurations are different for different users .we use a benchmark data - set , namely _ movielens _ , to judge the performance of described algorithms .the movielens data is downloaded from the web - site of _ grouplens research _( http://www.grouplens.org ) .the data consists 1682 movies ( objects ) and 943 users .actually , movielens is a rating system , where each user votes movies in five discrete ratings 1 - 5 .hence we applied the coarse - graining method similar to what is used in ref . : a movie has been collected by a user iff the giving rating is at least 3 .the original data contains ratings , 85.25% of which are , thus the user - movie bipartite network after the coarse gaining contains 85250 edges . to test the recommendation algorithms ,the data set ( i.e. , 85250 edges ) is randomly divided into two parts : the training set contains 90% of the data , and the remaining 10% of data constitutes the probe . the training set is treated as known information , while no information in probe set is allowed to be used for prediction .[ 0.8 ] all three algorithms , grm , cf and nbi , can provide each user an ordered queue of all its uncollected movies .for an arbitrary user , if the edge is in the probe set ( according to the training set , is an uncollected movie for ) , we measure the position of in the ordered queue .for example , if there are 1500 uncollected movies for , and is the 30th from the top , we say the position of is the top 30/1500 , denoted by . since the probe entries are actually collected by users , a good algorithm is expected to give high recommendations to them , thus leading to small .the mean value of the position value , averaged over entries in the probe , are 0.139 , 0.120 and 0.106 by grm , cf and nbi , respectively .3 reports the distribution of all the position values , which are ranked from the top position ( ) to the bottom position ( ) . clearly , nbi is the best method and grm performs worst ..the hitting rates for some typical lengths of recommendation list . [ cols="^,^,^,^",options="header " , ] to make this work more relevant to the real - life recommendation systems , we introduce a measure of algorithmic accuracy that depends on the length of recommendation list . the recommendation list for a user , if of length , contains recommended movies resulting from the algorithm . for each incident entry in the probe ,if is in s recommendation list , we say the entry is _ hit _ by the algorithm . the ratio of hit entries to the population is named _hitting rate_. for a given , the algorithm with a higher hitting rate is better , and vice versa . if is larger than the total number of uncollected movies for a user , the recommendation list is defined as the set of all his uncollected movies .clearly , the hitting rate is monotonously increasing with , with the upper bound 1 for sufficiently large . in fig .4 , we report the hitting rate as a function of for different algorithms . 
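The two accuracy measures used in this test, the relative ranking position of a probe entry and the top-L hitting rate, are straightforward to compute per user once a score vector is available; the following sketch is a generic illustration (the variable names are ours, not taken from the MovieLens scripts), and averaging the returned positions and hit flags over all probe entries reproduces the quantities reported in the text.

```python
import numpy as np

def evaluate_user(scores, collected, probe_items, L=50):
    """Relative ranking position and top-L hits for one user's probe entries.

    scores      : predicted score for every object (higher = better)
    collected   : boolean mask of objects already collected in the training set
    probe_items : indices of this user's held-out (probe) objects
    """
    candidates = np.where(~collected)[0]                  # uncollected objects
    order = candidates[np.argsort(-scores[candidates])]   # descending queue
    rank = {obj: pos + 1 for pos, obj in enumerate(order)}
    positions = [rank[o] / len(candidates) for o in probe_items]
    hits = [rank[o] <= L for o in probe_items]
    return positions, hits
```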
in accordance with fig .3 , the accuracy of the algorithms is nbi cf grm .the hitting rates for some typical lengths of recommendation list are shown in table i. in a word , via the numerical calculation on a benchmark data set , we have demonstrated that the nbi has remarkably better performance than grm and cf , which strongly guarantee the validity of the present weighting method .weighting of edges is the key problem in the construction of a bipartite network projection . in this article we proposed a weighting method based on a resource - allocation process .the present method has two prominent features .first , the weighted matrix is not symmetrical , and the node having larger degree in the bipartite network generally assigns smaller weights to its incident edges .second , the diagonal element in the weighted matrix is positive , which makes the weighted one - mode projection more informative .furthermore , we proposed a personal recommendation algorithm based on this weighting method , which performs much better than the widest used global ranking method as well as the collaborative filtering .especially , this algorithm is tune - free ( i.e. , does not depend on any control parameters ) , which is a big advantage for potential users .the main goal of this article is to raise a new weighting method , as well as provide a bridge from this method to the recommendation systems .the presented recommendation algorithm is just a rough framework , whose details have not been exhaustively explored yet .for example , the setting of the initial configuration may be oversimplified , a more complicated form , like , may lead to a better performance than the presented one with .one is also encouraged to consider the asymptotical dynamics of the resource - allocation process , which can eventually lead to some certain iterative recommendation algorithms .although such an algorithm require much longer cpu time , it may give more accurate prediction than the present algorithm .if we denote and the average degree of users and objects in the bipartite network , the computational complexity of cf is , where the first term accounts for the calculation of similarity between users ( see eq .( 9 ) ) , and the second term accounts for the calculation of the predicted score ( see eq . ( 10 ) ) . substituting the equation ,we are left with .the computational complexity for nbi is with two terms accounting for the calculation of the weighted matrix and the final resource distribution , respectively . here is the second moment of the users degree distribution in the bipartite network .clearly , , thus the resulting form is .note that the number of users is usually much larger than the number of objects in many recommendation systems .for instance , the _ eachmovie _ dataset provided by the _company contains users and movies , and the _ netflix _ company provides nearly 20 thousands online movies for million users .it is also the case of music - sharing systems and online bookstores , the number of registered users is more than one magnitude larger than that of the available objects ( e.g. , music groups , books , etc . ) .therefore , nbi runs much fast than cf .in addition , nbi requires memory to store the weighted matrix , while cf requires memory to store the similarity matrix .hence , nbi is able to beat cf in all the three criterions of recommendation algorithm : _ accuracy _ , _ time _ and _ space_. however , in some recommendation systems , as in bookmark sharing websites , the number of objects ( e.g. 
webpages ) is much larger than the number of users , thus cf may be more practicable .the authors thank to sang hoon lee for his comments and suggestions .this work is partially supported by swiss national science foundation ( project 205120 - 113842 ) .we acknowledge sbf ( switzerland ) for financial support through project c05.0148 ( physics of risk ) , tzhou acknowledges the nnsfc under grant no .. 1 l. a. n. amaral , a. scala , m. barthlmy , and h. e. stanley , proc .* 97 * , 11149 ( 2000 ) .s. h. strogatz , nature * 410 * , 268 ( 2001 ) .r. albert and a. -l .barabsi , rev .* 74 * , 47 ( 2002 ) .s. n. dorogovtsev and j. f. f. mendes , adv . phys . * 51 * , 1079 ( 2002 ) . m. e. j. newman , siam review * 45 * , 167 ( 2003 ) .s. boccaletti , _ et al ._ , phys . rep . * 424 * , 175 ( 2006 ) . l. da f. costa , _ et al .* 56 * , 167 ( 2007 ) .p. holme , f. liljeros , c. r. edling , and b. j. kim , phys .e * 68 * , 056107 .f. liljeros , _ et al ._ , nature * 411 * , 907 ( 2001 ) . h. jeong , _ et al ._ , nature 407 , 651 ( 2000 ). s. wasserman , and k. faust , _ social network analysis _( cambridge univ . press ,cambridge , 1994 ) .j. scott , _ social network analysis _( sage publication , london , 2000 ) .m. e. j. newman , proc .98 * , 404 ( 2001 ) .m. e. j. newman , phys . rev .e * 64 * , 016131 ( 2001 ) .d. j. watts and s. h. strogatz , nature * 393 * , 440 ( 1998 ) . c. r. myers , phys . rev .e * 68 * , 046116 ( 2003 ) .zhang , _ et al ._ , physica a * 360 * , 599 ( 2006 ) . s. maslov , and y. -c .zhang , phys .lett . * 87 * , 248701 ( 2001 ) .m. blattner , y. -c .zhang , and s. maslov , physica a * 373 * , 753 ( 2007 ) .r. lambiotte , and m. ausloos , phys .e * 72 * , 066107 ( 2005 ) .p. cano , o. celma , m. koppenberger , and j. m. buldu , chaos * 16 * , 013107 ( 2006 ) .c. cattuto , v. loreto , and l. pietronero , proc .* 104 * , 1461 ( 2007 ) .g. linden , b. smith , and j. york , ieee internet computing * 7 * , 76 ( 2003 ) .k. yammine , _ et al ._ , lect . notes . comput .* 3220 * , 720 ( 2004 ) .r. lambiotte , and m. ausloos , phys .e * 72 * , 066117 ( 2005 ) .p. g. lind , m. c. gonzlez , and h. j. herrmann , phys .e * 72 * , 056127 ( 2005 ) .e. estrada , and j. a. rodrguez - velzquez , phys .e * 72 * , 046105 ( 2005 ) .j. j. ramasco , s. n. dorogovtsev , and r. pastor - satorras , phys .e * 70 * , 036106 ( 2004 ) .j. ohkubo , k. tanaka , and t. horiguchi , phys .e * 72 * , 036120 ( 2005 ) .m. peltomki , and m. alava , j. stat .p01010 ( 2006 ) . j. w. grossman , and p. d. f. ion , congressus numerantium * 108 * , 129 ( 1995 ) .barabsi , _ et al ._ , physica a * 311 * , 590 ( 2002 ) . t. zhou , _et al . _ ,c * 18 * , 297 ( 2007 ) .j. j. ramasco , and s. a. morris , phys .e * 73 * , 016122 ( 2006 ) .m. li , _ et al ._ , physica a * 375 * , 355 ( 2007 ). m. e. j. newman , phys .e * 70 * , 056131 ( 2004 ) .m. li , _ et al ._ , physica a * 350 * , 643 ( 2005 ) . m. e. j. newman , phys . rev .e * 64 * , 016132 ( 2001 ) .m. e. j. newman , proc .sci . u.s.a . * 101 * , 5200 ( 2004 ) .e * 75 * , 021102 ( 2007 ) .m. faloutsos , p. faloutsos , and c. faloutsos , comput .comm . rev .* 29 * , 251 ( 1999 ) .a. broder , _ et al ._ , comput . netw . * 33 * , 309 ( 2000 ) . j. m. kleinberg , j. acm * 46 * , 604 ( 1999 ) .b. alexander , educause rev . * 41 * , 33 ( 2006 ) .a. ansari , s. essegaier , and r. kohli , j. marketing research * 37 * , 363 ( 2000 ) .y. p. ying , f. feinberg , and m. wedel , j. marketing research * 43 * , 355 ( 2006 ) .r. kumar , p. raghavan , s. rajagopalan , and a. 
tomkins , j. comput .* 63 * , 42 ( 2001 ) .n. j. belkin , comm .acm * 43 * , 58 ( 2000 ) .m. montaner , b. lpez , and j. l. de la rosa , artificial intelligence review * 19 * , 285 ( 2003 ) .j. l. herlocker , j. a. konstan , k. terveen , and j. t. riedl , acm trans .* 22 * , 5 ( 2004 ) .p. laureti , l. moret , y. -c .zhang , and y. -k .yu , europhys . lett . * 75 * , 1006 ( 2006 ) .yu , y. -c .zhang , p. laureti , and l.moret , arxiv : cond - mat/0603620 .f. e. walter , s. battiston , and f. schweitzer , arxiv : nlin/0611054 .j. a. konstan , _ et al ._ , commun .acm * 40 * , 77 ( 1997 ) .k. glodberg , t. roeder , d. gupta , and c. perkins , information retrieval * 4 * , 133 ( 2001 ) .
One-mode projection is extensively used to compress bipartite networks. Since the projection is always less informative than the bipartite representation, a proper weighting method is required to better retain the original information. In this article, inspired by network-based resource-allocation dynamics, we propose a weighting method that can be applied directly to extract the hidden information of networks, with remarkably better performance than the widely used global ranking method and collaborative filtering. This work not only provides a credible method for compressing bipartite networks, but also suggests a possible route to a long-standing challenge in modern information science: personal recommendation.
we consider the advection - diffusion equation with , , and . for simplicitywe assume the region is polygonal .we also assume and then we have a weak solution .it is well known that this problem can exhibit boundary or internal layers in the convection dominated regime and that for the standard continuous galerkin ( cg ) formulation these layers cause non - physical oscillations in the numerical solution .several adaptations to the cg method are effective but space does not allow their discussion here . we refer readers to for a full description of these approaches .discontinuous galerkin ( dg ) methods also offer a stable approach for approximating this problem .however the number of degrees of freedom required for dg methods is in general considerably larger than for cg methods .we describe an alternative approach also studied in .a dg method is applied on the layers and a cg method away from the layers .we call this approach the continuous - discontinuous galerkin ( cdg ) method .the hypothesis is that provided the layers are entirely contained in the dg region the instability they cause will not propagate to the cg region .note that in our formulation there are no transmission conditions at the join of the two regions .here we present the cdg method and discuss its implementation using the ` deal.ii ` finite element library .we additionally provide some numerical experiments to highlight the performance of the method .assume that we can identify a decomposition of where it is appropriate to apply the cg and dg methods respectively .we do not consider specific procedures to achieve this here , but generally it will be that we wish all boundary and internal layers to be within . identifying these regions can be done a priori in some cases or a posteriori based on the solution of a dg finite element method . consider a triangulation of which is split into two regions and where we will apply the cg and dg methods respectively . for simplicitywe assume that the regions and are aligned with the regions and and the set contains edges which lie in the intersection of the two regions .call the mesh skeleton and the internal skeleton .define as the union of boundary edges and the inflow and outflow boundaries by where is the outward pointing normal .define ( resp . ) to be the intersection of with ( resp . ) . by conventionwe say that the edges of are part of the discontinuous skeleton and . with this conventionthere is potentially a discontinuity of the numerical solution at .elements of the mesh are denoted , edges ( resp .faces in 3d ) by and denote by and the diameter of an element and an edge , defined in the usual way . the jump and average of a scalar or vector function on the edges in are defined as in , e.g. , .define the cdg space to be where is the space of polynomials of degree at most supported on .this is equivalent to applying the usual cg space on and a dg space on .we may now define the interior penalty cdg method : find such that for all where \nonumber \\ & \qquad + \sum_{{\ensuremath{e \in { \ensuremath{{\mathcal{e}_h}}}}}}\left [ \int_e \sigma \frac{{\varepsilon}}{h_e } { \ensuremath{{\ensuremath{\llbracket u_h \rrbracket } } } } { \ensuremath{\cdotp}}{\ensuremath{{\ensuremath{\llbracket v_h \rrbracket } } } } - \int_e \left ( { \ensuremath{{\ensuremath{\ { \ ! \ ! \ { { \varepsilon}\nabla u_h \ } \ ! \ ! \ } } } } } { \ensuremath{\cdotp}}{\ensuremath{{\ensuremath{\llbracket v_h \rrbracket } } } } + { \vartheta}{\ensuremath{{\ensuremath{\ { \ ! \ ! 
\ { { \varepsilon}\nabla v_h \ } \ ! \ ! \ } } } } } { \ensuremath{\cdotp}}{\ensuremath{{\ensuremath{\llbracket u_h \rrbracket}}}}\right ) \right ] \nonumber \\ b_a(u_h , v_h ) & = \sum_{{\ensuremath{e \in { \ensuremath{{\mathcal{e}_h^o}}}}}}\int_e b { \ensuremath{\cdotp}}{\ensuremath{{\ensuremath{\llbracket v_h \rrbracket } } } } u_h^- + \sum_{e \in { \ensuremath{\gamma^{\text{out } } } } } \int_e ( b { \ensuremath{\cdotp}}n ) u_h v_h \nonumber \end{split}\ ] ] and - \sum_{e \in { \ensuremath{\gamma^{\text{in } } } } } \int_e ( b { \ensuremath{\cdotp}}n)v_hg . \nonumber\ ] ] here is the penalization parameter and .note that through the definition of the edge terms are zero on and the method reduces to the standard cg fem . if we take , i.e. , the entire triangulation as discontinuous , we get the interior penalty ( ip ) family of dg fems ( see , e.g. , ) .the work of shows that the cg method is the limit of the dg method as .a reasonable hypothesis is that the solution to the cdg method is the limit of the solutions to the dg method as the penalty parameter on , i.e. , super penalising the edges in .call and the penalty parameters for edges in and respectively .call the numerical solution for the cdg problem .the solution to the pure dg problem on the same mesh is denoted where is the usual piecewise discontinuous polynomial space on . then we have [ thm : convergence ] the dg solution converges to the cdg solution of - as , i.e. , we do not prove this result here but direct readers to for a full discussion .although this result does not imply stability of the cdg method ( indeed , for the case where the region is taken to be the whole of it shows that the cdg method has the same problems as the cg method ) , but does indicate that investigation of the cdg method as an intermediate stage between cg and dg is justified .hence , it aids into building an understanding of the convergence and stability properties of the cdg method , based on what is known for cg and dg .this , in turn , is of interest , as the cdg method offers substantial reduction in the degrees of freedom of the method compared to dg .the cdg method poses several difficulties in implementation .one approach is to use the super penalty result of theorem [ thm : convergence ] to get a good approximation to the cdg solution .however this will give a method with the same number of degrees of freedom as dg .we therefore present an approach to implement the cdg method with the appropriate finite element structure .we discuss this approach with particular reference to the ` deal.ii ` finite element library .this is an open source library designed to streamline the creation of finite element codes and give straightforward access to algorithms and data structures .we also present some numerical experiments .the main difficulty in implementing a cdg method in ` deal.ii ` is the understandable lack of a native cdg element type . in order to assign degrees of freedom to a mesh in ` deal.ii ` the code must be initialised with a ` triangulation ` and then instructed to use a particular finite element basis to place the degrees of freedom .although it is possible to initialise a ` triangulation ` with the dg and cg regions set via the ` material_id ` flag , no appropriate element exists . in the existing ` deal.ii ` framework it would be difficult to code an element with the appropriate properties . 
a far more robust approach is to use the existing capabilities of the library and therefore allow access to other features of ` deal.ii ` .for instance without the correct distribution of degrees of freedom the resulting sparsity pattern of the finite element matrix would be suboptimal , i.e. , containing more entries than required by the theory and therefore reducing the benefit of shrinking the number of degrees of freedom relative to a dg method .the ` deal.ii ` library has the capability to handle problems with multiple equations applied to a single mesh such as the case of a elastic solid fluid interaction problem . in our casewe wish to apply different methods to the same equation on different regions of the mesh , which is conceptually the same problem in the ` deal.ii ` framework .in addition we will use the capability of the library .the ` deal.ii ` library has the capability to create collections of finite elements , ` hp::fecollection ` . heremultiple finite elements are grouped into one data structure .as the syntax suggests the usual use is for refinement to create a set of finite elements of the same type ( e.g. , scalar lagrange elements ` fe_q ` or discontinuous elements ` fe_dgq ` ) of varying degree .unfortunately it is not sufficient to create a ` hp::fecollection ` of cg and dg elements as the interface between the two regions will still be undefined . in order to create an admissible collection of finite elements we use ` fe_nothing ` .this is a finite element type in ` deal.ii ` with zero degrees of freedom . using the class we create two vector - valued finite element types and andcombine them in a ` hp::fecollection ` .we apply the first on the cg region , and the second on the dg region .now when we create a ` triangulation ` initialised with the location of cg and dg elements the degrees of freedom can be correctly distributed according to the finite element defined by ` hp::fecollection ` . when assembling the matrix for the finite element method we need only be careful that we are using the correct element of ` hp::fecollection ` and the correct part of .the most difficult case is on the boundary where from a dg element we must evaluate the contribution from the neighbouring cg element ( note that in the cdg method a jump is permissible on ) .if we implement the cdg method in ` deal.ii ` in this way we create two solutions : one for the ` fe_q`-`fe_nothing ` component and another for the ` fe_nothing ` -`fe_dgq ` component .consider a domain in , and .the dirichlet boundary conditions and the forcing function are chosen so that the analytical solution is this solution exhibits an exponential boundary layer along and of width , .we solve the finite element problem on a 1024 element grid and fix .this is larger than is required for stability ( see example 1 below ) but shows the behaviour of ` fe_nothing ` more clearly .we show each of the components of ` fe_system ` and the combined solution . for comparison we also show the dg finite element solution for the same problem . 
one advantage of following the` deal.ii ` framework is that the data structures will allow the implementation of methods .in fact we can envisage the implementation of a method where at each refinement there is the possibility to change the mesh size , polynomial degree or the element type .we propose no specific scheme here but simply remark that implementing a method is relatively straightforward with the ` fe_nothing ` approach .we present two numerical experiments highlighting the performance of the cdg method .both examples present layers when is small enough . in each casewe fix the region where the continuous method is to be applied then vary .this causes the layer to steepen . in the advection dominated regime ,i.e. , large and no steep layer present , we see the cdg solution approximates the true solution well . as we make smaller the layer forms andextends into the continuous region .as becomes smaller still the layer leaves the continuous region and the performance of the dg and cdg method is indistinguishable . in each experimentwe pick the and regions so that with the given refinement the region consists of exactly one layer of elements and coincides with .consider again the problem with true solution presented above .we solve the finite element problem on a 1024 element grid and fix so exactly one row of elements is in .as we vary the layer sharpens and moves entirely into the dg region .as we can see from figure [ fig : example1a ] before the layer has formed the two methods perform well .as the layer begins to form with decreasing it is not entirely contained in the discontinuous region and the error peaks . as the layer sharpens further itis entirely contained in the discontinuous region and the difference between the two solutions becomes negligible .now we look at a problem with an internal layer .let the advection coefficient be given by and pick the boundary conditions and right hand side so that the true solution is where is the error function defined by in figure [ fig : example2a ] we notice same the same behaviour as in example 1 .if the layer exists it must be contained within the discontinuous region for the two methods to perform equivalently . in figure[ fig : example2b ] we can see the cdg solution for various with the oscillations clearly visible when . when the layer is sharpened , the oscillations disappear .
For the stationary advection-diffusion problem the standard continuous Galerkin method is unstable without additional control on the mesh or the method. The interior penalty discontinuous Galerkin method is stable, but at the expense of an increased number of degrees of freedom. The hybrid method proposed in combines the low computational cost of the continuous method with the stability of the discontinuous method without a significant increase in degrees of freedom. We discuss the implementation of this method using the finite element library `deal.ii` and present some numerical experiments.
we introduce the operator calculus necessary to present our approach to ( local ) inversion of analytic functions . it is important to note that this is different from lagrange inversion andis based on the flow of a vector field associated to a given function .it appears to be theoretically appealing as well as computationally effective .acting on polynomials in , define the operators and .they satisfy commutation relations =i ] , =[b , c]=0 ] .an appell system is a system of polynomials in the variables such that 1 .the top degree term of is a constant multiple of ; 2 . , where has all components zero except for 1 in the position .rota is well - known for his _ umbral calculus_ development of special polynomial sequences , called _ basic sequences . _ from our perspective , these are canonical polynomial systems " in the sense that they provide polynomial representations of the heisenberg - weyl algebra , in realizations different from the standard one .our idea is to illustrate explicitly the rle of vector fields and their duals , using operator calculus methods for working with the latter ( in our volumes this viewpoint is prefigured in ) .the main feature of our approach is that the action of the vector field may be readily calculated while the action of the dual vector field on exponentials is identical to that of the vector field .then we note that acting iteratively with a vector field on polynomials involves the complexity of the coefficients , while acting iteratively with the dual vector field always produces polynomials from polynomials .so we can switch to the dual vector field for calculations .specifically , fix a neighborhood of 0 in .take an analytic function defined there , normalized to , .denote and the inverse function , i.e. , , .then is defined by power series as an operator on polynomials in and =v'(d) ] . in other words , and generate a representation of the hw algebra on polynomials in .the basis for the representation is , i.e. , is a _ raising operator_. and so that is the corresponding _ lowering operator . _the form a system of _ canonical polynomials _ or generalized appell system .the operator of multiplication by is given by , which is a _ recursion operator _ for the system .we identify vector fields with first - order partial differential operators .consider a variable with corresponding partial differential operator .given as above , let be the vector field .then we observe the following identities as any operator function of acts as a multiplication operator on .the important property of these equalities is that and commute , as they involve independent variables .so we may iterate to get on the other hand , we can solve for the left - hand side of this equation using the method of characteristics .namely , if we solve with initial condition , then for any smooth function , thus to solve equation ( [ eq : vf ] ) , multiply both sides by and observe that we get integrating yields or , writing for , we have we can set to get on the one hand while in summary , we have the expansion of the exponential of the inverse function or this yields an alternative approach to inversion of the function rather than using lagrange s formula .we see that the coefficient of yields the expansion of .in particular , itself is given by the coefficient of on the right - hand side . 
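Before stating this precisely, a small worked instance may help. Assume the normalization in which the raising operator acts on polynomials as multiplication by x composed with 1/f'(D), matching the recursion described above, and take f(x) = x - x^2, whose inverse is g(t) = (1 - sqrt(1 - 4t))/2:

```latex
% Illustrative normalization: raising operator R = x\,[f'(D)]^{-1}, f(x) = x - x^2,
% so f'(D) = 1 - 2D and [f'(D)]^{-1} = 1 + 2D + 4D^2 + \cdots acting on polynomials.
\begin{aligned}
p_0 &= 1, & p_1 &= R\,1 = x,\\
p_2 &= R\,p_1 = x^2 + 2x, & p_3 &= R\,p_2 = x^3 + 6x^2 + 12x,\\
p_4 &= R\,p_3 = x^4 + 12x^3 + 60x^2 + 120x.
\end{aligned}
```

The coefficients of x, divided by n!, are 1, 1, 2, 5, which are the Catalan numbers and indeed the Taylor coefficients of the inverse g(t) = t + t^2 + 2t^3 + 5t^4 + ..., i.e. the solution of g - g^2 = t with g(0) = 0.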
specifically , we have : the coefficient of in is equal to , each giving the coefficient of in the expansion of .expand both sides of equation ( [ eq : iterate ] ) , using for , in powers of and , and let : and compare with equation ( [ eq : expansion ] ) .the same idea works in several variables .we have analytic in a neighborhood of in .denote the jacobian matrix by and its inverse by .the variables commute and act as raising operators for generating the basis .namely , . and , are lowering operators : .denote by . with variables and corresponding partials , define the vector fields for a vector field , we have the identities the method of characteristics applies as in one variable and as in equation ( [ eq : evf ] ) thus , we have the expansion in particular , the component , , of the inverse function is given by the coefficient of in the above expansion .an important feature of our approach is that to get an expansion to a given order requires knowledge of the expansion of just to that order .the reason is that when iterating , at step it is acting on a polynomial of degree , so all terms of the expansion of of order or higher would yield zero acting on .this allows for streamlined computations . for polynomial systems , will have polynomial entries , and will be rational in . hence raising operators will be rational functions of , linear in .thus the coefficients of the expansion of the entries of would be computed by finite - step recurrences .note that to solve for near , with , apply the method to , so that .the inverse is . then .in this section we focus on the one - variable case .we illustrate the method with examples , and then present an algorithm suitable for symbolic computation . in one variable , solvinga cubic is interesting as the expansion of can be expressed in terms of chebyshev polynomials .. then .thus where are chebyshev polynomials of the second kind .specializing provides interesting cases .for example , let , or .then the coefficients in the expansion of are periodic with period 8 and , in fact , the coefficient of in the polynomials yield the coefficients in the expansion of the inverse . hereare some polynomials starting with , : this gives to order 6 : this expansion will give approximate solutions to for near .inversion of the chebyshev polynomial can be used as the basis for solving general cubic equations ( ) .to get started we have , with , so , , , etc .we find in this case , we can find the expansion analytically . to solve ,write invert to get , for integer , , with denoting the principal branch. then we want a branch with corresponding to . with ,we want the argument of the cosine to be , for some integer .this yields the condition . taking , we get , with the minus sign .namely , using hypergeometric functions ( see next example ) and rewriting , we find the form if we generate the polynomials , we can find the expansion of to any order .a similar approach is interesting for the chebyshev polynomial . satisfies the hypergeometric differential equation which can be written in the form =\lambda^2\,f\ ] ] with here denoting . for integer ,this is the differential equation for the corresponding chebyshev polynomial . in general , these are _ chebyshev functions_. as noted above , for , we take , and , as above , we require with , we have the solution for symbolic computation using maple , one can use the ore_algebra package . 1 .first fix the degree of approximation .expand as a polynomial to that degree .2 . 
declare the ore algebra with one variable , , and one derivative , .3 . define the operator in the algebra .iterate starting with using the applyopr command .extract the coefficient of to get the expansion of .here is a matrix approach that can be implemented numerically . fix the order of approximation . cut off the expansion at . let the matrix define the auxiliary diagonal matrices q&=&\begin{pmatrix}1/\gamma(1)&0&\ldots&0\cr 0&1/\gamma(2)&\ldots&0\cr \vdots&\vdots&\ddots&\vdots\cr 0&0&\ldots&1/\gamma(n)\cr\end{pmatrix}.\end{aligned}\ ] ]note that . denoting , we have the recursion = [ c_1^{(k)},c_2^{(k)},\ldots , c_n^{(k ) } ] pwq.\ ] ] the condition gives . then yields . we see that for .we iterate as follows : \1 . start with times the unit vector $ ] of length .multiply by .iterate , multiplying on the right by at each step .finally , multiply on the right by . the top row will give the coefficients of the expansion of to order .here is a simple system for illustration . so the raising operators are expanding yields , with , thus so any given order , the polynomials of degree are an invariant subspace for the operator up until the last step .we can formulate an alternative matrix computation as follows .let and denote the matrices of the operators of differentiation and multiplication by respectively on polynomials of degree less than or equal to .the space is invariant under differentiation , and we cut off multiplication by to be zero on . we get with the first row of all zeros .we then compute the matrix times , where is computed as a matrix polynomial by substituting in up to order .then has a matrix representation , , on the space and we iterate multiplying by acting on the unit vector .these give the coefficients of the polynomials .in several variables , one constructs matrices for and using kronecker products of and with the identity . for example , with in the spot .similarly for .then one has explicit matrix representations for the dual vector fields and the polynomials can be found accordingly .this approach is explicit , but seems to much slower than using the built - in ore_algebra package .
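The matrices P, W and Q referred to above were stripped from the text, so the following numpy sketch instead follows the second construction mentioned at the end: represent differentiation and multiplication by x as matrices on polynomials of degree at most N, form the dual vector field as a truncated matrix polynomial, and iterate it on the vector representing the constant polynomial 1. The assumption W = 1/f' is the same as in the symbolic sketch above.

```python
import numpy as np
import sympy as sp
from math import factorial

def inverse_series_matrix(w_coeffs, order):
    """Matrix version on polynomials of degree <= order (monomial basis):
    D differentiates, X multiplies by x (truncated), V = X @ W(D) is the
    dual vector field; iterating V on the vector for the polynomial 1 and
    reading component 1 (the coefficient of x) gives the inverse series."""
    N = order
    D = np.zeros((N + 1, N + 1))
    X = np.zeros((N + 1, N + 1))
    for k in range(1, N + 1):
        D[k - 1, k] = k        # d/dx x^k = k x^{k-1}
        X[k, k - 1] = 1.0      # x * x^{k-1} = x^k   (x^N is cut off to zero)
    WD = sum(w_coeffs[k] * np.linalg.matrix_power(D, k)
             for k in range(min(len(w_coeffs), N + 1)))
    V = X @ WD
    vec = np.zeros(N + 1)
    vec[0] = 1.0               # the polynomial 1
    coeffs = [0.0]             # the inverse has no constant term
    for n in range(1, N + 1):
        vec = V @ vec
        coeffs.append(vec[1] / factorial(n))
    return coeffs

# same example as above: f(z) = z*exp(z), so W(y) = exp(-y)/(1+y)
y = sp.symbols('y')
w = sp.series(sp.exp(-y) / (1 + y), y, 0, 7).removeO()
print(inverse_series_matrix([float(w.coeff(y, k)) for k in range(7)], 6))
```

Because multiplication by x only raises the degree, the truncation at degree N does not affect the coefficient of x in the first N iterates, consistent with the remark that an expansion to a given order only requires knowledge of the input to that order.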
solving analytic systems using inversion can be implemented in a variety of ways . one method is to use lagrange inversion and variations . here we present a different approach , based on dual vector fields . for a function analytic in a neighborhood of the origin in the complex plane , we associate a vector field and its dual , obtained via an operator version of the fourier transform . the construction extends naturally to functions of several variables . we illustrate with various examples and present an efficient algorithm that is readily implemented as a symbolic procedure in maple and is suitable as well for numerical computations in languages such as c or java .
we gratefully acknowledge sandra gonzález - bailón for interesting discussions and comments on the draft . this work has been partially supported by mineco through grants fis2011 - 25167 and fis2012 - 38266 ; comunidad de aragón ( spain ) through a grant to the group fenol and by the ec fet - proactive project plexmath ( grant 317614 ) . a. a. also acknowledges partial financial support from the icrea academia and the james s. mcdonnell foundation .
the ability to understand and eventually predict the emergence of information and activation cascades in social networks is core to complex socio - technical systems research . however , the complexity of social interactions makes this a challenging enterprise . previous works on cascade models assume that the emergence of this collective phenomenon is related to the activity observed in the local neighborhood of individuals , but do not consider what determines the willingness to spread information in a time - varying process . here we present a mechanistic model that accounts for the temporal evolution of the individual state in a simplified setup . we model the activity of the individuals as a complex network of interacting integrate - and - fire oscillators . the model reproduces the statistical characteristics of the cascades in real systems , and provides a framework to study time - evolution of cascades in a state - dependent activity scenario . the proliferation of social networking tools and the massive amounts of data associated to them has evidenced that modeling social phenomena demands a complex , dynamic perspective . physical approaches to social modeling are contributing to this transition from the traditional paradigm ( scarce data and/or purely analytical models ) towards a data - driven new discipline . this shift is also changing the way in which we can analyze social contagion and its most interesting consequence : the emergence of information cascades . theoretical approaches , like epidemic and rumor dynamics , reduce these events to physically plausible mechanisms . these idealizations deliver analytically tractable models , but they attain only a qualitative resemblance to empirical results , for instance regarding avalanche size distributions . yet , the challenge of having mechanistic models that include more essential factors , like the propensity of individuals to retransmit information , still remain open . with the availability of massive amounts of microblogging data logs , like twitter , we are in a position to scrutinize the patterns of real activity and model them . the vast majority of models to this end are based on a dynamical process that determines individuals activity ( transmission of information ) , and this activity is propagated according to certain rules usually based on the idea of social reinforcement , i.e. the more active neighbors an individual has , the larger his probability to become also active , and thus to contribute to the transmission of information . along these lines , the often used threshold model ( and its networked version ) mimics social dynamics , where the pressure to engage a behavior increases as more friends adopt that same behavior . briefly , the networked threshold model assigns a fixed threshold , drawn from a distribution , to each node ( individual ) in a complex network of size and an arbitrary degree distribution . each node is marked as _ inactive _ except an initial seeding fraction of active nodes , typically . a node with degree updates its state becoming active whenever the fraction of active neighbors . the simulation of this mechanistic process evolves following this rule until an equilibrium is reached , i.e. , no more updates occur . given this setup , the _ cascade condition _ in degree - uncorrelated networks can be derived from the growth of the initial fraction of active nodes , who on their turn might induce the one - step - to - activation ( vulnerable ) nodes . 
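Before continuing with the cascade condition, it may help to have the threshold dynamics just described in executable form. The sketch below is a minimal networkx/numpy version that uses a single fixed threshold instead of a threshold distribution; the graph size, threshold and seed fraction are illustrative assumptions, not the values used in the works cited above.

```python
import numpy as np
import networkx as nx

def watts_cascade(G, phi=0.18, seed_fraction=1e-3, rng=None):
    """One realization of the networked threshold model: a node becomes (and
    stays) active once the fraction of its active neighbours reaches phi."""
    rng = np.random.default_rng(rng)
    active = {n: False for n in G}
    n_seeds = max(1, int(seed_fraction * len(G)))
    for n in rng.choice(list(G), size=n_seeds, replace=False):
        active[n] = True
    changed = True
    while changed:                                  # until no more updates occur
        changed = False
        for n in G:
            if active[n] or G.degree(n) == 0:
                continue
            if sum(active[m] for m in G[n]) / G.degree(n) >= phi:
                active[n] = True
                changed = True
    return sum(active.values()) / len(G)            # final cascade size

G = nx.erdos_renyi_graph(10_000, 4.0 / 10_000)      # Poisson network, <k> ~ 4
print(watts_cascade(G, phi=0.18, rng=0))
```

Averaging the returned cascade size over many realizations while sweeping phi or the mean degree reproduces the qualitative cascade window discussed next.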
therefore , large cascades can only occur if the average cluster size of vulnerable nodes diverges . using a generating function approach , this condition is met at where is the fraction of nodes of degree close to their activation threshold and is the average degree . for all the clusters of vulnerable nodes are small , and the initial seed can not spread beyond isolated groups of early adopters ; on the contrary , if then small fraction of disseminators may unleash with finite probability large cascades . more recently , the cascade condition has been analytically determined for different initial conditions as well as for modular and correlated networks , while placing the threshold model in the more general context of critical phenomena and percolation theory . these efforts , however , have a limited scope since they can account only for one - shot events , for instance the spread of chain letters , the diffusion of a single rumor or the adoption of an innovation . in other cases , instead , empirical evidence suggests that once an agent becomes active that behavior will be sustained , and reinforced , over time . this creates a form of enduring activation that will be affected and affect other agents over time in a recursive way . indeed , cascades evolve in time , as a consequence of dynamical changes in the states of agents as dynamics progress . cascades are then events that brew over time in a system that holds some memory of past interactions . moreover , the propensity to be active in the propagation of information sometimes depends on other factors than raw social influence , e.g. , mood , personal implication , opinion , etc . in this paper , we present a threshold model , with self - sustained activity , where system - wide events emerge as microscopical conditions become increasingly correlated . we capitalize on the classical integrate - and - fire oscillator ( ifo ) model by mirollo and strogatz . in this model , each node in a network of size is characterized by a voltage - like state of an oscillator , which monotonically increases with phase until it reaches a fixed threshold , and then it _ fires _ ( emits information to its coupled neighbors , and resets its state to 0 ) . the pulsatile dynamics , makes that each time a node fires , the state of its neighbors is increased by . more precisely , $ ] is uniformly distributed at and evolves such that \phi)\ ] ] parametrized by to guarantee that is concave down . setting the fixed threshold to 1 , whenever then _ instantaneously _ , if the edge exists . integrate - and - fire models have been extremely useful to assess the bursty behavior and the emergence of cascades in neuronal systems represented in lattices and complex networks . we propose to model social systems as a complex network of ifos representing the time evolving ( periodic in this case ) activation of individuals . the model comprises two free parameters , and , which are closely related . dissipation may be interpreted as the willingness or _ intrinsic propensity _ of agents to participate in a certain diffusion event : the larger , the shorter it takes for a node to enter the tip - over interval . conversely , quantifies the amount of influence an agent exerts onto her neighbors when she shows some activity . larger s will be more consequential for agents , forcing them more rapidly into the tip - over region . both quantities affect the level of _ motivation _ of a given agent . 
note that in the current framework maps onto in the classical threshold model , in the sense that both determine the width of the tip - over region . finally , the phase is translated into time steps , and then prescribed as . we use eq . [ g1 ] to derive the cascade condition in this new framework . note that now the distribution of activity is governed by where corresponds to the states probability distribution at a certain time . for an initial uniform distribution of motivation and a fixed , the condition for the emergence of cascades reads and in general for any time , which implies that the cascade condition depends on time in our proposed framework . it is worth noticing that , in this scenario , is not a function of the node degree . probability distributions of four different representative times along the synchronization window . each snapshot depicts the -state histogram of the oscillators . the dynamics begins with a random uniform distribution of -states inset ( a) and it progressively narrows during the transition to synchrony inset ( d ) . main : largest fraction of synchronized nodes across time . the path to synchronization evolves steadily at a low level , and eventually suffers an abrupt transition . ] as the dynamics evolve in time , the states of the nodes progressively correlate and , consequently , the distribution of states changes dramatically . the evolution of the states distribution is depicted in fig . [ fig1 ] . the initially uniform distribution ( inset ( a ) ) evolves towards a dirac function ( inset ( d ) ) as the network approaches global synchronization . we have not been able to find a closed analytical expression for the consecutive composition of the function after an arbitrary number of time steps to reveal the evolution of , nonetheless it can be solved numerically . eq . [ g3 ] reduces to the cascade condition is thus that exactly corresponds to the bond percolation critical point on uncorrelated networks . for the case of random poisson networks , then we can now explore the cascade condition in the phase diagram in fig . [ fig2 ] , and compare the analytical predictions with results from extensive numerical simulations . since the time to full synchronization ( global cascade ) is different for each , we introduce _ cycles_. one cycle is complete whenever every node in the network has fired at least once . in this way we bring different time scales to a common , coarse - grained temporal ground , allowing for comparison . the regions where cascades above a prescribed threshold are possible are color - coded for each cycle , and black is used in regions where cascades do not reach ( labeled as n.c . , `` no cascades '' , in fig . [ fig2 ] ) . note that if cascades are possible for a cycle , they will be possible also for any . this figure renders an interesting scenario : on the one hand , it suggests the existence of critical values below which the cascade condition is systematically frustrated ( black area in the phase diagram ) . on the other , it establishes how many cycles it takes for a particular pair to attain macroscopical cascades ( full synchronization ) which becomes an attractor thereafter , for undirected connected networks . given the cumulative dynamics of the current framework , in contrast with watts model , the region in which global cascades are possible grows with . 
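A simplified, discrete-time sketch of the integrate-and-fire dynamics described above is given below. It replaces the concave-down state profile by a linear ramp, uses an undirected random graph, and counts all firings triggered within one time step as a single avalanche; cycles are defined, as in the text, by every node having fired at least once. All parameter values are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def ifo_cascades(G, eps=0.03, dt=0.01, cycles=3, rng=None):
    """Simplified discrete-time integrate-and-fire dynamics: states grow
    linearly in time, a node fires when its state reaches 1, resets to 0 and
    raises every neighbour by eps; all firings triggered within one time step
    are counted as a single avalanche."""
    rng = np.random.default_rng(rng)
    nodes = list(G)
    state = dict(zip(nodes, rng.uniform(0.0, 1.0, len(nodes))))
    sizes, fired_once, completed = [], set(), 0
    while completed < cycles:
        for n in nodes:
            state[n] += dt
        front = [n for n in nodes if state[n] >= 1.0]
        avalanche = set()
        while front:                                 # propagate within this step
            nxt = []
            for n in front:
                if n in avalanche:
                    continue
                avalanche.add(n)
                fired_once.add(n)
                state[n] = 0.0
                for m in G[n]:
                    state[m] += eps                  # pulse coupling
                    if state[m] >= 1.0 and m not in avalanche:
                        nxt.append(m)
            front = nxt
        if avalanche:
            sizes.append(len(avalanche))
        if len(fired_once) == len(nodes):            # every node fired: one cycle done
            completed += 1
            fired_once = set()
    return sizes

G = nx.erdos_renyi_graph(2000, 6.0 / 2000)
sizes = ifo_cascades(G, eps=0.03, cycles=3, rng=1)
print(len(sizes), max(sizes))
```

Sweeping eps (and the dissipation, once the concave profile is restored) over such runs is the kind of numerical experiment summarised in the phase diagram of fig. [fig2].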
turning to the social sphere , these results open the door to predicting how long it takes for a given topology , and a certain level of inter - personal influence , to achieve system - wide events . furthermore , the existence of a limiting determines whether such events can happen at all . additionally , the predictions resulting from eq . [ g5 ] are represented as dashed lines in fig . [ fig2 ] . for the sake of clarity , we only include predictions for ( dashed black ) , ( dashed gray ) and ( dashed white ) . projections from this equation run close to numerical results in both homogeneous ( fig . [ fig2]a ) and inhomogeneous networks ( fig . [ fig2]b ) , although some deviations exist . noteworthy , eq . [ g5 ] clearly overestimates the existence of macroscopic cascades in the case of scale - free networks at . indeed , does not yet incorporate the inherent dynamical heterogeneity of a scale - free topology , thus eq . [ g5 ] is a better predictor as the dynamics loose memory of the hardwired initial conditions . in the general case , deviations are due to the fact that the analytical approach in the current work is not developed beyond first order . second order corrections to this dynamics ( including dynamical correlations ) should be incorporated to the analysis in a similar way to that in , however it is beyond the scope of the current presentation . cascade diagram for different cycles ( coded by color ) , with fixed . vertical axis and each dashed line define a confined region in which global cascades might occur according to eq . [ g5 ] and for a specific cycle ( here we show only the expected zones for dashed white and dashed gray ) . results are obtained for synthetic erds - rnyi ( a ) and scale - free with ( b ) uncorrelated networks of size . a cascade is considered `` macroscopical '' if the synchronized cluster . color codes indicate the existence of at least one cascade in numerical simulations ; analytical predictions are averaged over 200 networks with random initial conditions . note that the cascade condition in ( a ) often underestimates the actual cascade regions because it does not take into account second order interactions ; the same applies in the lower panel ( b ) , except for where the analytical prediction overestimates the results because the inclusion of the hub into the cascade is improbable starting from a uniform distribution . ] according to mirollo & strogatz , synchronicity emerges more rapidly when or is large ; then the time taken to synchronize is inversely proportional to the product . in our simulations , we use this cooperative effect between coupling and willingness to fix , which is set to a set of values ( slightly above or below ) , and use to fine - tune the matching between observed cascade distributions and our synthetic results ( see fig . [ fig3 ] ) . to illustrate the explanatory power of the dynamical threshold model , we use data from www.twitter.com . they comprise a set of million spanish messages publicly exchanged through this platform from the 25th of april to the 25th of may , 2011 . in this period a sequence of civil protests and demonstrations took place , including camping events in the main squares of several cities beginning on the 15th of may and growing in the following days . notably , a pulse - based model suits well with the affordances of this social network , in which any emitted message is instantly broadcasted to the author s immediate neighborhood its set of _ followers_. 
for the whole sample , we queried for the list of followers for each of the emitting users , discarding those who did not show outgoing activity during the period under consideration . the set of users plus their following relations constitute the topological support ( directed network ) for the dynamical process running on top of it . the average degree of this network is and its degree distribution scales like . on top of the described network , we measure the cascade size distribution for different periods as in . in fig . [ fig3 ] these periods correspond to the `` slow - growth '' phase ( 25th april to 3rd may ; blue squares in the upper panel ) and to the `` explosive '' phase ( 19th to 25th may ; blue squares in the lower panel ) , which comprehends the most active interval the reaction to the spanish government ban on demonstrations around local elections on the 22nd may . on the other hand , we run the proposed dynamics on the same topology for different values , with remarkable success ( red circles ) . interestingly , an appropriate fitting is attained when is adapted to real - world excitation level : when cascades are measured in an interval around the 15th may , a higher is needed nominating it as a suitable proxy for the system s excitability . additionally , modeling low - activity periods can be achieved just by setting . and ; last eight days ( bottom ) for which we have and . note that varies for different values . the model performs well in both periods , the relative error of the slope in the linear region is . real data distributions are measured as in . ] summarizing , we have proposed a time - dependent continuous self - sustained model of social activity . the model can be analyzed in the context of previous cascade models , and encompasses new phenomenology as the time - dependence of the critical value of the emergence of cascades . we interpret it under a social perspective , where collective behavior is seen as an evolving phenomenon resulting from inter - personal influence , contagion and memory . in a general perspective , our modeling framework offers an alternative approach to the analysis of interdependent decision making and social influence . it complements threshold models and complex contagion taking into account time dynamics and recursive activation , and also splits motivation into two components : intrinsic propensity and strength of social influence . we also anticipate that the exploration of the whole parametric space would lead to new insights about the effects of social influence and interdependence in social collective phenomena .
galaxy clusters are the largest gravitationally bound systems in our universe .they typically contain hundreds of galaxies and are detected in a broad range of frequencies . in particular , we observe strong x - ray radiation from the hot intra cluster medium ( icm ) , which is mainly caused by continuous free - free emission of thermal electrons and discrete metal lines .observations show that clusters contain roughly ten per cent of their total mass in a hot baryonic gas component with temperatures up to several k ( corresponding to about 15 kev ) at typical densities of . the state of the icm and its related physical properties are crucial to investigate and learn about the formation and evolution of galaxy clusters .the gas is partially heated at the virial shock by the gravitational infall and furthermore raised in temperature by shocks within the icm . galactic winds driven by the star formation process and feedback of active galactic nuclei ( agn ) inject additional kinetic and thermal energy to the hot cluster medium .radiative processes within the icm , such as thermal bremsstrahlung or metal lines allow the gas to cool and loose energy . furthermore , all galaxy cluster are known to host magnetic fields with strengths up to several g , which influences the dynamics of the hot plasma .the structure and the amplitude of the cluster magnetic fields guide the propagation of charged particles and contribute to the equation of motion via the lorentz force .the magnetic field lines are assumed to be highly tangled and twisted on small scales , which then will contribute of the heating of the plasma by magnetic reconnection ( e.g. * ? ? ?little is know about the magnetic field structure as its mapping is observationally challenging .we discuss the influence of the anisotropic magnetic field on plasma transport processes and in particular on thermal conduction . from a microscopic point of view thermal conductionis the heat transport due to collisions of electrons .therefore , it strongly depends on temperature , which allows us to use probes of the hot icm to study large - scale transport of heat . from a macroscopic point of viewthermal conduction can be modelled as a diffusion process re - distributing internal energy .usually , the so called spitzer conduction ( see * ? ? ?* ) is used as a general formulation and multiplied with an efficiency factor , which changes with and depends on the astrophysical environment .this original formulation assumes an isotropic movement of electrons and corresponding distribution of collisions . as galaxy clusters host significantly strong magnetic fieldsthe influence of the magnetic fields onto thermal conduction has to be taken into account .since particle movement perpendicular to magnetic field lines is restricted , the assumption of isotropic collisions does not hold any longer and thermal conduction is coupled to the magnetic field topology .this result in different heat transport timescales parallel and perpendicular to the magnetic field .however , the magnetic fields needs to be sufficiently strong to dominate the mean free path .observational evidence in galaxy clusters of suppressed perpendicular heat transport is found in so called cold fronts , i.e. 
regions with a rather stable temperature gradient , but no pressure gradient .this phenomenon demonstrates the insulation of gas with respect to conduction most probably through magnetic fields .moreover , turbulent magnetic fields also become a very interesting case to study giving rise to different ways to estimate efficiency factors averaging the suppression effect over volume . for example proposed that spitzer conduction should be suppressed by a factor of about 1/300 , while claimed that conduction can still be efficient up to 1/5 of the spitzer value in highly turbulent environments .thermal conduction was frequently discussed as a heating source to balance cooling losses in galaxy clusters in order to explain why hot gas is observed despite the cooling times being smaller than the cluster life times .however , the impact of this effect is questioned and even less significant , when the anisotropies of the magnetic field are included . for detailed discussions of this problem we refer to , , , , , and . in pioneering work ,usually a factor of 1/3 is multiplied onto the spitzer value as a correction for the influence magnetic fields ( e.g. , supported by work presented in ) .we present a numerical implementation of the anisotropic heat transport equation including the seeding and evolution of magnetic fields applied to cosmological simulations of galaxy clusters .a solver for anisotropic thermal conduction is already part of several commonly used grid based codes , which solve the equations of magnetohydrodynamics ( mhd ) . for implementation details and cluster simulationssee for example , , ( athena code ) , ( flash code ) or ( enzo code ) . however , we present the first implementation of anisotropic thermal conduction into a smoothed particle magnetohydrodynamics ( spmhd ) code .in contrast to the eulerian methods , which discretise the volume , sph discretises the mass and is commonly used in simulations of structure formation .the lagrangian nature of sph ensures the conservation of energy , momentum and angular momentum and allows to resolve large density gradients .for a recent review on the sph method we refer to .in this paper we present simulations performed with the n - body / sph code gadget with the non - ideal mhd implementation based on and .the sph code evolves entropy as the thermodynamical variable of choice .additionally , we include several improvements for sph such as the wendland kernel functions .our cosmological simulations include sub - grid models for radiative cooling , star formation and supernova feedback as described by .furthermore , we employ a supernova seeding scheme for the magnetic field in contrast to previous simulations using uniform initial magnetic fields ( e.g. * ? ? ?we use and extend the existing implementation of isotropic thermal conduction by with the conjugate gradient solver described in .the paper is structured as follows . in section [ phenomenologyconduction ]we explain the physics behind ( anisotropic ) thermal conduction , and describe our sph implementation in section [ numericalimplementation ] .section [ tests ] presents several test problems before we analyse simulations of galaxy cluster formation in section [ applicationgeneral ] .we start with a brief introduction in the physical properties and concepts of isotropic as well as anisotropic conduction . 
according to can write down a conduction heat flux resulting from a temperature gradient as with the conduction coefficient .for an idealized lorentz gas we can assume spitzer conductivity , which equals a coefficient of with the average proton number of the plasma , the coulomb logarithm and electron temperature , mass and elementary charge .most important is the strong dependence of conductivity on the electron temperature .therefore , we assume that thermal conduction has an important influence mainly on very hot gas , as for example in the central regions of massive galaxy clusters where the plasma reaches temperatures up to about . in fact , we need to multiply the idealized spitzer conductivity by an additional factor : this factor has been calculated by and is highly dependent on the average proton number of the plasma . for a pure proton electron plasma and rises up close to 1 for large values of . describe a way to calculate the average proton number by summation over all ions in the plasma . applying a primordial hydrogen - helium plasma, they find a value of .using the tabulated values in we obtain a factor of .when used for cosmological simulations one often assumes a primordial gas distribution . a typical value for an effective conductivity is e.g. given by or with the electron number density and the mean free path of the electrons . because of the inverse dependence on mass we infer that electrons give a much stronger contribution to heat conduction than protonsthis is reasonable since lighter particles have higher thermal velocities at a fixed temperature and can be accelerated much easier .therefore , the amount of collisions in a given time span are drastically increased for low mass particles .consequently , only electrons are considered in the calculation and we omit the index in our equations hereafter .we neglect any dependency of the coulomb logarithm on temperature and electron density and use , which is a fairly good approximation for typical plasmas in our study .more precise calculations for different collision events ( e.g. electron - electron or electron - proton ) can be found for example in .what remains is the strong dependence on temperature to the power of .furthermore , we need to apply an important correction .so far we assumed , that the typical length scale of the temperature gradient is always much larger than the mean free path .however , for very low density plasmas one can not expect a high conductivity even if the temperature rises a lot , since scatterings and therefore energy transfer events happen only at a very low rate . have calculated the saturated heat flux for this case as interpolating between these findings and the common spitzer conduction coefficient we estimate a corrected heat flux as alternatively , we can redefine the conduction coefficient as this modified spitzer conduction is applicable to galaxy clusters and giant elliptical galaxies if magnetic fields are not taken into account .for a detailed discussion we refer to . finally , we can write the effect of thermal conduction as a change of specific internal energy which depends on the density and on the heat flux directed anti - parallel to the temperature gradient . next , we add magnetic fields to the picture . as previously mentioned , thermal conduction is based on coulomb collisions of charged particles . except these collisions particlesare allowed to move freely in the plasma . 
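A compact numerical version of the saturation-limited conductivity described above is sketched below in cgs units. The Spitzer prefactor, the Cowie & McKee style saturated flux and the harmonic interpolation between the two are standard expressions quoted from the literature rather than recovered from the stripped formulas, so the exact numerical factors should be treated as assumptions.

```python
import numpy as np

K_B = 1.380649e-16      # erg / K
M_E = 9.109e-28         # g

def spitzer_kappa(T, coulomb_log=37.8):
    """Classical Spitzer conductivity ~ 1.84e-5 T^{5/2} / lnLambda
    in erg s^-1 cm^-1 K^-1 (commonly quoted prefactor)."""
    return 1.84e-5 * T**2.5 / coulomb_log

def effective_heat_flux(T, n_e, grad_T, coulomb_log=37.8):
    """Saturation-limited conductive flux: harmonic interpolation between the
    Spitzer flux and a Cowie & McKee style saturated flux.  Inputs in cgs."""
    q_sp = spitzer_kappa(T, coulomb_log) * grad_T
    q_sat = 0.4 * n_e * K_B * T * np.sqrt(2.0 * K_B * T / (np.pi * M_E))
    return q_sp * q_sat / (q_sp + q_sat)

kpc = 3.086e21
# hot ICM example: T = 1e8 K, n_e = 1e-3 cm^-3, gradient of 1e8 K over ~100 kpc
print(effective_heat_flux(1e8, 1e-3, 1e8 / (100.0 * kpc)))
```

For the quoted ICM conditions the Spitzer and saturated fluxes turn out to be of comparable size, which is why the saturation correction can matter in practice.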
however , in the presence of magnetic fields the movement perpendicular to the field lines is restricted .the electrons move on spiral trajectories around the field lines .the frequency of the circular motion , which depends on the strength of the magnetic field , is called larmor- or gyrofrequency : to see how this affects the capability of electrons to transport energy , we present some phenomenological ideas and scaling relations on how a general electron diffusion process is affected by magnetic fields , following . due to the similar microscopic originwe infer the same relations to hold also for thermal conduction .this connection is motivated through some scaling relations starting with the ideal gas law assuming a more or less constant density we infer knowing that the source of a heat flux corresponds to the time evolution of pressure and using eq .( [ eqbasicheatflux ] ) we obtain discretising the derivatives by typical length and time scales we get for the conduction coefficient where we can identify the diffusion coefficient .according to this relation , the two coefficients behave similarly and we can therefore apply the following scaling relations on an implementation of anisotropic thermal conduction . at first, we connect the mean free path and collision time via the particle s velocity .a typical diffusion coefficient of units can be defined as since particle movement parallel to the magnetic field is not restricted , the diffusion along the field lines should not be affected , which corresponds to .we assume that motion of particles perpendicular to magnetic field lines is only possible by jumps between cyclotron trajectories , which results in a diffusion coefficient like therefore , the relation between the two coefficients is this is however only valid if , ergo if the gyroradius is much smaller than the mean free path . in other words , we need the magnetic field to impose a notable restriction onto the electrons movement . in the regime of we have to change the relation in order to ensure : to evaluate this relation for a given system we require the collision time or the corresponding frequency with the plasma frequency and the debye length putting these relations together we finally obtain to check the order of magnitude of the fraction of perpendicular and parallel diffusion coefficient , we use typical values for the magnetic field strength , temperature and density in galaxy clusters ( see section [ introduction ] ) .( [ eqdiffusionbsquared ] ) results in a factor of which means that conduction perpendicular to the magnetic field is be typically extremely suppressed in the icm . in a different approach a criterion for the minimum magnetic field which is needed for anisotropic conduction of which makes it easier to validate the criterion .however , these relations are only phenomenological estimates.additionally , perpendicular diffusion is overlayed with turbulence transport processes , which are extremely difficult to describe. however , laboratory experiments show , that the scaling with the magnetic field effectively changes from to , which characterises so called bohm diffusion . 
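The order-of-magnitude estimate quoted above can be reproduced with a few lines of code. The electron mean free path is computed from the standard Coulomb-collision expression and the gyroradius from the thermal speed; both formulas are textbook plasma results inserted here because the originals were stripped, and the min(1, ·) cap implements the requirement that the ratio never exceeds unity once the gyroradius becomes larger than the mean free path.

```python
import numpy as np

K_B, M_E, E_ESU, C_LIGHT = 1.380649e-16, 9.109e-28, 4.803e-10, 2.998e10   # cgs

def perpendicular_suppression(T, n_e, B, coulomb_log=37.8):
    """Estimate D_perp / D_par ~ (r_g / lambda_mfp)^2 for thermal electrons.
    T in K, n_e in cm^-3, B in Gauss; capped at 1 when r_g > lambda_mfp."""
    v_th = np.sqrt(K_B * T / M_E)                                   # thermal speed
    lam = 3**1.5 * (K_B * T)**2 / (4 * np.sqrt(np.pi) * n_e * E_ESU**4 * coulomb_log)
    r_g = M_E * C_LIGHT * v_th / (E_ESU * B)                        # gyroradius
    return min(1.0, (r_g / lam)**2), lam, r_g

ratio, lam, r_g = perpendicular_suppression(T=1e8, n_e=1e-3, B=1e-6)
print(f"lambda ~ {lam / 3.086e21:.1f} kpc, r_g ~ {r_g:.1e} cm, "
      f"D_perp/D_par ~ {ratio:.1e}")
```

For T = 10^8 K, n_e = 10^-3 cm^-3 and B = 1 μG this gives a mean free path of roughly 20 kpc, a gyroradius of a few thousand kilometres, and a suppression of order 10^-29, consistent with the statement that perpendicular conduction is essentially switched off in the ICM.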
according to the calculations above we construct a scaling relation for this kind of behaviour of we assume an electrons movement with their thermal speed and neglect further influences for example by plasma instabilities .we get in total which allows a much stronger diffusion orthogonal to the magnetic field lines for typical values .a more detailed analysis with similar results is given for example in .for highly tangled magnetic fields discuss if the coherence length should replace the gyroradius .however , they find that this assumption is wrong .this matches the considerations of who present that tangled magnetic fields do not suppress thermal conduction very strongly , despite general believe .their results state a reduction factor of which is an average over the local angles between magnetic field lines and the temperature gradient .we briefly analyse this behaviour for totally random magnetic field configurations in section [ temperaturestep ] . summing up, thermal conduction perpendicular to magnetic fields lines with reasonable field strengths is in general almost totally suppressed .when we come into a regime where we need to apply scaling relations regarding the magnetic fields , the ratio scales either like or . we finalize our considerations and derive the anisotropic conduction equation . in principle , there are different possible approaches .we briefly repeat the immediate requirements for the resulting scheme : * unchanged isotropic conduction if the magnetic field is parallel to the temperature gradient , * strong suppression of energy transfer via conduction if the magnetic field is perpendicular to the temperature gradient , * scaling of the suppression factor inverse with the magnetic field strength to some power .the first two requirements are easily fulfilled by multiplying the projection of the magnetic field onto the temperature gradient to the conduction coefficient this approach has the advantage that it requires barely any change in the existing numerical scheme presented in and does not cost much additional computation time . however , it does not fulfil the third requirement . herewe have no possibility to introduce a scaling of the suppression dependent on the actual strength of the magnetic field .we therefore would have to assume a sufficiently strong magnetic field , which can not be guaranteed in the whole computational domain at all times .problems arise especially in combination with a magnetic seeding mechanism when no initial magnetic field is present .instead we follow a different derivation by splitting up the conduction equation into a part parallel and a part perpendicular to the magnetic field and assign different conduction coefficients to both parts }}.\ ] ] with being the normalised magnetic field vector .please note , that we can easily regain the isotropic equation from this by setting or .plugging in the relation between parallel and perpendicular diffusion we derived earlier , it can be seen that all of our requirements are fulfilled .we reshuffle the terms for better handling .}}\label{eqconductionfinalsecondapproach}\ ] ] from section [ anisotropicthermalconduction ] we know that within galaxy clusters mainly . however , we can not simply neglect the second term along the temperature gradient . 
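Before turning to the question of which term may be neglected, the splitting itself can be stated compactly in code. The sketch below evaluates the continuum heat flux for given parallel and perpendicular coefficients; it is not the SPH discretisation developed in the next section, and the example vectors are arbitrary choices.

```python
import numpy as np

def anisotropic_heat_flux(grad_T, b_unit, kappa_par, kappa_perp):
    """q = -kappa_par (b.gradT) b - kappa_perp [gradT - (b.gradT) b];
    setting kappa_perp = kappa_par recovers the isotropic flux."""
    par = np.dot(b_unit, grad_T) * b_unit       # component along the field
    return -(kappa_par * par + kappa_perp * (grad_T - par))

grad_T = np.array([1.0, 0.0, 0.0])              # temperature gradient along x
for b in ([1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]):
    b = np.asarray(b) / np.linalg.norm(b)
    print(b, anisotropic_heat_flux(grad_T, b, kappa_par=1.0, kappa_perp=0.0))
```

With kappa_perp = 0 the parallel-field case reproduces the isotropic flux and the perpendicular-field case gives zero flux, which are exactly the first two requirements listed above.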
comparing the absolute values of the two termswe see , that except of the first term contains a which can be arbitrarily small and make both terms comparable in magnitude .if the magnetic field and the energy gradient are almost totally perpendicular , the second term dominates and can not be neglected .in this section we derive our numerical representation of anisotropic thermal conduction . before transforming eq .( [ eqconductionfinalsecondapproach ] ) into sph formalism we note that the second term can be handled similar to the isotropic implementation just with a different coefficient . for detailswe refer to . herewe discuss only on the first term .initially , the term suggests a split - up of the calculation of temperature gradient and divergence .however , this requires a large amount of additional computation time , since it needs an additional sph loop , but also leads to further numerical errors due to the effective second kernel derivative introduced ( see ) .furthermore , chaining sph discretisations can introduce strongly growing numerical errors and should be avoided . instead , we derive a consistent formulation with only one sph loop following the example of , who developed an sph scheme for a similar diffusion equation in radiative transfer . in the following calculations latin indices like , and always denote particles while greek indices like , indicate components of tensors .before we start discretising the modified conduction equation , we have to find a better estimate for mixed second derivatives .the derivation is at first similar to the one presented by , but gets more complicated since we also need mixed derivatives .consider an arbitrary quantity at which we expand around with the distance vector .we multiply both sides by and integrate over .the first order term vanishes due to antisymmetry of the integrand and we solve for the second order term .more detailed calculations are presented in appendix [ appendixsecondorder ] .assuming q is a second order tensor quantity , we can rewrite the equation to }}\bmath{\nabla}_i { \ensuremath{w_{ij}}}}{{\ensuremath{\left| { \ensuremath{\bmath{x}_{ij}}}\right|}}^2 } \label{eqmixedsecondderivativeestimatefirst}\ ] ] and }}\bmath{\nabla}_i { \ensuremath{w_{ij}}}}{{\ensuremath{\left| { \ensuremath{\bmath{x}_{ij}}}\right|}}^2 } \label{eqmixedsecondderivativeestimatesecond}\ ] ] with the substituted tensor this is a very compact and neat formulation and we use to check this formula for consistency .consider .then we get putting this into eq .( [ eqmixedsecondderivativeestimatefirst ] ) or ( [ eqmixedsecondderivativeestimatesecond ] ) we recover the result which can be obtained for the isotropic implementation , where only non - mixed second derivatives are needed : before we further analyse the properties of these approximation formulas let us at first review our basic equation . as previously mentioned we consider only the part of eq .( [ eqconductionfinalsecondapproach ] ) parallel to the magnetic field ( the first term ) .the term conducting along the temperature gradient can be handled isotropically , which is described in .we start by writing the equation in component form : }}. 
\label{eqanisoconductioncomponents}\ ] ] furthermore , we define the components of a tensor as next , we write the equation only in terms of mixed second derivatives : now we use eq .( [ eqmixedsecondderivativeestimatefirst ] ) and ( [ eqmixedsecondderivativeestimatesecond ] ) to estimate the second derivatives in eq .( [ eqconductionanisosecondderivatives ] ) .re - factoring the terms leads to a compact expression for particle _i _ : }}\bmath{\nabla}_i { \ensuremath{w_{ij}}}.\ ] ] finally , we discretise the integral and rewrite the temperature to specific internal energy : }}\bmath{\nabla}_i { \ensuremath{w_{ij}}}\label{eqansiofinalfinal } \end{array}\ ] ] with the mean molecular mass and the adiabatic index .this equation allows us to calculate the effects of anisotropic conduction without an additional sph loop .one convenient property is that we managed to generate the term like in the isotropic conduction case .this ensures only conduction if the temperatures of two particles differ and the sign takes care of the heat flux direction .there might still be a problem with this approximative formula . to ensure the correct flow of internal energy from hot to cold ( according to the second law of thermodynamics ) the tensor must be positive definite . however , from the definition of a variable with tilde ( eq .( [ eqtildevariabledef ] ) ) we see , that this tensor does not necessarily fulfil this condition . fora very anisotropic setup heat flows in the wrong direction .in addition to a violation of classical thermodynamics this leads to numerical instabilities . to overcome this problem we have basically two options which are both artificial andtherefore might negatively influence onto our discretisation formula in general : 1 . implement a limiter in the code , which checks for non physical heat flows .2 . change the tensor to a more isotropic version , which is always positive definite . since it is more straight forward and computationally cheaper we follow and use the second option : we add an isotropic component to the anisotropic tensor in order to prevent temperature flowing from cold to warm regions .we already have a pure isotropic component which is however proportional to and it is not clear if this is already sufficient .therefore , we do not forget about this _ fully anisotropic _ formulation , but further investigate the behaviour in our tests and cluster simulations in the next sections .we add an artificial isotropic component and replace the tensor by calculations carried out by show that we need to set .we use the minimum value to prevent a large error in the estimate .this leads to , which is computationally very cheap since we have to compute for each particle , anyway .we call this formulation the _ isotropised _ discretisation. we can check , that itself is positive definite by diagonalising it : similar problems of non physical heat fluxes arise also in grid code solutions and are not an intrinsic problem of sph formulations ( see e.g. * ? ? ?finally , we address the time integration of the resulting equation . show how to apply a symmetry enforcing finite difference scheme .this is a fairly simple and computationally cheap approach , however , they conclude that a kernel averaging of the temperature is required to suppress the effects of small - scale noise in the temperature distribution .therefore , an additional sph loop is required , which greatly increases the computational cost . in contrast , considered an implicit integration scheme . 
that requires again an additional sph loop but has the advantage of much more accurate results for larger conduction time steps , therefore reducing the computational cost .they chose the so called conjugate gradient ( cg ) method which is basically an algorithm to solve a matrix inversion problem .instead of fully inverting the matrix , it is also used as an iterative approximation method with very good convergence properties .a detailed discussion of the algorithmic properties is for example given by .due to the advantages in overall computational cost and precision we use the conjugate gradient method to solve the anisotropic conduction equation . atfirst , we need to ensure , that our equation suffices the requirements of the cg solver . to discuss the properties consider the following equation which we want to solve for the vector : for the algorithm to succeed , we place a few constraints on the matrix : it needs to be real , symmetric and positive definite .because the equations do not contain imaginary parts , the first condition is always fulfilled .symmetry corresponds to conservation of energy , which should always be fulfilled for an energy transport scheme .if this was not the case self - consistently from the derivation , we would have to symmetrise the result afterwards .derived above have to be multiplied by .] we see if this property is fulfilled after writing down the equation explicitly as a matrix inversion problem .the positive definiteness can be argued as follows : in the continuous limit the matrix becomes diagonal .positive definite for a symmetric and real matrix means that the eigenvalues are positive . in our casethis corresponds to heat being transported only anti - parallel to the temperature gradient following the 2nd law of thermodynamics .we argued about positive definiteness already in section [ finalnumerics ] : the fully anisotropic formulation can violate this condition , which can therefore lead to non - physical heat flows as well as numerical instabilities , since the cg method in principle requires it to be given .the isotropised version is constructed such that it definitely fulfils positive definiteness .now we show how to write eq .( [ eqansiofinalfinal ] ) in cg formalism .discretising the timestep using we get for the part along the magnetic field lines with we can then write this as the matrix equation with : * * * now , we check again if the energy is conserved properly ( except for numerical errors ) . and therefore and are symmetric , which is exactly the property we identified with energy conservation .for the isotropised version we get the same equations just without the tilde above each and therefore same argumentation holds .next , we carry out several tests for the different implementations .we use rather simple test cases for which we can verify the behaviour analytically , before we apply it to a physically challenging problem like galaxy cluster evolution . for all of the following tests we use initial conditions with `` glass - like '' particle distributions .therefore , we rule out any alignment effects which arise by the definition of a grid .even if the test setups could be done in one or two dimensions , we perform all tests in a fully three dimensional set - up .furthermore , we run the simulations with gas only and disable any accelerations on the sph particles which would come from self - gravity or the mhd equations . 
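Returning briefly to the implicit update: one backward-Euler conduction step amounts to solving a single symmetric, positive-definite linear system, which is precisely what the conjugate gradient method requires. The toy sketch below uses minus a graph Laplacian of a 1d particle chain in place of the actual SPH conduction matrix; it only illustrates the structure of the solve and the fact that symmetry of the matrix translates into conservation of the total internal energy.

```python
import numpy as np
from scipy.sparse.linalg import cg

def implicit_conduction_step(A, u, dt):
    """Backward-Euler step: solve (I - dt*A) u_new = u_old with conjugate
    gradients.  A symmetric encodes energy conservation; I - dt*A must be
    positive definite for CG to apply."""
    M = np.eye(len(u)) - dt * A
    u_new, info = cg(M, u, x0=u)
    assert info == 0, "CG did not converge"
    return u_new

# toy 'conduction matrix': minus the graph Laplacian of a 1d particle chain
n = 200
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1.0;  L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0;  L[i + 1, i] -= 1.0

u = np.where(np.arange(n) < n // 2, 2.0, 1.0)    # temperature step
u1 = implicit_conduction_step(-L, u, dt=5.0)
print(u.sum(), u1.sum())                         # total energy is unchanged
```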
with this approach ,we ensure that hydrodynamical properties like the density and the internal energy are computed correctly in their respective sph loops , but we evolve only the conduction equation to thoroughly test the behaviour of our implementations .we always start by describing a test case and the derivation of an analytic solution . we are only be able to derive an analytic solution for a constant conduction coefficient , which we enforce in our code for the test problems instead of using spitzer conduction .afterwards we show the behaviour of the existing code ( i.e. isotropic conduction ) with a reference run and further present our results with the new anisotropic approaches ( fully anisotropic and isotropised ) . finally , we present a more complicated test , where we allow a temperature dependent spitzer conduction and check the influence of different prescriptions for perpendicular suppression . at first , we reproduce the first test of and slightly modify it , so that we can apply it to the new anisotropic conduction implementation .the basic idea is to set - up a temperature step and let the particles exchange heat .we fix the particle positions ( and also the magnetic field , which we add later ) and therefore only evolve the conduction equation . also considering a fixed conduction coefficient instead of spitzer conduction we can pull the out of the divergence andget this simplified conduction equation can be solved analytically ( depending on the initial conditions ) and we compare to the simulation results .we assume a gas with constant density and use with the specific heat capacity .we rewrite eq .( [ eqsimplifiedconductioneq ] ) to with the so called thermal diffusivity , which is simply a diffusion coefficient , as discussed in section [ anisotropicthermalconduction ] .for this temperature step problem it is sufficient to solve the equation in one dimension .the more general solution can be inferred later and basically differs only in some pre - factors .following and this equation can be solved through fourier transformation . for detailsplease see appendix [ appendixtemperaturestep ] .we describe the initial internal energy distribution with the following step function : with being the position of the temperature step , the height and the mean value .we get in total at first , we cross check our calculations with the existing implementation of isotropic conduction . the resultis shown in fig .[ figconduction1a ] .the sph particles are directly plotted as black points without any binning or additional smoothing .the result matches well with the analytic solution .therefore , the existing implementation works even for sudden temperature jumps . ) ) , the black dots are sph particles . both solutions match very well . ]the next step is to include a magnetic field into the test problem to check the new anisotropic implementation . 
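Since the explicit analytic expression was stripped from the text above, we note that the Fourier-transform solution of the 1d diffusion equation with a step initial condition is the usual error-function profile, which can be evaluated as follows (the parameter names and default values are ours):

```python
import numpy as np
from scipy.special import erf

def step_solution(x, t, chi, x0=0.0, u_mean=1.5, du=1.0):
    """Solution of du/dt = chi d2u/dx2 for a step initial condition
    (u_mean + du/2 for x < x0, u_mean - du/2 for x > x0)."""
    if t == 0:
        return np.where(x < x0, u_mean + du / 2, u_mean - du / 2)
    return u_mean - 0.5 * du * erf((x - x0) / (2.0 * np.sqrt(chi * t)))

x = np.linspace(-1.0, 1.0, 9)
print(step_solution(x, t=0.01, chi=1.0))
```

For the anisotropic runs discussed next, the same profile applies with chi replaced by an effective chi cos^2(theta) when a uniform field makes an angle theta with the gradient, assuming full suppression of the perpendicular component.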
for simplicitywe keep the magnetic field fixed .we introduce a homogeneous field in direction .hence , there is an angle of between the -field and the energy gradient , which is in our set - up parallel to the -axis .the results of the fully anisotropic conduction version are plotted in fig .[ figconduction1all ] in the top left box .this implementation reproduces the analytic solution quite well ; however , we get more scatter than in the run with isotropic conduction .it is still not yet clear what the origin for this noise exactly is , but for a cosmological simulation this is of less importance , since thermal conduction is not the dominating effect modifying and the scatter is smoothed out .probably , this formulation does not ensure a positive definite transport matrix , which induces errors into the conjugate gradient solver .however , we see that for a magnetic field we do not need an artificial isotropisation to obtain a stable solution .the top right panel shows the results using the isotropised formulation .clearly , we never get the exact analytic result , since the isotropisation is artificially added into the numerics , but our result is close to the real solution .in contrast to the fully anisotropic formulation we get less scatter since the anisotropic part of the equation is mixed with an isotropic component and therefore has a weaker effect .furthermore , we ensure positive definiteness of the transport matrix which guarantees stability of the algorithm .so far we performed all tests with a magnetic field to the temperature gradient . to exclude the arbitrariness of this choice and to study in more detail the different implementations we carry out the tests also with two other setups : * a magnetic field along the temperature gradient to check if the isotropic case can be recovered with the new code at sufficient accuracy . * the other extreme case of a magnetic field perpendicular to the temperature gradient to see if the different implementations really recover total suppression of heat flux . in the middle row of fig .[ figconduction1all ] we show the results for a parallel magnetic field again for both implementations .the fully anisotropic implementation recovers the analytic solution very well , however , with some noise .the amount of noise is about the same as with a the diagonal magnetic field .since we have no difference to isotropic conduction in the case of a parallel magnetic field we see , that the noise can not origin from computational instability due to a non positive definite transport matrix . in comparison , we find less scatter butthe same expected offset in the isotropised run as before . in the bottom rowwe show similar plots for a magnetic field perpendicular to the temperature gradient . from our preconditionswe expect no conduction in this case , so the initial conditions should stay constant except for numerical noise . for the fully anisotropic derivation we find a rather stable solution , as expected . 
however , we encounter the regime , where the anisotropy is strong enough for heat to flow in the wrong direction .we expected that behaviour and therefore , implemented the isotropised variation .we do not expect to see this behaviour any more in simulations , where other processes are included and immediately damp the instabilities .the isotropised approach can by construction not show a stable solution for this setup : the anisotropic part may be suppressed , but the isotropic part continues to work independently of the magnetic field .we find that while the conduction parallel to magnetic field lines is damped , we gain an amplification of the perpendicular component .these violate the initial assumptions of our derivation for the sake of enforcing a physical heat flux .we consider different test setups to find out , which formulation should be used . at last , we check whether our implementation reproduces the common idea to approximate anisotropic conduction by an isotropic implementation damped by a factor of .we imprint a random magnetic field onto our initial problems and fit the analytic solution with fitting parameter to it .we find factors of about 0.33 at different times matching our expectations .the fully anisotropic approach usually shows slightly smaller values than the isotropised version , however the difference is very small .we emphasize that besides scatter , the shape of the analytic solution is reproduced very well .we conclude that unphysical errors like in the bottom left part of fig .[ figconduction1all ] will probably not arise in simulations with turbulent magnetic fields . since the temperature step test contained an artificial discontinuity we test the code also with a similar setup but taking a smooth temperature distribution .following we take a sinusoidal temperature distribution at . at first, we derive the analytic solution for the initial conditions : with a generic wavenumber .the result is : assuming periodic boundary conditions we need to add an initial offset to prevent negative energies : we chose the arbitrary values of erg / g and erg / g .including a magnetic field we expect a reduced conduction with coefficient we perform this test with both implementations of anisotropic conduction and three magnetic field configurations with , and to the -axis .the results are shown in fig .[ figconduction2 ] . 
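For the sinusoidal test the analytic solution is a single decaying Fourier mode, so the reduced conduction along a uniform field at angle theta can be written down directly; the cos^2(theta) factor is an assumption consistent with the b(b·∇T) form of the fully anisotropic flux and with the stripped coefficient referred to above.

```python
import numpy as np

def sinusoidal_solution(x, t, chi, k, theta, offset=2.0, amp=1.0):
    """Decay of a sinusoidal perturbation when conduction acts only along a
    uniform field at angle theta to the gradient: chi_eff = chi cos^2(theta)."""
    chi_eff = chi * np.cos(theta)**2
    return offset + amp * np.sin(k * x) * np.exp(-chi_eff * k**2 * t)

x = np.linspace(0.0, 2.0 * np.pi, 5)
for theta in (0.0, np.pi / 4, np.pi / 2):
    print(round(np.degrees(theta)), sinusoidal_solution(x, t=0.5, chi=1.0, k=1.0, theta=theta))
```

The three printed cases correspond to the parallel, diagonal and perpendicular field configurations shown in fig. [figconduction2].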
basically, we find a similar behaviour to the temperature step problem: the isotropised implementation always shows an offset from the analytic solution (weaker conduction parallel and stronger conduction perpendicular to the magnetic field), while the fully anisotropic implementation reproduces the solution very well. we emphasise two main differences to our previous results: the amount of scatter for the fully anisotropic implementation is similar to what we get for the isotropised run. since there is no strong discontinuity in this setup, the amount of scatter is considerably lower than for the temperature step test. furthermore, we do not get any numerical artifacts in our results and even obtain full suppression of conduction with a perpendicular magnetic field for the fully anisotropic implementation. therefore, this approach produces the best results as long as there are no sudden temperature jumps. next, we test how the code behaves for a more complex scenario. similar to the second test from we set up a sphere of hot gas. we use spherically symmetric initial conditions for the internal energy in the form of . for this test case we only show a qualitative comparison of the different runs, to see if the anisotropy is reproduced well. in fig. [figconduction3] we show our test results using isotropic conduction and both anisotropic approaches for a magnetic field in direction. the comparison clearly shows the different overall effect of the three implementations. the more anisotropy the approach contains, the lower the temperature decline in the inner region. additionally, we see the stronger anisotropy in the middle panel compared to the right one through the ellipticity of the resulting profile. in total, this agrees with our previous findings. again there are no spurious artifacts visible in any of the runs. we conclude that the fully anisotropic approach should be fairly unproblematic to use while it gives us more exact results according to the properties we formulated at the beginning. therefore, we consider only this formulation. finally, we again set up a temperature step problem, but now we investigate the behaviour of the suppression mechanism described in section [anisotropicthermalconduction]. we use typical values for temperature and density as they are found in hot regions of galaxy clusters and use a homogeneous magnetic field of the form with . the results are shown in fig. [figconduction4]. at the beginning we set up the temperature step at kpc. the generally expected behaviour is that the discontinuity propagates to the low temperature regime while the two levels close in on the mean temperature. we have run the different set-ups with either totally suppressed conduction perpendicular to the magnetic field or one of the two phenomenologically motivated scaling relations presented in section [anisotropicthermalconduction]. we can identify the following different behaviours when varying the magnetic field strength:
* : the magnetic field is strong enough to fully suppress perpendicular conduction, no matter which prescription we use.
* : the linear scaling relation results in an increased net conduction, while the quadratic scaling still suppresses perpendicular conduction strongly.
* : both prescriptions allow a certain amount of perpendicular conduction, however there is no clear relation between the two. the denominator of the suppression factor on the higher energy level is larger than one, which results in a stronger suppression when the factor is squared.
however, it is smaller than one for the low energy level. this is illustrated by fig. [figconduction4hist].
* : the relation between linear and quadratic scaling has fully flipped: while both allow for a lot of perpendicular conduction, we now get more net conduction with the quadratic formula.
* : the magnetic field is so weak that it cannot suppress perpendicular conduction any more with either of the discussed scaling relations.
(fig. [figconduction4hist] caption: particles at the higher plateau get a stronger suppression for the quadratic formalism, while particles at the lower plateau show the opposite behaviour.)
in total, we see that a proper treatment of perpendicular conduction is important mostly for very small magnetic field strengths. we cannot judge from this test which prescription is better, however, it is important to include a prescription if small magnetic fields require proper treatment. additionally, we note that even if we take into account only hot gas, the suppression is still also dependent on density, which means that also particles with stronger magnetic fields can require this proper description. after all tests we come to the following conclusions:
* the isotropised formulation for anisotropic conduction ensures that the solving algorithm is stable and does not lead to non-physical heat conduction. however, it violates the prerequisites we used to derive an anisotropic formulation.
* we find that the fully anisotropic formulation behaves sufficiently well for adequately smooth temperature distributions. since the degree of instability should be small in comparison to hydrodynamical effects, we only apply this formulation in our cosmological simulations.
* we have briefly investigated the effects of different scalings for perpendicular suppression and further inquire into their behaviour in simulations of galaxy clusters.
in this section we present zoomed-in re-simulations of massive coma-like galaxy clusters selected from large gpc-sized cosmological boxes, which assume a cosmology with parameters , , and . we select five clusters from the original set of simulations presented in to study the effect of thermal conduction within galaxy clusters. these galaxy clusters have virial masses of corresponding to virial radii of typically . calculated virial properties for all runs are listed in table [tabcluster]. more details about the selection of the galaxy clusters and the generation of the initial conditions can be found in . at first, we select one isolated and relaxed looking galaxy cluster ( ) to perform several simulations testing different settings for the implementation of thermal conduction. fig. [figmapsg5699754] displays projected temperature maps of 5 mpc wide and thick slices through the cluster, demonstrating the effect of thermal conduction on the temperature structure. the upper left panel shows the reference run without any thermal conduction. then, from left to right and top to bottom the suppression factor is reduced (i.e.
the conduction efficiency is increased) for the case of isotropic heat conduction. the last panel bottom right shows the result for anisotropic heat conduction, where we include the linearly scaling perpendicular suppression factor as displayed before (see section [anisotropicthermalconduction]). as shown already in , where similar simulations have been carried out with an earlier version of the implementation of isotropic heat conduction, we see that in such massive (and therefore hot) galaxy clusters, isotropic conduction has a strong effect on the temperature distribution. with less suppression of thermal conduction, more heat gets transported from the central part of the cluster to the outskirts and, even more dramatically visible, local temperature fluctuations get smoothed out. in contrast, the simulation with anisotropic heat conduction shows only a very mild smoothing of the temperature fluctuations compared to the control run. this can also be seen in fig. [figemissivity1], where we show the emissivity distributions for all the isotropic and one anisotropic run. the larger the isotropic conduction coefficient, the more the distribution is tailored around the mean temperature (e.g. the cluster gets more isothermal), while the peak increases and shifts to slightly higher temperatures.
(fig. [figmapsg5699754zoom] caption, partial: temperature maps ( 2.5 mpc) of the inner part of the relaxed cluster at . the upper left panel shows the simulation with isotropic conduction for . the other three maps show the runs with anisotropic thermal conduction for different treatment of the perpendicular case; see section [anisotropicthermalconduction].)
to investigate the effect of details in the different treatment of perpendicular conduction for the anisotropic heat conduction, fig. [figmapsg5699754zoom] shows temperature maps zooming onto the central 2.5 mpc of our test cluster. here we compare the isotropic thermal conduction with a suppression factor of with three anisotropic runs with different treatment of the perpendicular component: fully suppressed ( ), linear ( ) and quadratic ( ) proportionality to the magnetic field strength. it is clearly visible that the detailed choice of treatment of the perpendicular component has a quite significant effect on the outcome. still, none of the anisotropic runs shows such a strong smoothing of local temperature fluctuations as the isotropic conduction simulation, even if we allow for conduction to become rather isotropic for weak magnetic fields. it also makes a notable difference whether we use the linear or the quadratic formula to calculate the perpendicular suppression factor. this becomes even clearer when looking at the emissivity distributions for the different anisotropic runs shown in fig. [figemissivity2]. while including a perpendicular suppression coefficient proportional to shrinks the distribution a bit, since it contains overall more conduction, we see clearly a different picture for the case proportional to . we find that this prescription suppresses perpendicular conduction more strongly for most particles, which results in less conduction compared to the linear case. therefore, we see the emissivity distribution broadening again, even beyond the case with zero thermal conduction.
(fig. [figradialtempprofiles] caption, partial: the temperature profiles are normalized by the mean temperature within the virial radius for each respective run.)
a more quantitative analysis is shown in fig. [figradialtempprofiles], where the scaled, radial temperature profiles are presented in the upper panel.
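a minimal sketch of how such scaled profiles can be extracted from snapshot data is given below; the array names (pos, mass, temp, center, r_vir) and the binning are placeholders of our own choosing and not the actual analysis pipeline.

```python
# Sketch: mass-weighted radial temperature profile, normalised by the mean temperature inside
# the virial radius. Input arrays are hypothetical snapshot data, not actual Gadget output.
import numpy as np

def scaled_temperature_profile(pos, mass, temp, center, r_vir, n_bins=25):
    """pos: (N,3) gas positions, mass/temp: (N,), center: (3,), r_vir: scalar."""
    r = np.linalg.norm(pos - center, axis=1) / r_vir           # radii in units of r_vir
    inside = r < 1.0
    t_mean = np.average(temp[inside], weights=mass[inside])    # <T> within the virial radius

    bins = np.logspace(-2, 0.3, n_bins + 1)                    # ~0.01 ... 2 r_vir
    idx  = np.digitize(r, bins) - 1
    prof = np.full(n_bins, np.nan)
    for k in range(n_bins):
        sel = idx == k
        if sel.any():
            prof[k] = np.average(temp[sel], weights=mass[sel]) / t_mean
    centers = np.sqrt(bins[:-1] * bins[1:])                    # geometric bin centres
    return centers, prof
```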
here it can be clearly seen that the stronger we choose the isotropic conduction coefficient, the more internal energy is transported outwards beyond the virial radius. in agreement with previous studies in , isotropic thermal conduction at a level of 1/3 of the spitzer value already leads to an isothermal temperature distribution in the inner part of the galaxy cluster. regarding anisotropic conduction we include two runs, the one with full suppression perpendicular to magnetic field lines as well as the one using the linearly scaling suppression factor. while the totally suppressed run resembles almost zero level of isotropic thermal conduction, the temperature profile of the latter one lies somewhere between and for isotropic thermal conduction. the entropy profiles in the lower panel are in general more difficult to interpret. here, already the reference run builds up a significant entropy core (in line with what was reported for the non ideal mhd simulations in ). their shape is rather similar for all runs and, due to combined effects including the different implementations of thermal conduction, the trends are not easy to interpret and seem to depend on the local dynamical structures in the core of the galaxy cluster.
(fig. [figcomp1] caption, partial: temperature fluctuations as inferred from observations by compared with the ones predicted for the simulated relaxed cluster with the different treatments of thermal conduction, as labelled.)
finally, we can compare the simulated cluster to a sample of observations with _xmm-newton_ presented by , where they measured the width of the temperature fluctuations within the central part (e.g. within ) of a sample of galaxy clusters. fig. [figcomp1] shows these observational data points overplotted with results for our different implementations of thermal conduction within our simulations. please note that here we not only use the central galaxy cluster but also make use of a smaller galaxy cluster present within our simulation, which has a temperature of roughly 1 kev. for the case of isotropic conduction we see that the high isotropic conduction coefficients (e.g. and ) produce results which are below the observed temperature fluctuations for the high temperature system, similar to the findings in . for the low temperatures all implementations are consistent with the observations. in contrast, the anisotropic runs match the simulations without thermal conduction. interestingly, the simulation with the quadratic dependency of the suppression factors shows the largest temperature fluctuations, in line with the broader temperature distribution shown before. to reinforce the need for a proper treatment of perpendicular conduction, we display the range of suppression factors for all hot particles in one of our simulations in fig. [figsupfac] at several redshifts. since conduction scales strongly with the temperature of the plasma, we take into account only the most important contributors to thermal conduction, selecting particles within the hot atmosphere of groups and clusters by requiring their temperatures to exceed k. as the typical formation time of clusters and groups is around , the number of particles within this hot gas phase increases significantly until the redshift approaches . while either suppression formulation results in fairly low suppression factors for the bulk of particles, there are significant differences in the number of particles which have moderate suppression factors up to the regime of almost unsuppressed conduction.
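the flip between the two prescriptions can be illustrated with a toy calculation; we do not reproduce the exact plasma-physics expression here, and only assume the generic structure of a factor 1/d versus 1/d^2 with a denominator d set by the magnetic field strength and the local plasma state, as discussed in section [anisotropicthermalconduction].

```python
# Illustration only: generic ordering of a "linear" (1/d) versus "quadratic" (1/d**2)
# suppression factor. Values above one would be capped in practice; the toy only shows that
# the ordering flips when the denominator d crosses unity, as described in the text.
import numpy as np

d_values = np.array([0.5, 0.8, 2.0, 10.0])     # hypothetical denominators (low vs high plateau)
for d in d_values:
    f_lin, f_quad = 1.0 / d, 1.0 / d ** 2
    trend = "quadratic suppresses more" if f_quad < f_lin else "quadratic suppresses less"
    print(f"d = {d:5.2f}: f_lin = {f_lin:6.3f}, f_quad = {f_quad:7.3f} -> {trend}")
```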
furthermore, it seems that the quadratic formula produces in general lower factors and therefore less net conduction than the linear one. as we have already seen in our tests in section [tempstepperp], the two formulations show opposite behaviour in different regimes. this is driven by the distribution of the magnetization of particles. in the outer parts of clusters and groups there are many particles which never experienced supernova seeding events and can therefore have very low values of the magnetic field, while in the very high density regions in the cores of clusters and groups almost all particles have magnetic fields at g levels. still, particles sitting in extreme density peaks can also contribute to this opposite behaviour. to further investigate the effect of the different treatments of thermal conduction we select the same four coma-like galaxy clusters (_g0272097_, _g1657050_, _g4606589_ and _g6802296_) as in and simulate them with zero thermal conduction, isotropic conduction at a level of and anisotropic thermal conduction using the linear scaling for the perpendicular case. table [tabcluster] lists the general properties of the resulting galaxy clusters.
(table [tabcluster] caption: calculated virial masses [ /h], virial radii [mpc/h] and mass fractions of collapsed baryons (stars + gas with ) for all runs. different conduction settings do not alter the global halo properties significantly. the fraction of collapsed baryons seems to grow slightly with increasing net conduction. for further cross-reference we also list our initial conditions with the numbering used by .)
while the virial properties of the halo are basically unchanged, the amount of condensed baryons in the form of stars and cold gas changes with the treatment of thermal conduction. this fraction grows slightly with increasing net conduction, similar to previous findings, again indicating that thermal conduction alone is not able to prevent cooling in the centers of clusters. although, due to the inclusion of magnetic fields, the fraction of condensed baryons is smaller than in previous numerical studies, it is still larger than indicated by previous observations. however, more recent observational studies by indicate a significantly larger amount of stars in the central galaxies of clusters than previously thought. ultimately, including anisotropic thermal conduction does not seem to change the amount of cold baryons in the center of simulated galaxy clusters significantly. the respective temperature maps for all five clusters with the three settings for thermal conduction are shown in fig. [figmapsmoreics]. the four additional clusters show a very similar behaviour to what we saw before in the relaxed one. temperature is transported outwards with the isotropic conduction using , while substructures are strongly smoothed out, whereas the run with anisotropic conduction shows only mild smoothing of temperature fluctuations. one interesting aspect becomes clearly visible in fig. [figradialtempprofilesall], where we present the corresponding radial temperature profiles for the five clusters for the three different runs. again, we see that isotropic conduction leads to a significant flattening of the temperature profile embedding a cold core with varying size and moderate temperature.
the simulations without thermal conduction show a rising temperature profile towards the center with a much larger drop of temperature within the central core. the simulations with anisotropic thermal conduction, where we used the linear scaling for the perpendicular component, show a more bimodal temperature profile. some clusters show very similar temperature profiles to the simulations without any thermal conduction, some have a very pronounced cold core. the sample is much too small to draw robust conclusions, but this indicates that in the case of anisotropic thermal conduction the amount of heat transport varies strongly with the current dynamical state of the cluster and therefore might contribute to the observed bimodality of cool core and non cool core clusters. the temperature for all runs drops below the specific mean temperature of gas inside the virial radius at about 40 to 45 per cent of the virial radius.
(fig. [figcomp2] caption, partial: temperature fluctuations as inferred from observations by compared with the ones predicted for the set of simulated clusters, including also some less massive ones which are found within the high resolution region of the zoomed simulations. the different colors correspond to the simulations without thermal conduction (black), isotropic thermal conduction with (red) and anisotropic thermal conduction, where the perpendicular term is evaluated using the linear scaling (pink).)
finally, we compare the temperature fluctuations of the full set of our simulations to the _xmm-newton_ observations presented by . as before, besides the central massive cluster we also take other clusters found in the high resolution region into account, allowing us to also include objects with various temperatures, sampling the low temperature region. fig. [figcomp2] shows the comparison of the data with our simulations without thermal conduction, with isotropic conduction using and with anisotropic conduction including linear scaling for the perpendicular component.
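a minimal sketch of the kind of statistic entering this comparison is given below; the particle arrays, the aperture radius and the simple bremsstrahlung-like weighting are placeholders of our own choosing and not the exact estimator used for the observational data.

```python
# Sketch (hypothetical snapshot arrays): emission-weighted mean temperature and relative
# temperature-fluctuation width within a central aperture, as compared to the observations.
import numpy as np

def temperature_fluctuation_width(pos, temp, rho, center, r_aperture):
    """return emission-weighted mean temperature and relative width within r_aperture."""
    r   = np.linalg.norm(pos - center, axis=1)
    sel = r < r_aperture
    w   = rho[sel] ** 2 * np.sqrt(temp[sel])      # bremsstrahlung-like weight ~ rho^2 * T^0.5
    t   = temp[sel]
    t_mean  = np.average(t, weights=w)
    t_sigma = np.sqrt(np.average((t - t_mean) ** 2, weights=w))
    return t_mean, t_sigma / t_mean
```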
while the simulated clusters with isotropic conduction using fall significantly below the bulk of data points for clusters above 5 kev, the simulations without thermal conduction and with anisotropic thermal conduction seem to represent the observed data points reasonably well. still, a more clearly selected set of simulated galaxy clusters across the whole temperature range, as well as observations in the high temperature regime, are needed to draw more robust conclusions. we derive and discuss a numerical scheme for anisotropic thermal conduction in the presence of magnetic fields. we present a discretisation for sph and implement our new method into the cosmological simulation code gadget. we show a variety of standard tests as well as cosmological simulations of galaxy cluster formation with different choices of conduction parameters, where we combined the new conduction implementation with a supernova seeding scheme for the magnetic field (beck et al. 2013), leading to a self consistent evolution of magnetic fields within the cosmic structures. our numerical scheme for anisotropic conduction in sph solves the corresponding equations using a conjugate gradient solver, and therefore needs only a very small amount of extra computational effort in the mhd version of gadget. however, the straightforward derivation can violate the second law of thermodynamics in cases of strong anisotropies and large jumps in temperature. additionally, this can cause an unstable behaviour of the conjugate gradient solver in the presence of extremely sharp jumps of temperature. typically, this problem is solved by introducing a correction which ensures positive definiteness of the linear equation system by adding an artificial isotropic component. however, this correction can lead to significant, artificial heat flow perpendicular to the magnetic field, and it is therefore questionable if such a numerical correction is useful in a realistic environment, where it can hide the effect of anisotropic conduction. in general, for any realistic situation, our anisotropic implementation with fully suppressed perpendicular term is already stable enough so that we do not have to add an artificial, destructive isotropisation term. however, a closer look at the perpendicular conduction coefficient reveals that the amount of suppression can scale either with or with . depending on other plasma properties, these two scalings can have different relative effects on the amount of perpendicular heat transport. to test them, we perform cosmological simulations of the formation of galaxy clusters with different implementations of the perpendicular transport coefficients. we also compare the results with a fully isotropic implementation of heat transport for different values of the suppression of thermal conduction with respect to the classical spitzer value. our main results can be summarised as follows:
* temperature maps from simulated galaxy clusters show that isotropic thermal conduction not only transports heat outwards, but also smoothes small-scale features. anisotropic conduction seems to resemble isotropic transport with coefficients like ; however, prominent substructures in the temperature distribution survive due to insulation by magnetic field lines.
* radial temperature profiles change differently when applying anisotropic thermal conduction, depending on the dynamical evolution of the cluster. some profiles are very similar to those without thermal conduction, showing a rising profile with a large drop towards the center, while others show a very pronounced cool core. in contrast, isotropic conduction produces flattened, almost isothermal temperature profiles in the central regions.
* we show the relevance of a proper treatment of perpendicular conduction instead of only parallel transport. at all times we find a significant number of particles with temperatures k for which a non negligible perpendicular component is assigned. these particles sit either in regions with negligible magnetic field or at gas density peaks.
* we calculate emissivity distributions and compare to observed temperature fluctuations of . we find that simulations with either zero or anisotropic conduction reflect the observational data points best, whereas isotropic conduction with shows a clear lack of temperature fluctuations compared to the observational data points. in comparison, although clearly visible, the differences between the different descriptions for perpendicular conduction show only mild changes in the amount of temperature fluctuations. here a significant increase in the number of simulated galaxy clusters as well as many more observations of high temperature clusters will be needed to discriminate between them.
* we compare the fractions of cold gas and stars in different simulations and find a weak dependence on the conduction parameters. conduction seems not to play a key role in suppressing cooling in galaxy clusters, but due to the coupling of the suppression factors to the local dynamical state of the cluster, the anisotropic conduction might contribute to the observed bimodality of cool core and non cool core systems.
in conclusion, anisotropic thermal conduction is not a dynamically dominant process within galaxy clusters, but can influence the evolution of small-scale structure. in contrast to isotropic heat conduction, it produces a reasonable amount of temperature fluctuations compared to observations and still locally allows for transport of heat. in general, it comes only with a small amount of computational cost for cosmological spmhd codes and eliminates the need for a free efficiency parameter. in the future, a sample of cosmological boxes will allow a better statistical analysis of the detailed temperature structure and the role of anisotropic thermal conduction in galaxy clusters. this will help to gain further knowledge about the scaling relations of perpendicular conduction or the importance of small-scale plasma instabilities.
we thank volker springel for access to the developer version of gadget. aa gives special thanks to his colleagues max imgrund, marco huser and carsten uphoff for very useful discussions during the creation of this work. kd and amb are supported by the dfg research unit 1254 magnetisation of interstellar and intergalactic media and by the dfg cluster of excellence `origin and structure of the universe'. kd is supported by the sfb-transregio tr33 the dark universe.
andreon s. , 2010 , mnras , 407 , 263 avara m. j. , reynolds c. s. , bogdanovi t. , 2013 , apj , 773 , 171 balogh m. l. , pearce f. r. , bower r. g. , kay s. t. , 2001 , mnras , 326 , 1228 beck a. m. , lesch h. , dolag k. , kotarba h. , geng a. , stasyszyn f. a. , 2012 , mnras , 422 , 2152 beck a. m. , dolag k. , lesch h.
, and kronberg p. p. , 2013 ,mnras , 435 , 3575 binney j. & cowie l. l. , 1981 , apj , 247 , 464 bogdanovi t. and reynolds c. s. , balbus s. a. , parrish i. j. , 2009 , apj , 704 , 211 bonafede a. , dolag k. , stasyszyn f. , murante g. , borgani s. , 2011 , mnras , 418 , 2234 bregman j. n. , david l. p. , 1989 ,apj , 341 , 49 brookshaw l. , 1985 , asa proc . , 6 , 207 chandran b. d. g. , cowley s. c. , 1998 , phys . rev .lett . , 80 , 3077 cleary p. w. , monaghan j. j. , 1999 , j. comp ., 148 , 227 cowie l. l. , mckee c. f. , 1977 , apj , 211 , 135 dehnen w. , aly h. , 2012 , mnras , 425 , 1068 dolag k. , stasyszyn f. , 2009 , mnras , 398 , 1678 dolag k. , jubelgas m. , springel v. , borgani s. , rasia e. , 2004 , apj , 606 , l97 fabian a. c. , 1994 , ann . rev . of a&a , 32 , 277 fabian a. c. , 2002 , lighthouses of the universe : the most luminous celestial objects and their use for cosmology , springer - verlag , p. 24fabian a. c. , sanders j. s. , taylor g. b. , allen s. w. , crawford c. s. , johnstone r. m. , iwasawa k. , 2006 , mnras , 366 , 417 frank - kamenezki d. , vorlesungen ber plasmaphysik , veb deutscher verlag der wissenschaften frank k. a. , peterson j. r. , andersson k. , fabian a. c. , sanders j. s. , 2013 , apj , 764 , 46 golant v. e. , zhilinsky a. p. , sakharov , i. e. , fundamentals of plasma physics , john wiley & sons australia huba j. d. , 2011 , nrl plasma formulary jubelgas m. , springel v. , dolag k. , 2004 , mnras , 351 , 423 komarov s. v. , churazov e. m. , schekochihin a. a. , 2014 , mnras , 440 , 1153 kravtsov a. , vikhlinin a. , meshscheryakov a. , 2014 , arxiv:1401.7329 kronberg p. p. , 1994phys . , 57 , 325 landau l. d. , lifschitz e. m. , 2007 , lehrbuch der theoretischen physik vi hydrodynamik , harri deutsch gmbh lin y .- t . , mohr j. j. , stanford s. a. , 2003 , apj , 591 , 749 loeb a. , 2002 , new astron . , 7 , 279 narayan r. , medvedev m. v. , 2001 , apj , 562 , l192 owers m. s. , nulsen p. e. j. , couch w. j. , 2011 , apj , 741 , 122 owers m. s. , nulsen p. e. j. , couch w. j. , markevitch m. , 2009 , apj , 704 , 1349 parrish i. j. , stone j. m. , 2005 , apj , 633 , 334 peterson j. r. , fabian a. c. , 2006 , phys ., 427 , 1 peterson j. r. , kahn s. m. , paerels f. b. s. , kaastra j. s. , tamura t. , bleeker j. a. m. , ferrigno c. , jernigan j. g. , 2003 , apj , 590 , 207 petkova m. , springel v. , 2009 , mnras , 396 , 1383 pistiner s. , shaviv g. , 1996 , apj , 459 , 147 price d. j. , j. comp .phys . , 231 , 759 rasia e. , lau e. t. , borgani s. , nagai , d. , dolag k , avestruz c. , granato g. l. , mazzotta p. , murante g. , nelson k. , ragone - figueroa c. , apj , 791 , 96 rechester a. b. , rosenbluth m. n. , 1978 , phys .lett . , 40 , 38 rosner r. , tucker w. h. , 1989 , apj , 338 , 761 ruszkowski m. , lee d. , brggen m. , parrish , i. j. , oh s. p. , 2011, apj , 740 , 81 , saad y. , 2003 , iterative methods for sparse linear systems , 2nd edition , society for industrial and applied mathematics sarazin c. l. , 2008 , a pan - chromatic view of clusters of galaxies and the large - scale structure : gas dynamics in clusters of galaxies , p. 1 , springer netherlands sarazin c. , 1986 , x - ray emission from clusters of galaxies , rev .phys . , 58 , 1 sharma p. , hammett g. w. , 2007 , j. comp .phys . , 227 , 123 spitzer l. , 1956 , physics of fully ionized gas , interscience publishers spitzer l. , hrm r. , 1953 , phys . rev .lett . , 89 , 977 springel v. , 2005 , mnras , 364 , 1105 springel v. , hernquist l. 
, 2002 , mnras , 333 , 649 springel v. , hernquist l. , 2003 , mnras , 339 , 289 springel v. , yoshida n. , white s. d. m. , 2001 , new astron . , 6 , 79 taylor g. b. , gugliucci n. e. , fabian a. c. , sanders j. s. , gentile g. , allen s. w. , 2006 , mnras , 368 , 1500 tormen g. , moscardini l. , yoshida n. , 2004 , mnras , 350 , 1397 vazza f. , brunetti g. , gheller c. , brunino r. , 2010 , new astron . , 15 , 695 voigt l. m. , fabian a. c. , 2004 , mnras , 347 , 1130 zakamska n. l. , narayan r. , 2003 , apj , 582 , 162
we show the solution of the modified eq. ([eqqnumericstaylor]) solved for the second order term, which we need to compute the mixed second derivatives in the conduction equation. at first, the kernel derivative is expressed as . looking at the first order error term of the modified eq. ([eqqnumericstaylor]), we see that it vanishes, since for all possibilities of , and there is always at least one component where the integral vanishes because of an antisymmetric integrand. all indices range from 1 to 3, so there is always one component with an odd power of . the denominator and are even with respect to , so the integral vanishes. the next step is to calculate the integrals of the second order error term with a substitution . we distinguish between the following three cases, which we address one after another:
1. at least three indices are unequal
2. all indices are equal
3. the indices form two pairs, e.g. and
if at least three of the four indices are unequal, then there is at least one integration where the integrand contains only a single component. since the denominator and are even functions with respect to , the integrand for this component is in total an odd function, which vanishes when integrating over the whole (symmetric) domain. therefore, the integral is zero. if all indices are equal, we start the calculations by substituting the integration variable without further implications on the integration. then eq. ([eqsecondordererrorterm]) simplifies to , where we used the shorthand notation . since is only dependent on , we choose spherical coordinates for . we can arbitrarily choose the rotation of our coordinate system. for simplicity we let be along the -axis of the coordinate system. this results in . we can easily perform the and integrations and obtain . next, we perform a partial integration, where the boundary term vanishes, since the kernel is monotonically decreasing towards zero. it remains: because of the kernel normalisation condition. this last case can be calculated very similarly, except that we have to choose two indices which have to be unequal. we choose and . again, written in spherical coordinates we get . using the results from the -integration before, we calculate and the total result is: so basically is only non-zero if we have two pairs of indices. plugging everything back into the modified eq. ([eqqnumericstaylor]) gives: to infer a general behaviour we take a look at an example with . since we want to infer an approximation for second order mixed derivatives of , we have to linearly combine terms for different choices of and . it can be found that , and cyclic permutation for other second derivatives. a similar formula applies for mixed derivatives. consider for example . it is better to keep both parts separated to explicitly indicate the symmetry for simplicity in the assembly process. from this example we get directly and cyclic permutations.
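the angular averages that produce this two-pair structure can be cross-checked numerically; the short sketch below (independent of any sph code) estimates the fourth moment of an isotropically distributed unit vector and recovers the standard pattern (delta_ij delta_kl + delta_ik delta_jl + delta_il delta_jk)/15, which is non-zero only when the indices pair up.

```python
# Monte Carlo check of the fourth angular moment of an isotropic unit vector:
# <n_i n_j n_k n_l> = (d_ij d_kl + d_ik d_jl + d_il d_jk) / 15.
import numpy as np

rng = np.random.default_rng(1)
v   = rng.normal(size=(200_000, 3))
v  /= np.linalg.norm(v, axis=1, keepdims=True)

moment = np.einsum('ni,nj,nk,nl->ijkl', v, v, v, v, optimize=True) / v.shape[0]
d      = np.eye(3)
theory = (np.einsum('ij,kl->ijkl', d, d)
          + np.einsum('ik,jl->ijkl', d, d)
          + np.einsum('il,jk->ijkl', d, d)) / 15.0

print("xxyy:", moment[0, 0, 1, 1], "theory:", theory[0, 0, 1, 1])   # 1/15
print("xxxx:", moment[0, 0, 0, 0], "theory:", theory[0, 0, 0, 0])   # 3/15
print("xxxy:", moment[0, 0, 0, 1], "theory:", theory[0, 0, 0, 1])   # 0
```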
combining the equations ([eqsecondordernonmixed]) and ([eqsecondordermixed]) we see that all occur times a factor of 5/2 minus a trace term times 1/2. this finding is represented by the substitution eq. ([eqtildevariabledef]) in our final result.
we show the derivation of the analytic solution of the conduction equation for the temperature step test (section [temperaturestep]). we start with the fourier transformation of the specific internal energy in both directions: and the conduction equation expressed in fourier space is with the simple solution . using we express the unknown coefficient in terms of the initial condition in real space . we insert this result into the reverse fourier transformation (eq. ([eqfourierbacktrafo])) and obtain . at first, we perform the integration over . for this we rewrite the exponentials completing the square to bring them into gaussian form, which is a simple integration, and get . at this point we need to use the specific initial conditions of our problem. for the temperature step they are defined as . we split this expression into two parts: one which is multiplied by and one which is multiplied by . the term can be simply integrated, since it is again only a gaussian integral, which results in . for a little consistency check, consider : then we get , which is what we would expect for an isothermal region without any other effects than thermal conduction.
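for reference, the closed form that this derivation leads to for the step initial condition can be written compactly; the following is a sketch in our own notation (with $u_1$ and $u_2$ the two plateau values, $x_0$ the position of the step and $\chi$ the effective diffusivity), since the original symbols are not reproduced above:

$$
u(x,t) \;=\; \frac{u_1+u_2}{2} \;+\; \frac{u_1-u_2}{2}\,
\operatorname{erf}\!\left(\frac{x-x_0}{2\sqrt{\chi t}}\right),
$$

where $u_1$ denotes the plateau on the $x>x_0$ side. for $t\to 0$ this reproduces the initial step, and for $u_1=u_2$ it reduces to the isothermal consistency check above.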
we present an implementation of physically motivated thermal conduction including the anisotropic effects of magnetic fields for smoothed particle hydrodynamics (sph). the diffusion of charged particles, and therefore thermal conduction, proceeds mainly parallel to magnetic field lines and is suppressed perpendicular to the field depending on the local properties of the plasma. we derive the sph formalism for the anisotropic heat transport and solve the corresponding equation with an implicit conjugate gradient scheme. we discuss several issues of unphysical heat transport in the cases of extreme anisotropies or unmagnetized regions and present possible numerical workarounds. we implement our new algorithm into the cosmological n-body/sph simulation code gadget and study its behaviour in several fully three dimensional test cases. our test setups include step functions as well as smooth temperature distributions, allowing us to investigate the stability and accuracy of our scheme. in general, we reproduce the analytical solutions of our idealised test problems, and obtain good results in cosmological simulations of galaxy cluster formation. within galaxy clusters, the anisotropic conduction produces a net heat transport similar to an isotropic spitzer conduction model with an efficiency of one per cent. in contrast to isotropic conduction, our new formalism allows small-scale structure in the temperature distribution to remain stable, because of its thermal decoupling caused by surrounding magnetic field lines. compared to observations, isotropic conduction with more than 10 per cent of the spitzer value leads to an oversmoothed temperature distribution within clusters, while the results obtained with anisotropic thermal conduction reproduce the observed temperature fluctuations well. a proper treatment of heat transport including the suppression perpendicular to the magnetic field is crucial especially in the outskirts of clusters and also in high density regions. its connection to the local dynamical state of the cluster might also contribute to the observed bimodal distribution of cool core and non cool core clusters. our new scheme significantly advances the modelling of thermal conduction in numerical simulations and overall gives better results compared to observations. [firstpage] conduction - magnetic fields - methods: numerical - galaxies: clusters: intracluster medium
the amount of patience required to simulate exactly a nuclear magnetic resonance (nmr) spectrum of an -spin system scales approximately as . that much is rarely available, and considerable thought has consequently been given over the last decade to more efficient methods, particularly those that promise to achieve that objective in polynomial time. such algorithms do exist, but they make significant _a priori_ assumptions about the spin system evolution: it is usually assumed that the system stays weakly correlated for the duration of the experiment. outside the nmr community, significant progress was recently made with the development of tensor structured methods, all of which descend broadly from the density matrix renormalization group (dmrg) as well as matrix product state (mps) and matrix product operator (mpo) formalisms. typical applications of dmrg in condensed matter theory are 1d spin chains, with recent extensions to 2d lattices. dmrg has also been put to good use in electronic and nuclear structure theory, but magnetic resonance spectroscopy has so far received little attention: the spin systems encountered in the daily practice of nmr and epr (proteins, radicals, polynucleotides, polysaccharides) are irregular three-dimensional room-temperature networks with multiple interlocking loops in the spin coupling graph and no identical couplings. when the strict requirement for correct wavefunction phase during the very long (milliseconds to seconds) dissipative spin system trajectories is added to the list, time-domain dmrg methods are currently struggling. there are some biologically relevant cases, however, that may still be treated as linear chains: for the purposes of simulating simple backbone nmr experiments, protein side chains may often be ignored. this makes the corresponding spin system a weakly branched linear chain that is amenable to dmrg-type treatment. simple nmr experiments can also be reformulated as a matrix-inverse-times-vector problem in the frequency domain, for which efficient algorithms in tensor product formats have recently emerged. we report in this communication the behavior of the amen algorithm, applied to the solution of the nmr simulation problem in the frequency domain, as well as to the technical task of adding together, without loss of accuracy, tensor train representations of thousands of spin hamiltonian terms for a protein. having integrated the algorithms described below into _spinach_ (a large-scale magnetic resonance simulation library), we are reporting here the first exact quantum mechanical simulation of a liquid-state 1d nmr spectrum for a protein backbone spin system with several hundred coupled spins. beyond the physical assumptions made by chemists at the problem formulation stage and the controllable numerical rounding error of the tensor train format itself, there are no approximations. tensor product expressions appear naturally in spin dynamics because the state space of a multi-spin system is a direct product of the state spaces of the individual spins. a simple example is the nuclear zeeman interaction hamiltonian , where is the number of spins, is the applied magnetic field, are nuclear chemical shielding tensors, and the sum runs over all nuclei. cartesian components of nuclear spin operators have the following tensor product form , where denotes an identity matrix of appropriate dimension and pauli matrices occur at the -th position in the tensor product sequence.
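as a minimal illustration of this tensor product form (not spinach code; the offsets below are arbitrary demonstration numbers), the z operator of spin k and a zeeman-type hamiltonian can be assembled in a few lines, which also makes the exponential growth of the explicit matrices obvious:

```python
# Sketch: the z operator of spin k is a Kronecker product of 2x2 identities with a (scaled)
# Pauli matrix in the k-th position; a Zeeman-type Hamiltonian is a sum of such terms.
import numpy as np
from functools import reduce

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
unit    = np.eye(2)

def spin_z(k, n):
    """Sz operator (spin-1/2, hbar = 1) of nucleus k in an n-spin system."""
    factors = [unit] * n
    factors[k] = 0.5 * sigma_z
    return reduce(np.kron, factors)

offsets = [2.0, -1.3, 0.7]                                  # hypothetical offsets (rad/s)
H = sum(w * spin_z(k, len(offsets)) for k, w in enumerate(offsets))
print(H.shape)                                              # (8, 8): grows as 2**n with n spins
```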
this representation is known in numerical linear algebra as the canonical polyadic (cp) format. although cp representations have been known in magnetic resonance spectroscopy for a long time, they suffer in practice from rapid inflation: spin hamiltonians encountered in nmr and esr (electron spin resonance) systems can be complicated and, even for simple initial conditions, the number of terms in the canonical decomposition increases rapidly during system evolution. more ominously, the number of cp terms can change dramatically after small perturbations of the hamiltonian or the system state. a simple example is , where the left hand side of this equation contains direct product terms, given by eq. , but the expression approximating it on the right hand side has only two direct product terms, and one could be tempted to use it to reduce storage and cpu time. however, both terms of the approximation grow to infinity when , and the accuracy is lost due to rounding errors. such instabilities in the cp format make it difficult to use in finite precision arithmetic: the number of terms in the decomposition quickly becomes equal to the dimension of the full state space and any efficiency savings disappear. unlike the cp format, which is an _open_ tensor network, _closed_ tensor network formats are stable to small perturbations. the most popular closed tensor network format was repeatedly rediscovered and is currently known under three different names: dmrg in condensed-matter physics, mps/mpo in computational physics, and tt (tensor train) in numerical linear algebra. a tensor train is defined, using the standard notation of numerical linear algebra, as follows: . the tt representation of the total operator in eq. is similar to the high dimensional laplacian: with and . the number of terms in each summation (known as _bond dimension_, or _tt rank_) is two, and the number of entries of the decomposition is now bounded. the tt representation of in eq. has single-spin operators, each of which is either zero, or identity, or the pauli matrix ; the cp representation of in eq. has such operators. the tensor train representation is clearly more memory efficient. another notable example is the zz coupling hamiltonian that often makes an appearance in models of simple linear spin chains: . as written, this is a cp format with terms and single-spin operators entering direct products. the corresponding tt representation is ; here each summation runs over three terms only, and the total number of single-spin operator matrices appearing in is , much fewer than for the cp format in eq. . storage requirements of tensor structured representations (both cp and tt) stand in sharp contrast with the classical approach to magnetic resonance simulations, where the hamiltonian is represented as a sparse matrix with all non-zero entries stored in memory. as soon as the matrix is assembled, cpu and memory resources grow exponentially with the number of spins, making the simulation prohibitively difficult for large systems. tensor structured methods avoid this problem (it is known colloquially as _the curse of dimensionality_) by keeping all data in compressed formats of the form given in eqs.
and , and manipulating it without ever opening up the kronecker products. a very considerable body of literature exists on manipulating expressions directly in tensor product formats. in particular, a given matrix may be converted into the tt format using sequential singular value decompositions. given tensors in the tt format, one can perform linear or bilinear operations (addition, element-wise multiplication, matrix-vector multiplication), fourier transform, and convolution directly in the tt format, avoiding exponentially large arrays and computational costs. these developments would have permitted large scale magnetic resonance simulations entirely in the tt format, were it not for a significant obstacle: the summation operation in tensor train representations is an expensive procedure that carries a significant accuracy penalty, due to the need to re-compress the representation to keep the bond dimensions low. spin hamiltonians of practically interesting biological systems contain many thousands of one- and two-spin terms of the kind shown in eq. . intermediate expressions in spin dynamics simulations also frequently involve large sums. we demonstrate below that in those circumstances the standard bundle-and-recompress tensor network summation procedure leads either to bond dimension expansion beyond the limits of modern computing hardware, or to a catastrophic accuracy loss. this problem also occurs with three dimensional potentials encountered in electronic structure theory. here we propose an alternative algorithm for computing large sums, based on alternating tensor train optimization, and use it to enable nmr simulations on protein-size spin systems. the fully and labelled protein human ubiquitin (pdb code 1d3z, figure [fig:ubiq]), containing over a thousand magnetic nuclei in 76 amino acid residues, was chosen for testing purposes with two types of spin subsystem selection: _backbone_ (h, n, c, ca, ha) and _extended backbone_ (h, n, c, ca, cb, ha, hb). both cases involve a weakly branched continuous chain of spin-spin couplings and are encountered in the simulation of a large class of protein backbone nmr experiments that map out the protein bonding network and thereby assist in molecular structure determination: hnco, hncoca, hnca, and hsqc. the isotropic nmr hamiltonian was assembled using chemical shift values from the bmrb database and -couplings from the literature data. in the cases where an experimental value of a particular -coupling was not available in the literature, it was estimated based on the known values for structurally similar substances. for most nmr simulation purposes, and certainly for the purpose of demonstrating the performance of the tensor train algorithm, the accuracy of such coupling estimates (about ) is sufficient. the raw data for the magnetic couplings used in this work is available in the example set supplied with the current public version of the _spinach_ library. nmr experiments were performed at on a varian inova mhz ( tesla) spectrometer equipped with a z gradient triple resonance cryogenic probe, using a mm sample of uniformly and labelled human ubiquitin in . spectra were collected as 2d hsqc spectra incorporating gradient enhanced coherence selection and water flip-back. the spectra were recorded with acquisition times of ( , ) and ( , ).
during the evolution period, and couplings were either allowed to evolve, or decoupled by insertion of a rectangular or a shaped inversion pulse using the central lobe of the sinc function. during acquisition, nuclei were either evolved or decoupled using a ppm broadband wurst sequence. the liquid state nmr hamiltonian of , labelled ubiquitin is: , where canonical nmr spectroscopy notation is used, index runs over all nuclei, and indices run over pairs of nuclei that belong to the same isotope, and run over pairs of nuclei that belong to different isotopes, and run over the nuclei influenced by radiofrequency pulses, and are time profiles of those pulses, are offset frequencies arising from the chemical shielding of the corresponding nuclei, are ``strong'' nmr -couplings, are ``weak'' nmr -couplings, and spin operators are defined by eq. . in the case of the extended ubiquitin backbone, the hamiltonian in eq. contains shielding terms, coupling terms, and radiofrequency terms. all calculations reported below were performed by extending the functionality of the _spinach_ library to the tensor train formalism and interfacing it to _tt-toolbox_ where appropriate. due to the abundance of complicated multi-pulse nmr experiments with time-dependent hamiltonians, magnetic resonance simulations are generally carried out in the time domain. they always require long-term evolution trajectories with accurate phases (at least ms, much longer than the reciprocal hamiltonian norm) for the density operator under the liouville von neumann equation:
\begin{split}
\frac{d\hrho(t)}{dt} &= -i\left[\hh(t),\,\hrho(t)\right] + \hhr\left(\hrho(t)-\hrho_{\mathrm{eq}}\right), \\
o(t) &= \left\langle \ho \,\big|\, \hrho(t) \right\rangle = \trace\left[\ho^\dagger \hrho(t)\right], \\
\hrho_{\mathrm{eq}} &= \frac{\exp\left(-\hh/k_B T\right)}{\trace\exp\left(-\hh/k_B T\right)},
\end{split}
where is the relaxation superoperator (a model with literature values for relaxation times was used in the present work), is the thermal equilibrium state, and is the observable operator, usually a sum of or operators on the spins of interest. in very simple cases where the hamiltonian is not time-dependent, the general solution to eq. can be written as:
o(t) = \left\langle \ho \,\big|\, \exp\!\left[\left(-i\hat{\hat{H}} - \hhr\right)t\right] \,\big|\, \hrho_0 \right\rangle,
where $\hat{\hat{H}}$ is the hamiltonian commutation superoperator. direct time domain evaluation of this equation in tensor train format, either using explicit operator exponentiation or krylov type propagation techniques, does not appear to be possible: in all cases described by eq. the ranks in the tensor train expansion quickly grow beyond the capacity of modern computers. increasing the singular value cut-off threshold at the representation compression stage leads to catastrophic loss of accuracy. fortunately, there are simple cases (most notably pulse-acquire 1d nmr spectroscopy) where amplitudes at only a few specific frequencies are actually required for the fourier transform of eq.
, meaning that the problem can be reformulated in the frequency domain: that is, to compute the observable at the point in the frequency domain, we need to solve a linear system . the problem formulation in eq. sacrifices a great deal of generality compared to eq. (simulation of arbitrary nmr pulse sequences is no longer possible), but it does serve as a stepping stone and enables the demonstration calculation presented below. the dmrg algorithm was initially proposed to find the ground state of a hermitian matrix by the minimization of the rayleigh quotient . the dynamical dmrg algorithm was then developed to find the solution of a linear system with a hermitian positive definite matrix by the minimization of the energy function . apart from the change of the minimization target function, the two algorithms are similar. in the dmrg formalism the solution is sought in the form of a tensor train introduced in eq. , but the minimization over all cores simultaneously is a complicated non-linear problem. to make the procedure feasible, it is replaced by a sequence of optimizations carried out over one core at a time: the tt format is linear in all cores. this fact may be expressed as , where the frame matrix maps the parameters of the tt core to the vector . the linearity allows us to rewrite eq. as , where is the energy function for the _local problem_ with and . using the non-uniqueness of the tensor train representation, one can always construct the representation with a unitary frame matrix , which guarantees the stability of the local problem. such a choice is known as a _gauge condition_ in the mps literature, and as the _canonical form_ in the dmrg literature. after the solution is computed, we substitute in the tensor train, and continue for and then back and forth along the chain. the convergence of the above described _one-site_ dmrg procedure depends on the initial guess and in particular on the initial choice of the tt ranks, because they remain the same during the sequence of updates defined by eq. . this is a severe restriction, and additional measures are therefore taken to _adapt_ the tt ranks during the computations. one way to do that is to replace the optimization over single cores by the optimization over pairs of neighboring cores, and then to adapt the tt rank between them. another possibility is to expand the search space by adding auxiliary directions. the first method of the latter type is the _corrected one-site_ dmrg algorithm, which targets, in addition to , a surrogate of the next krylov vector. for the solution of linear systems, the _alternating minimum energy_ (amen) algorithm was recently proposed, which also uses an additional direction to adapt tensor train ranks. the local optimization step in amen is carried out over one site only. to adapt tt ranks and improve convergence, tt blocks are expanded by auxiliary information; the _enrichment_ introduces new directions in the subspace spanned by . a good choice of the enrichment is the component of the tt representation (exact or approximate) of the residual . the amen algorithm is as fast as one-site methods, but as rank adaptive as the two-site dmrg algorithm, and demonstrates comparable or better convergence rates. for the solution of a linear system with a hermitian positive definite matrix, it has a proven global bound on the geometrical convergence rate.
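to make the frequency-domain formulation above concrete before returning to the solver details, the following toy sketch solves the corresponding linear system explicitly for a system small enough to store the full liouvillian. all conventions here (vectorisation order, sign of the frequency axis, normalisation, the specific offsets and coupling) are our own illustrative choices, not those of spinach or of the equations quoted in the text.

```python
# Toy frequency-domain spectrum: one linear solve per frequency point, dense matrices,
# two spin-1/2 nuclei, non-selective damping relaxation, total Sx as initial and detection state.
import numpy as np
from functools import reduce

def op(single, k, n):
    """embed a single-spin operator at position k of an n-spin register."""
    eye = np.eye(2, dtype=complex)
    return reduce(np.kron, [single if i == k else eye for i in range(n)])

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

n    = 2
w_cs = 2 * np.pi * np.array([10.0, 30.0])      # hypothetical offset frequencies (rad/s)
j12  = 2.0                                     # hypothetical scalar coupling (Hz)
H    = sum(w_cs[k] * op(sz, k, n) for k in range(n)) \
       + 2 * np.pi * j12 * op(sz, 0, n) @ op(sz, 1, n)

dim = 2 ** n
# commutation superoperator for C-order (row-major) vectorisation; H is real symmetric here
L = np.kron(H, np.eye(dim)) - np.kron(np.eye(dim), H.T)
R = 2.0 * np.eye(dim ** 2)                     # non-selective damping relaxation (1/s)

O     = sum(op(sx, k, n) for k in range(n))    # total Sx: detection state and initial state
rho0  = O.flatten()
vec_o = O.flatten()

freqs = np.linspace(0.0, 2 * np.pi * 40.0, 800)
spec  = np.array([np.real(np.vdot(vec_o,
                  np.linalg.solve(1j * (w * np.eye(dim ** 2) + L) + R, rho0)))
                  for w in freqs])
# absorption-mode doublets appear near w_cs[0] +/- pi*j12 and w_cs[1] +/- pi*j12
```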
unlike the corrected one-site dmrg method, the amen algorithm is stable to perturbations and free from tuning parameters and heuristics. the rank adaptation strategy in the enrichment phase of amen is determined by a single relative accuracy parameter . in this work we use the amen algorithm for two purposes. first, we apply it to a system with a trivial matrix but a complicated right-hand side, which is a sum of many elementary tensors like the one in eq. . this allows us to compress a hamiltonian returned by the _spinach_ package from the cp format given by eq. into the tt format of eq. . the hamiltonian is stretched into a vector, and the target functional is a frobenius-norm distance between the given hamiltonian and the hamiltonian sought in the tensor train format. the one-site optimization in eq. is effectively the solution of the over-determined linear system using the least squares method. for the unitary frame matrix we have , and therefore the local optimization step is obtained by contracting the frame matrix with the given hamiltonian . the enrichment step uses a low-rank approximation of the error , which is obtained by one-site dmrg optimization. after the hamiltonian is compressed into the tensor train format, we compute 1d nmr spectra by solving the linear system in eq. . since the matrix is not expected to be hermitian positive definite, we consider instead an equivalent symmetrized problem . for demonstration purposes, we chose a simple non-selective damping relaxation model and the same operator for the initial and the detection state, , where are the total spin operators of all nuclei in the system. this avoids explicit radiofrequency pulses and makes the hamiltonian in eq. time-independent and real-valued; those properties are also inherited by the commutation superoperator . since the detection state is also real-valued, the nmr spectrum in eq. can be computed from , which we obtain as follows: . this equation is solved by the amen algorithm at each point in the user-specified frequency interval. as discussed above, a major problem in the application of tensor train methods to magnetic resonance simulation of large systems is the calculation of lengthy sums involved in the construction of spin hamiltonians and density matrices, and their compression into the tt format. fig. [fig:time] illustrates the performance of our proposed solution to this problem in the case of the minimal (h, n, c, ca, ha) and extended (h, n, c, ca, ha, cb, hb) ubiquitin backbone spin systems. storage requirements for the tt format in eq. depend on all tt ranks (bond dimensions) and are characterized by the _effective_ tt rank defined by . it is clear from the left panels of fig. [fig:time] that the primary showstopper, rapid growth in the tensor train rank, has been removed by the amen method: the effective ranks stay below for the extended backbone and below for the minimal backbone, well within the capability of modern desktop workstations. since is smaller than the number of terms in the cp representation, the tt format with operators provides more compact storage than the cp format. the alternative to amen is _binary_ summation, which adds up hamiltonian terms pairwise and recompresses the representation after each addition. as demonstrated in fig. [fig:time], binary summation drives tensor train ranks up to several hundred and thereby makes the solution of the linear system in eq.
exceedingly difficult . it is clear from the right panels of fig . [ fig : time ] that the cpu time requirements of amen summation and binary summation are essentially the same , which makes the amen procedure clearly superior for all practical purposes . the resulting representation of the ubiquitin backbone spin hamiltonian matrix is , up to the rounding error of complex double precision arithmetic , exact . in magnetic resonance spectroscopy this is an unprecedented development : ubiquitin nmr simulation is currently just about feasible , with significant approximations and colossal computational resources . the tensor train representation is therefore a large step forward , even though eq . is not in general applicable to arbitrary nmr experiments .
[ caption of fig . [ fig : time ] : effective tt ranks and cpu time during the construction of the nmr spin hamiltonian . * top * human ubiquitin backbone ( h , n , c , ca , ha ) , * bottom * human ubiquitin extended backbone ( h , n , c , ca , ha , cb , hb ) . here refers to the isotropic part of the hamiltonian and to the irreducible spherical components of the anisotropic part . ]
[ caption of fig . [ fig:1d ] : * top * the amen spectrum is compared to the results obtained by the restricted state space ( rss ) approximation with basis containing local spin correlations of orders up to and . * bottom * an accurate rss computation is used as a reference and compared to the spectra computed by amen and dmrg , both using the accuracy parameter . right subgraph : convergence of amen and dmrg methods at two points of the frequency domain ( dashed lines : an off - peak point at ppm , solid lines : a peak at ppm ) . ]
after the hamiltonian is compressed , we compute pulse - acquire nmr spectra using eq . with the amen algorithm and compare them to the simulation produced by the restricted state space ( rss ) approximation , which is currently the only other method capable of handling nmr systems of this size . as demonstrated in fig . [ fig:1d ] ( top ) , when the basis set used by rss is increased , its result converges to the one produced by amen , and the relative deviation between the two methods falls below across the frequency interval . it is instructive to compare the results of amen simulations with those produced by the dynamical dmrg technique .
as shown in fig .[ fig:1d ] ( bottom ) , the nmr spectrum computed by amen matches the reference spectrum returned by rss with only minor deviations , while the accuracy of the result computed by the dynamical dmrg algorithm at the same relative accuracy parameter is unacceptable .dmrg does of course produce the right answer if a much tighter accuracy parameter is specified , but the simulation time goes up by several orders of magnitude .amen does therefore appear to have a better accuracy - to - effort ratio .this is also confirmed by the convergence graph of amen and dmrg , given in the same figure , where the relative deviation between the computed and the reference values is shown during the iterations ( sweeps ) for both dmrg and amen .note also that the inexact values of the spectrum , computed by amen and dmrg are always below the reference values ; this was first noted by jeckelmann .the comparison in fig .[ fig:1d ] is made using to visually emphasize the observed difference between the two methods ; the same conclusion also holds for more accurate calculations using due to the intrinsically low sensitivity of liquid state protein nmr spectroscopy , it is not possible to record the experimental equivalent of fig .[ fig:1d ] directly with a sufficient signal - to - noise ratio ; we have therefore taken a somewhat longer route to the experimental validation of the tensor train simulation fig .[ fig:2d ] shows experimental proton - detected hsqc spectra of ubiquitin , compared to the simulations obtained at the basis set limit of the rss formalism .perfect agreement is apparent in both cases .this provides an experimental evidence to the accuracy of the restricted state space method .the tensor train results in fig .[ fig:1d ] can now be justified by comparison to the rss results it is clear that the tt formalism performs as intended .the successful 1d nmr simulation notwithstanding , very significant obstacles remain on the path to practical applications of the tensor train formalism to nmr spectroscopy .the following issues should be addressed in future work to fully uncover the potential of the dmrg / mps / tt formalism for spin dynamics simulations : \(a ) the requirement for the spin system to be a chain or a tree should be lifted .biological magnetic resonance spin systems are irregular polycyclic interaction networks with multiple interlocking loops in the coupling graph , particularly in solid state nmr , where inter - nuclear dipolar couplings form very dense meshes . 
a generalization of tensor train algorithms to general contraction networks that fully mimic the molecular structure is therefore required .\(b ) rank explosion problem for time - domain simulations should be solved .it is clear from the success of the restricted state space approximation that the order of spin correlation in many evolving magnetic resonance spin systems either is or may safely be assumed to be quite low .this suggests data sparsity and separability , and indicates that some kind of low - rank decomposition is possible .one likely direction is through the enforcement of symmetries and conservation laws within the tensor train format itself during time evolution .\(c ) our experience indicates that tensor train objects are very far from being drop - in replacements for their matrix counterparts in standard simulation algorithms and software it does actually appear that nearly everything in the very considerable body of magnetic resonance simulation methods needs to be adapted to the realities of dmrg .current implementation of tensor product methods still requires a number of tuning parameters ( approximation accuracies , tt ranks of the enrichment , etc . ) .broad adoption of tensor network algorithms would require basic linear algebra operations to be handled transparently and seamlessly by the existing simulation software packages , in the same way as sparse matrices currently are .\(d ) transparent and clear tensor train approximation accuracy criteria , rank control and _ a priori _ error bounds should be developed in order to estimate the influence of the representation compression errors on the accuracy of the final result .this problem is particularly acute for the state vector phase in time domain simulations : magnetic resonance experiments rely critically on the phase being correctly predicted .all of that having been said , we are very optimistic about the future of low - rank tensor product dmrg / mps / tt methods , having also found them useful in fokker planck type formalisms related to nmr and epr spectroscopy .their primary strength is the lack of heuristic assumptions and the controllable nature of the representation accuracy .an experimental implementation of tensor train magnetic resonance simulation paths , via an interface to the _ tt - toolbox _ , is available in version 1.3.1980 of our _ spinach _ library .even with their well - documented limitations ( the requirement for the spin system to be close to a chain , difficulty with long - range time - domain simulations , code implementation challenges , etc . ) , the ability of tensor network formalisms to simulate simple liquid state nmr spectra of large spin systems essentially without approximations is impressive . they can not yet match the highly optimized dedicated methods developed by the magnetic resonance community , but if some of the limitations are lifted by the subsequent research , dmrg methods would have the potential to become a very useful formalism in nmr research .having solved in this paper the last purely technical problem on the way to the broad adoption of tensor train formalism in magnetic resonance spectroscopy , we are quite optimistic about its potential . in particular , the following avenues appear promising : 1 .generalizing amen method to arbitrary tensor networks , e.g. tree tensor networks , that closely match the coupling topology of the spin system .2 . 
development of reliable tensor train methods for solving linear systems of algebraic equations with indefinite matrices , and time evolution problems .development of tensor product methods that reduce memory requirements and accelerate convergence by enforcing conservation laws and matrix symmetries . elsewhere in magnetic resonance ,benefits to electron spin resonance spectroscopy , with its star - shaped spin interactions graphs , are likely to be harder to achieve , but may still be obtained by exploiting the direct product structure of combined spin and spatial dynamics appearing in fokker planck type problems .we are grateful to garnet k .- l .chan for patiently explaining the dmrg formalism to ik during his visit to cornell university and to zenawi t. welderufael for finding some of the less obvious ubiquitin -couplings in the literature .we acknowledge the iridis high performance computing facility , and the associated support services at the university of southampton .jmw would like to acknowledge the wellcome trust for support of the southampton nmr centre and we would like to thank the geoff kelly at nimr ( mill hill ) for lending us the ubiquitin sample . the project is supported by epsrc ( ep / h003789/2 , ep / j013080/1 ) . , http://dx.doi.org/10.1007/bf01874573 [ _ an efficient 3d nmr technique for correlating the proton and backbone amide resonances with the -carbon of the preceding residue in uniformly / enriched proteins _ ] , j. biomolecular nmr , 1 ( 1991 ) , pp . 99104. height 2pt depth -1.6pt width 23pt , http://arxiv.org/abs/1304.1222[_alternating minimal energy methods for linear systems in higher dimensions .part ii : faster algorithm and application to nonsymmetric systems _ ] , arxiv preprint 1304.1222 , 2013 .height 2pt depth -1.6pt width 23pt , http://arxiv.org/abs/1312.6542[_corrected one - site density matrix renormalization group and alternating minimal energy algorithm _ ] , in proc .enumath 2013 , accepted , 2014 . , http://dx.doi.org/10.1080/03081087.2012.663371[_exploiting matrix symmetries and physical symmetries in matrix product states and tensor trains _ ] , linear and multilinear algebra , 61 ( 2013 ) , pp .91122 . ,http://dx.doi.org/10.1021/ja00052a088[_pure absorption gradient enhanced heteronuclear single quantum correlation spectroscopy with improved sensitivity _ ] , j. am ., 114 ( 1992 ) , pp .1066310665 . , http://dx.doi.org/10.1137/110844830[_multilevel toeplitz matrices generated by tensor - structured vectors and convolution with logarithmic complexity _ ] , siam j. sci ., 35 ( 2013 ) , pp .a1511a1536 ., http://dx.doi.org/10.1063/1.3152576[_high-perfomance ab initio density matrix renormalization group method : applicability to large - scale multireference problems for metal compounds _ ] , j. chem ., 130 ( 2009 ) , p. 234114 . , http://dx.doi.org/10.1103/physrevb.49.9214[_approximate diagonalization using the density matrix renormalization - group method : a two - dimensional - systems perspective _ ] , phys .b , 49 ( 1994 ) , pp . 92149217 ., http://dx.doi.org/10.1021/ja003724j[_self-consistent karplus parametrization of couplings depending on the polypeptide side - chain torsion _ ] , j. am .soc . , 123 ( 2001 ) , pp .70817093 .height 2pt depth -1.6pt width 23pt , http://dx.doi.org/10.1016/j.aop.2010.09.012[_the density - matrix renormalization group in the age of matrix product states _ ] , annals of physics , 326 ( 2011 ) , pp .96192 . 
, http://dx.doi.org/10.1143/jpsj.66.2221 [ _ thermodynamics of the anisotropic heisenberg chain calculated + by the density matrix renormalization group method _ ] , j. phys .jpn . , 66 ( 1997 ) , pp .22212223 . , http://dx.doi.org/10.1146/annurev-conmatphys-020911-125018[_studying two - dimensional systems with the density matrix renormalization group _ ] , annual review of condensed matter physics , 3 ( 2012 ) , pp .111128 . , http://dx.doi.org/10.1021/ja9535524 [ _ determination of the backbone dihedral angles in human ubiquitin from reparametrized empirical karplus equations _ ] , j. am .soc . , 118 ( 1996 ) , pp .24832494 . , http://dx.doi.org/10.1103/physrevb.56.5061[_transfer-matrix density - matrix renormalization - group theory for thermodynamics of one - dimensional quantum systems _ ] , physb , 56 ( 1997 ) , pp . 50615064 . , http://dx.doi.org/10.1063/1.3700087[_longitudinal static optical properties of hydrogen chains : finite field extrapolations of matrix product state calculations _ ] , j. chem . phys . ,136 ( 2012 ) , p. 134110 ., http://dx.doi.org/10.1016/j.cpc.2014.01.019[_chemps2 : a free open - source spin - adapted implementation of the density matrix renormalization group for ab initio quantum chemistry _ ] , computer phys .comm . , ( 2014 ) .
we introduce a new method , based on alternating optimization , for compact representation of spin hamiltonians and solution of linear systems of algebraic equations in the tensor train format . we demonstrate the method s utility by simulating , without approximations , a nmr spectrum of ubiquitin a protein containing several hundred interacting nuclear spins . existing simulation algorithms for the spin system and the nmr experiment in question either require significant approximations or scale exponentially with the spin system size . we compare the proposed method to the _ spinach _ package that uses heuristic restricted state space techniques to achieve polynomial complexity scaling . when the spin system topology is close to a linear chain ( e.g. for the backbone of a protein ) , the tensor train representation is more compact and can be computed faster than the sparse representation using restricted state spaces . _ keywords : _ density matrix renormalization group , alternating minimal energy , tensor train , nuclear magnetic resonance , protein
bone age assessment typically involves estimating the age of a patient from a radiograph by quantifying the development of the bones of the non - dominant hand .it is used to evaluate whether a child s bones are developing at an acceptable rate , and to monitor whether certain treatments are affecting a patient s skeletal development . currently , this task is performed manually using an atlas based system such as greulich and pyle ( gp ) or a bone scoring method like tanner and whitehouse ( tw ) .atlas methods such as gp involve comparing the query image to a set of representative hand radiographs taken from subjects at a range of ages .scoring systems assign each bone to one of several predefined stages , then combine these stage classifications to form an age estimate .manual procedures are time consuming and often inaccurate .automated systems for bone age assessment have previously been proposed .these either attempt to recreate the tw or gp methods , or construct regression models for chronological age .our approach is modular and feature based , and can be used to either recreate tw scores or predict age directly . to predict tw bone stages we train a range of classifiers on three transformations of the outline .the first uses an ensemble technique described in that uses elastic distance measures directly on a one dimensional representation of the bone outline .the second technique finds discriminatory subsequences of the one dimensional series ( called shapelets ) through a transformation described in and constructs classifiers in the shapelet feature space .finally , we derive a set of summary shape features based on the tw descriptors .we conclude that the classifiers built on shape features are significantly better on at least one bone and provide greater explanatory power . to predict age directly , we perform linear and non linear regressions from the shape feature space to age . we evaluate this process on a data set of images taken from in the age range 218 .we show that , given the correct outline , we can accurately recreate tw stages and , using just three bones , can predict chronological age as accurately as clinical experts .this stepwise , feature driven approach to automated bone ageing is transparent and explicable to clinicians . by separating out the feature extraction from the segmentation and regression we retain the potential for quickly and simply constructing new models for regional populations .this offers the possibility of producing age estimates tailored to local demographics based on data stored locally in film free hospitals .the rest of this paper is structured as follows . in section [ sec : baa ] , we review the current manual methods , and describe previous attempts at automated bone age assessment . in section [ sec : extraction ] we describe how we format the segmented bones into outlines and shape features and in section [ classifiers ] we provide an overview of the classification and regression techniques we use to predict bone age . 
in sections [ sec :twstages ] and [ regression ] we present our results .finally , we discuss our conclusions and describe the future direction of this work in section [ sec : conclusions ] .bone age assessment is a task performed in hospitals worldwide on a daily basis .the skeletal development of the hand is most commonly assessed using one of two methods : greulich and pyle ( section [ sec : gp ] ) or tanner and whitehouse ( section [ sec : tw ] ) .the bone age estimate obtained by one of these methods is compared with the chronological age to determine if the skeletal development is abnormal .if there is a significant difference between the patient s bone age and chronological age then the paediatrician may , for example , diagnose the patient with a disorder of growth or maturation .the greulich and pyle ( gp ) method uses an atlas of representative hand radiographs taken from subjects at a range of ages .the latest ( second ) edition of the gp atlas was released in 1959 which included new images for four new age points .the final atlas consists of 31 standard radiographs of males from newborn to the age of 19 years and 27 standard radiographs of females from newborn to the age of 18 years . along with each standard, there is a piece of text describing the development . to use the atlas ,the clinician checks the patient s radiograph against each of the example radiographs of the appropriate sex .the key features to check are the development of the epiphysis ( the region at the end of particular bones ) and the presence of certain carpal bones .the age estimate is the age of the subject who provided the representative image selected as the closest by the clinician .this process is clearly somewhat subjective and a large variation between clinicians has been observed .the representative images are from a very restricted sample taken over 60 years ago , and variation , changes in diet , healthcare and culture may mean this sample is no longer representative .another criticism of gp is that the method implies an assumption that the ossification process happens in an linear fashion , which may not be true . 
in 1975 , tanner et al. published a scoring system commonly referred to as tw2 . two separate methods of calculating bone age are described . the first method uses the radius , ulna and short bones ( rus ) ( the short bones cover the metacarpals and phalanges of fingers one , three and five ) . the second method uses just the carpal bones . the rus method has been found to outperform the carpal bone technique and is easier to use . each bone has various stages associated with it and each stage has certain descriptors to use in the classification . table [ tab : twstage ] shows an image of each stage for the distal phalange of the middle finger with the associated criteria . once all bones have been awarded a score , these scores are summed to find the skeletal maturity score ( sms ) . the distribution of ages for the sms is described by a centile chart , from which a point estimate of bone age can be derived . the tw3 method was published in 2001 . the basic maturity stages and scores remained the same but the centile charts have been updated to adapt to the modern population .
[ table [ tab : twstage ] : the various tw stages of distal phalange three , with one image and the associated criteria per stage . ]
one of the benefits of adopting a linear regression model is the ease with which we can perform an exploratory analysis of feature relevance . the epiphysis models include 15 - 17 terms , including a large number of interactions . the stepwise procedure selects a model for the proximal phalange whose terms are indexed by the feature numbers given in table [ tab : features ] . our first observation is that the model consists almost exclusively of epiphyseal features ( the exception is the metaphysis to width ratio ) . this is true for the other two bones also . this implies that future image processing efforts should focus more on accurately extracting and summarising the epiphysis . secondly , the features epiphysis width and epiphysis distance to phalanx are common to all models and are the first to enter the stepwise forward selection . clearly they are the most important factors , and alone account for approximately 80% of the variability in age ( in a two - variable linear model ) . the model constructed on just the proximal phalangeal features epiphyseal width and epiphyseal distance to phalanx has an mae of 0.98 and an rmse of 1.67 . this implies that it would be very easy to construct a simple , practical model that gives a fairly accurate estimate based on two measurements that can be quickly performed by a non - specialist directly from the image . this offers the potential for screening for abnormality at very low cost . a further benefit of the linear model is that , if the regression assumptions hold , we can construct confidence and prediction intervals . this improves the utility of the model in the decision making process , because the decision of whether development is abnormal can be phrased as a hypothesis test in which the null hypothesis is that the difference between bone age and chronological age is zero . a linear model also offers a simple way of determining whether there are differences in the age model between populations . we address the question of whether the models for male and female subjects are significantly different by adding a factor to indicate sex .
with regressors and , where if the subject is male and if female , we find that we can not reject the null hypothesis that the coefficient is zero and the resulting model is if we fit a stepwise model , sex is the fourth variable to enter the model , and the interaction with is also significant . clearly , sex is a predictive variable and future models should include the term . the other demographic variable we have available is ethnicity . to test the significance of ethnicity ,we include for factors to model whether the subject was asian ( a ) , african - american ( f ) or hispanic ( h ) .the only significant factor we find is whether subject is asian .this is significant in the simplified model and in the stepwise model it is the second most important variable .clearly there is a different development process at work for the patients with asian ethnicity used in this study .if we include both sex and ethnicity , the _ dmp _ epiphysis model with three bones has a rmse of 0.855 ( compared to the human raters whose estimates had rmse of 0.89 ) .we describe three alternative approaches for using bone outlines to classify bone age stage and conclude that the shape features based on tw descriptors is the most appropriate .we then use these shape features to construct regression models of chronological age . the results for models predicting both tanner - whitehouse stage and chronological age are at least as good as those reported for other automated bone ageing systems .furthermore , with just three bones , we produce age estimates that are as accurate as expert human assessors using the whole image .data and code for each of these classification problems is available from .in addition to the predictive accuracy , there are several benefits of using feature based regression models .firstly , we can explain the importance of individual variables .just two variables account for about 80% of the variation in age , and this relationship offers the potential for fast screening for abnormalities . secondly ,when the regression assumptions are valid , we can construct confidence and prediction intervals .this can aid diagnosis and implies that increasing the training set size will incrementally improve the models ( by reducing the variance ) .finally , we can test the importance of alternative demographic variables and construct models tailored to specific populations .we demonstrate this through an assessment of sex and ethnicity to the model .this offers several interesting possibilities : film free hospitals could enhance the quality of the general model through including their own data ; geographic and demographic effects on bone development can be studied ; historic data could be mined to quantify the effects of development drugs .there are many potential applications for an accurate age model constructed on a diverse and expanding database of images .there are several obvious ways of improving our models .we shall include more bones and examine the effect of intensity features .we can investigate alternative segmentation and outline classification algorithms .we can attempt to screen for full maturity to improve the no epiphysis models .we can estimate age via the full tw methodology rather than directly .our conclusion from the research we have conducted to date is that the feature based system of separating the image processing from the age modelling is the best approach .it offers flexibility , transparency and produces accurate estimates .s. mahmoodi , b. sharif , e. chester , j. 
owen , r. lee , skeletal growth estimation using radiographic image processing and analysis , ieee transactions on information technology in biomedicine 4 ( 4 ) ( 2000 ) 292297 .r. bull , p. edwards , p. kemp , s. fry , i. hughes , bone age assessment : a large scale comparison of the greulich and pyle , and tanner and whitehouse ( tw2 ) methods , archives of disease in childhood 81 ( 2 ) .e. pietka , a. gertych , s. pospiech , f. cao , h. huang , v. gilsanz , computer - assisted bone age assessment : image preprocessing and epiphyseal / metaphyseal roi extraction , ieee transactions on medical imaging 20 ( 8) ( 2001 ) 715729 .
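as a minimal illustration of the two - feature linear model discussed above ( epiphysis width and epiphysis distance to phalanx ) , the following sketch fits ordinary least squares to synthetic data and reports mae , rmse and a rough prediction half - width ; the simulated features and coefficients are purely hypothetical and are not the model fitted in this paper .

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-ins for the two dominant shape features
n = 200
age = rng.uniform(2, 18, n)
f_width = 0.9 * age + rng.normal(0, 1.0, n)        # epiphysis width, grows with age (hypothetical)
f_dist = 6.0 - 0.3 * age + rng.normal(0, 0.8, n)   # distance to phalanx, shrinks with age (hypothetical)

X = np.column_stack([np.ones(n), f_width, f_dist])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)     # ordinary least squares fit
pred = X @ beta
resid = age - pred

mae = np.mean(np.abs(resid))
rmse = np.sqrt(np.mean(resid ** 2))
sigma = np.sqrt(resid @ resid / (n - X.shape[1]))  # residual standard error

print("coefficients:", beta.round(3))
print(f"MAE = {mae:.2f} years, RMSE = {rmse:.2f} years")
# approximate 95% prediction half-width for a new subject (leverage terms ignored)
print(f"approx. 95% prediction half-width: {1.96 * sigma:.2f} years")
```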
bone age assessment is a task performed daily in hospitals worldwide . this involves a clinician estimating the age of a patient from a radiograph of the non - dominant hand . our approach to automated bone age assessment is to modularise the algorithm into the following three stages : segment and verify hand outline ; segment and verify bones ; use the bone outlines to construct models of age . in this paper we address the final question : given outlines of bones , can we learn how to predict the bone age of the patient ? we examine two alternative approaches . firstly , we attempt to train classifiers on individual bones to predict the bone stage categories commonly used in bone ageing . secondly , we construct regression models to directly predict patient age . we demonstrate that models built on summary features of the bone outline perform better than those built using the one dimensional representation of the outline , and also do at least as well as other automated systems . we show that models constructed on just three bones are as accurate at predicting age as expert human assessors using the standard technique . we also demonstrate the utility of the model by quantifying the importance of ethnicity and sex on age development . our conclusion is that the feature based system of separating the image processing from the age modelling is the best approach for automated bone ageing , since it offers flexibility and transparency and produces accurate estimates . bone age assessment , automated tanner - whitehouse , shapelet , elastic ensemble
classical homology theory in topology is a way to describe , in algebraic terms , the presence and number of holes ( of some dimension ) in a given geometric shape or , more generally , a topological space . the precise definition and the foundations can be found in munkres . the simplest algebraic invariant in a given dimension is the betti number , which expresses the `` number of holes '' . the computations are simplest with coefficients from a finite field ; for our purposes we will always use the field with two elements . when the shape is a discrete collection of disjoint points , there is only 0-dimensional homology . but the points might approximate some interesting shape . for example , if the points are in the plane then they might be tracing out a circle which can be extrapolated by a person . in three dimensions , the points might lie densely on a sphere . in these cases , the circle has non - zero 1st - dimensional homology , and the sphere has non - zero 2nd - dimensional homology . persistent homology is an idea that allows us to recognize these homology classes from the given set of disjoint points . the articles give good introductions to this subject . i will only illustrate how this works in the plane with illustrations called bar - codes created using the program teamplex that i developed for this purpose , which extends the javaplex software library from the stanford applied topology research group . in the following pictures one can see the fattening of the data points that happens during the construction of the complex . the details of this construction are given in section . the emerging multiple intersections correspond to simplices in the complex . just for intuitive understanding , one can imagine solid disks merging together into a connected figure . the hole visible in figures 3 and 4 represents the 1-dimensional persistent class that we will soon see in the javaplex diagram . this whole process of fleshing out the circle from the eight points is easy to see in the 2-dimensional plane . two levels of complication appear in the application in this paper . even though the number of points , which represent players , will not increase that much ( there will be at most 20 players in the data sets ) , the number of properties of the players will increase to 12 . this number is the number of coordinates that describe the points , so the dimension of the space will increase from 2 to 12 . the properties of such data sets are impossible to visualize and predict . , _ from 5 to 13 : redefining the positions in basketball _ , 2012 eos / alpha award winning presentation at the 2012 mit sloan sports analytics conference .
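as a toy version of the barcode construction described above , the sketch below computes the dimension-0 persistence bars of the vietoris - rips filtration for a handful of points on a noisy circle , using a union - find over edges ordered by length ; the 1-dimensional class ( the hole ) requires the full rips complex as built by javaplex / teamplex and is not computed here , and the sample points are hypothetical .

```python
import numpy as np

def h0_barcode(points):
    """Dimension-0 persistence barcode of the Vietoris-Rips filtration.

    Each point is born at scale 0; a bar dies when its component merges
    into another one.  Returns (birth, death) pairs, with one infinite bar
    for the last surviving component.
    """
    n = len(points)
    # pairwise distances define the filtration value of each edge
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    bars = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            bars.append((0.0, eps))       # one component dies at this scale
            parent[ri] = rj
    bars.append((0.0, np.inf))            # the surviving component
    return bars

if __name__ == "__main__":
    # eight points roughly tracing a circle, as in the figures discussed above
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    pts = np.c_[np.cos(angles), np.sin(angles)] + 0.05 * np.random.default_rng(0).standard_normal((8, 2))
    for birth, death in h0_barcode(pts):
        print(f"[{birth:.2f}, {death if death != np.inf else 'inf'})")
```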
this paper applies the major computational tool from topological data analysis ( tda ) , persistent homology , to discover patterns in the data related to professional sports teams . i will use official game data from the north - american national hockey league ( nhl ) 2013 - 2014 season to discover the correlation between the composition of nhl teams with the currently preferred offensive performance markers . specifically , i develop and use the program teamplex ( based on the javaplex software library ) to generate the persistence bar - codes . teamplex is applied to players as data points in a multidimensional ( up to 12-d ) data space where each coordinate corresponds to a selected performance marker . the conclusion is that team s offensive performance ( measured by the popular characteristic used in nhl called the corsi number ) correlates with two bar - code characteristics : greater _ sparsity _ reflected in the longer bars in dimension 0 and lower _ tunneling _ reflected in the low number / length of the 1-dimensional classes . the methodology can be used by team managers in identifying deficiencies in the present composition of the team and analyzing player trades and acquisitions . we give an example of a proposed trade which should improve the corsi number of the team . the hockey world used to be old fashioned . managers would recruit players strictly from what they could see with the naked eye . now , hockey analytics is becoming an influential cog in managing many nhl teams . the toronto maple leafs have already hired an assistant manager who is a well - known proponent of data analytics . in the next five years , every team is predicted to have at least one `` stat analysis guru '' working with them . unlike baseball , where it is easy to have solid position - specific stats , hockey is a faster and more fluid game . hockey positions are much more dynamic and fluid , they are better described as roles that the players play . these are shifting roles between players , especially for forwards . the nhl keeps track of puck possession , turnovers for and against , hits for and against , shots blocked for and against , face - offs won , and scoring opportunities . all of that in addition to some standard stats that are easy to record such as shots on goal and , of course , goals scored , goals saved , scores of games , etc . the dallas stars general manager , jim nill , uses a computer program from the work of 100 college students to measure corsi numbers , turnovers , and scoring opportunities to generate statistical data for his team . the sheer size of the data available is impressive , as can be viewed at the hockey analytics website ` extraskater.com ` . even more data is publicly available on the official nhl website ` nhl.com ` . it is very unclear how to assemble this information into good use . only recording player stats does not give the manager the tools to assemble an effective team . the next step should be the development of tools to analyze this data . topology is a proper tool for taking information about individual players and generating conclusions about the team . this is known as a local - to - global transition . this is where the topological data analysis ( tda ) could be useful . this paper seems to be the first attempt to apply the major computational tool from tda , the persistent homology , to the data collected by the nhl .
the problem of discriminating among a given set of quantum states is an important part of many quantum communication protocols .the parameters of quantum communication channels and their security thresholds with respect to certain eavesdropping attacks are related to the maximally achievable quality of such discrimination , expressed by the accessible information associated with the signal states .unfortunately , except for rather simple cases , it is not possible to find explicitly the generalized measurements that maximize the accessed information .the optimality of a given detection scheme is often conjectured and numerical tools are employed to confirm or reject such a hypothesis . in this contributionwe consider quantum communication between alice and bob , where alice is using states comprising a quantum pyramid to encode her message .such a pyramid can be defined by requesting that all pairwise transition amplitudes between the edge states be the same .bob performs a measurement on the received quantum systems .based on the outcomes he tries to identify the received states and decode the message sent to him . naturally , bob seeks the best possible decoding strategy , i.e. he wants to optimize the measurement on his side so that the accessed information associated with the signal state is maximal in other words : he wishes to extract the accessible information .our choice of quantum pyramids for alice s signal states is motivated by the fact that discrimination among pyramidal states is regularly encountered in quantum cryptography and other quantum information problems , such as grover s search algorithm , and hence is of considerable practical interest . at the same time , the high inherent symmetry of quantum pyramids helps to keep the calculations relatively simple and permits to arrive at transparent results .historically , for some time it was thought that the optimal measurement strategy for pyramidal states is the well - known square - root measurement and even some security proofs for quantum cryptography were based on this reasonable conjecture .recently , however , the optimality of the square - root measurement has been invalidated for a range of parameters of acute pyramids , and also for obtuse pyramids with three edges and very small volume . in this contribution ,obtuse pyramids of any dimension are investigated , so that the optimal measurement strategies for pyramids of any shape and dimension is established .in particular , by exploiting the pyramidal symmetry , and using numerical methods for confirmation , a family of measurements is found that outperform the square - root measurement in terms of accessible information about the edges of obtuse pyramid with few edges and small volume .the paper is organized as follows .we start by introducing quantum pyramids in sec .[ sec : pyramids ] , and then discuss measurement schemes in general terms in sec .[ sec : info ] .section [ sec : srm ] deals with the square - root measurement , which we use as a benchmark .unambiguous discrimination is addressed in sec .[ sec : ud ] , and the maximization of the accessed information is the theme of sec . 
[sec : ims ] , where we recall the known results about acute pyramids and supplement them with new insights about obtuse pyramids .finally , we present a unified view that regards all occurring measurement schemes as particular cases of a general scheme and summarize our findings in a table .a quantum pyramid can be defined as a set of edge states denoted by normalized kets , , such that all pairwise transition amplitudes among these states are equal and real , where a simple geometrical picture is obtained by decomposing edge states in a fixed orthonormal basis so that they can be represented by vectors in real -dimensional space ; an example of a quantum pyramid with three edges , , and is shown in fig . [fig : pyramids ] .the `` height ket '' is the pyramidal axis of symmetry , and the kets span the -dimensional base of the pyramid .geometrically speaking , the base is itself a -dimensional pyramid with edges of length and an angle of between the edges .the volume enclosed by a pyramid with edges , regarded as an object in real -dimensional euclidean space , is given by the single factor refers to the height of the pyramid , and the factors refer to the base .pyramids may be classified according to the angle between the edge states .we call a pyramid _ acute _ if , _ orthogonal _ if , and _ obtuse _ if . the orthogonal pyramid ( ) has largest volume , and pyramids with or have very small volume .pyramids with , are narrow with nearly collinear edges ; for we have the degenerate case of an `` all height '' pyramid with no base .the opposite case is the extreme obtuse pyramid with , the `` all base '' pyramid with no height ; we call such a degenerate pyramid _ flat_. in fig .[ fig : pyramids ] , the plane pyramid with the three edges proportional to , , is an example of a flat pyramid . given a pyramid with edges , the corresponding orthogonal pyramid with edges can be constructed in accordance with which assumes the same orientation with respect to the shared axis of symmetry , see fig .[ fig : pyramids ] once more .an orthogonal pyramid may be _ lifted _ by adding a component along the height ket , these kets have a genuine dependence that we leave implicit , but the dependence on is only apparent inasmuch as is a unit ket . after normalizing them , the lifted edges make up pyramids whose parameters are such that , so that we get acute pyramids for , obtuse pyramids for , and the flat pyramid for . 
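a quick way to obtain concrete edge vectors with a prescribed common overlap , useful for checking the statements below numerically , is to take the symmetric square root of the gram matrix ; this construction is an assumption made for illustration and is not the explicit height - plus - base parametrization used in the text .

```python
import numpy as np

def pyramid_edges(n, overlap):
    """Return n unit vectors (rows) whose pairwise inner products all equal `overlap`.

    Such vectors exist precisely for -1/(n-1) <= overlap <= 1:
    overlap > 0 gives an acute pyramid, overlap = 0 an orthogonal one,
    overlap < 0 an obtuse one, and overlap = -1/(n-1) the flat pyramid.
    """
    gram = np.full((n, n), float(overlap))
    np.fill_diagonal(gram, 1.0)
    vals, vecs = np.linalg.eigh(gram)
    vals = np.clip(vals, 0.0, None)                  # guard against rounding at the flat limit
    return vecs @ np.diag(np.sqrt(vals)) @ vecs.T    # rows e_1, ..., e_n; Gram matrix = gram

if __name__ == "__main__":
    for label, c in [("acute", 0.5), ("orthogonal", 0.0), ("obtuse", -0.3), ("flat", -0.5)]:
        e = pyramid_edges(3, c)
        target = np.full((3, 3), c)
        np.fill_diagonal(target, 1.0)
        print(label, "max Gram-matrix error:", np.max(np.abs(e @ e.T - target)))
```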
for future reference , we note that for which is particular , inasmuch as the lifted pyramid for this value is dual to the original pyramid , where we need to exclude the degenerate no - volume pyramids with or . another , more important , observation is the equality which is central to what follows because it permits a decomposition of the identity operator as a sum of nonnegative operators . we have and which coincide for . the generic scenario of quantum communication is as follows . alice sends bob quantum systems prepared in states described by the normalized statistical operators , with prior probabilities of unit sum . bob examines the received systems one by one with the aid of a generalized measurement with outcomes , ; typically . the outcomes are nonnegative probability operators that decompose the identity , so that the joint probabilities of state being sent _ and _ outcome being detected have unit sum , as they should . the outcomes constitute a probability operator measurement ( pom ) in the physics jargon , whereas the acronym povm , short for positive operator - valued measure , is preferred in the more mathematically oriented literature . we side with the physicists . the information accessed by the pom in use is quantified by the mutual information between alice and bob , which in units of bits is given by where are the corresponding marginal probabilities . if bob chooses his pom such that the value of is maximized , he implements the _ information maximizing scheme _ ( ims ) and thus extracts the _ accessible _ information . quite a few properties of imss are known , but there are also many open questions ; a recent review of the matter is ref . the calculation of the accessible information involves a multidimensional nonlinear optimization of the mutual information with respect to the pom outcomes . this is a difficult problem and closed - form solutions are known only for situations in which there is much symmetry among the signal states ; we are adding one item to this list here . when such solutions are not at hand , one must often rely on a numerical search , possibly by the iteration method of ref . , for which an open - source code is available . the numerical methods are also useful for the verification or rejection of conjectured solutions . bob could have other objectives than extracting the accessible information . for instance , if he wishes to maximize his odds of winning bets on the state sent by alice , he implements the _ measurement for error minimization _ ( mem ) , which has as many outcomes as there are states and maximizes the probability of guessing the signal state right . as a necessary condition on the respective poms we have an operator identity that takes different forms for the ims and the mem , but there is an unfortunate lack of equally powerful sufficient conditions . yet another scheme is the _ measurement for unambiguous discrimination _ ( mud ) , for which a conclusive outcome implies with certainty which state was sent , whereas the remaining outcomes are inconclusive . unambiguous discrimination is not possible for arbitrary signal states and , as a rule , the poms needed for the ims , the mem , and the mud are not the same . specifically , we are here interested in symmetric communication with pyramid states , where and for .
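the mutual information accessed by a given pom can be evaluated directly from the table of joint probabilities . the sketch below does this for the square - root measurement on equiprobable pyramid states , with the edges built from the gram matrix as in the previous snippet ; it is only a numerical check of the kind of benchmark values discussed later , not the optimization code referred to in the text , and the tolerance used for the pseudo - inverse square root is an arbitrary choice .

```python
import numpy as np

def pyramid_edges(n, overlap):
    """Unit vectors (rows) with all pairwise overlaps equal to `overlap` (assumed construction)."""
    gram = np.full((n, n), float(overlap))
    np.fill_diagonal(gram, 1.0)
    vals, vecs = np.linalg.eigh(gram)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def srm_joint_probabilities(edges):
    """Joint probabilities p(j,k) for equiprobable pure states and the square-root measurement."""
    n = len(edges)
    rho = edges.T @ edges / n                        # average state (1/n) sum_j |e_j><e_j|
    vals, vecs = np.linalg.eigh(rho)
    inv_sqrt = vecs @ np.diag([v ** -0.5 if v > 1e-12 else 0.0 for v in vals]) @ vecs.T
    amp = edges @ inv_sqrt @ edges.T                 # <e_j| rho^{-1/2} |e_k>
    return np.abs(amp) ** 2 / n ** 2                 # p(j,k) = |<e_j|rho^{-1/2}|e_k>|^2 / n^2

def mutual_information_bits(joint):
    """I(A:B) in bits for a joint probability table."""
    pj = joint.sum(axis=1, keepdims=True)
    pk = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (pj * pk)[mask])))

if __name__ == "__main__":
    n = 3
    for label, c in [("acute", 0.5), ("orthogonal", 0.0), ("obtuse", -0.3)]:
        p = srm_joint_probabilities(pyramid_edges(n, c))
        print(f"{label:>10}: guess prob = {np.trace(p):.3f}, "
              f"accessed info = {mutual_information_bits(p):.3f} bits")
```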
owing to the high symmetry of this situation ,the poms for ims , mem , and mud can be stated quite explicitly , both for acute and obtuse pyramids .the symmetry at hand is , of course , the cyclic nature of the pyramid states : there is a unitary operator with period that cyclically permutes the pyramid states , explicit expressions for are where and are understood .the ensemble of equal - weight pyramid states is , therefore , an example of the `` group covariant case '' of section 11.4.4 in ref .there is a family of poms that are directly constructed from the signal states , their weights , and their weighted sum , which is the statistical operator for the state in which bob receives the quantum systems . for any operator that , the outcomes make up a pom with .in particular , if is chosen nonnegative , that is : , we have the so - called _ square root measurement _ ( srm ) , also known as the `` pretty good measurement '' because it often accesses a good fraction of the accessible information .all other permissible choices for are of the form with unitary and amount to an over - all unitary transformation of the srm .in the situation of present interest , the symmetric communication with pyramid states , the srm is the von neumann measurement composed of the projectors onto the edges of the orthogonal pyramid , and the srm coincides with the mem , the resulting odds for guessing the signal state right are which equals , , and for the no - base pyramid ( ) , the orthogonal pyramid ( ) , and the flat pyramid ( ) , respectively . for some time, it was conjectured that the srm is also identical with the ims but , as we shall see below , this is only the case for pyramids with sufficiently large volume . in view of this historical conjecture ,we use the information accessed by the srm , as a benchmark value . for , we have the maximal value of , as it should be , and the small - volume limiting values \displaystyle\frac{\log_2(n-1)}{n}\bigl(n-2 + 4\sqrt{(n-1)r_0}\,\bigr ) & \displaystyle r_0\ll\frac{1}{n } \end{array}\right.\ ] ] are worth noting as well .we achieve unambiguous discrimination with the aid of the poms ( [ 1stpom ] ) and ( [ 2ndpom ] ) for the particular value of ( [ dual ] ) .these poms have outcomes , of which the is inconclusive , \displaystyle { | \bar{e}_k \rangle}\frac{r_0}{r_1}{\langle \bar{e}_k | } & \displaystyle r_0<\frac{1}{n}<r_1 \end{array}\right\}\quad\mbox{for } \nonumber\\ \pi_{n+1}&=&\left\ { \begin{array}{c@{\ \mbox{for}\ }l } \displaystyle { | h \rangle}\frac{r_0-r_1}{r_0 ^ 2}{\langle h | } & \displaystyle r_0>\frac{1}{n}>r_1\\[2ex ] \displaystyle\frac{r_1-r_0}{r_1}\bigl(1-{| h \rangle}\frac{1}{r_0}{\langle h |}\bigr ) & \displaystyle r_0<\frac{1}{n}<r_1 \end{array}\right\}\,.\end{aligned}\ ] ] the probability that the unambiguous discrimination fails is given by the probability of getting the inconclusive outcome , 1-nr_0 & \displaystyle 0\le r_0<\frac{1}{n}\mbox{\ ( obtuse ) , } \end{array}\right.\ ] ] and , of course , there is no failure for the orthogonal pyramid with .the information accessed by the muds of ( [ mud ] ) is nr_0\log_2n & \mbox{for obtuse pyramids . 
}\end{array}\right.\ ] ] but this does not do full justice to the obtuse case , where the inconclusive outcome is proportional to the projector on the -dimensional pyramid base , and there is more information to be accessed by a decomposition of this into rank-1 outcomes .we introduce the normalized _ difference kets _ \bigr\rangle}=\bigl({| e_m \rangle}-{| e_n \rangle}\bigr)\frac{1}{\sqrt{2nr_1 } } \quad\mbox{with }\,,\ ] ] which are in number and offer a decomposition of the projector onto the pyramid base , } 1-{| h \rangle}\frac{1}{r_0}{\langle h |}=\frac{2}{n}\sum_{m < n}{\bigl| [ mn ] \bigr\rangle}{\bigl\langle [ mn ] \bigr|}\,.\ ] ] for obtuse pyramids , then , we can replace the outcome of ( [ mud ] ) by more informative outcomes in accordance with }\quad\mbox{with}\enskip \pi_{[mn]}={\bigl| [ mn ] \bigr\rangle}\frac{2(r_1-r_0)}{nr_1}{\bigl\langle [ mn ] \bigr|}\,.\ ] ] for these we have the joint probabilities }=\frac{r_1-r_0}{n}\bigl(\delta_{jm}+\delta_{jn}\bigr)\,,\ ] ] so that upon getting the r_1>r_0\,, ] and \bigr\rangle} ] .a unified view regards all poms as special cases of this decomposition of the identity : \bigr\rangle}{\bigl\langle [ mn ] \bigr|}=1\,,\ ] ] where the nonnegative weights , , and are subject to so that a particular pom is specified by stating the value of and the value of one of the three weights .any permissible choice of , , , and defines a pom with at most outcomes , one outcome for each summand in ( [ allpoms ] ) that carries a positive weight .the accessed information is \\ & + w_3(1-r_0)\log_2\frac{n}{2}\ , , \end{split}\ ] ] where we have no contribution from the inconclusive outcome with weight .[ tbl : summary ] as a summary , table [ tbl : summary ] lists the parameter values for the srm , the mud , and the ims . with regard to the historical conjecture that the square - root measurement extracts the accessible information , we conclude that this is true for pyramids with large volume , but not for small - volume pyramids that either are acute and have more than two edges or are obtuse and have three , four , five , or six edges .we are very grateful for the valuable discussions with dagomir kaszlikowski , ajay gopinathan , frederick willeboordse , shiang yong looi , and sergei kulik . j. wishes to thank for the kind hospitality received during his visits to singapore .this work was supported by grant msm6198959213 of the czech ministry of education , and by nus grant wbs : r-144 - 000 - 109 - 112 .centre for quantum technologies is a research centre of excellence funded by ministry of education and national research foundation of singapore .we regard the identity operator on the right - hand side of ( [ genpom ] ) as the projector on the joint range of the signal states and ignore the orthogonal complement of the hilbert space of kets , if there is one .j. suzuki , s. m. assad , and b .-englert , `` accessible information about quantum states : an open optimization problem , '' chapter 11 in _ mathematics of quantum computation and quantum technology _ , edited by g. chen , s. j. lomonaco , and l. kauffman ( chapman & hall / crc , boca raton 2007 ) , pp .309348 ; available at ` http://physics.nus.edu.sg/~phyebg/papers/135.pdf ` .
we consider a symmetric quantum communication scenario in which the signal states are edges of a quantum pyramid of arbitrary dimension and arbitrary shape , and all edge states are transmitted with the same probability . the receiver could employ different decoding strategies : he could minimize the error probability , or discriminate without ambiguity , or extract the accessible information . we state the optimal measurement scheme for each strategy . for large parameter ranges , the standard square - root measurement does not extract the information optimally . quantum state discrimination ; minimum - error measurement ; unambiguous discrimination ; accessible information
in this paper we study the unique invariant measure of the stochastically perturbed allen - cahn equation where is a one - dimensional order parameter defined for all non - negative times and . here is a formal expression denoting space - time white noise and is a symmetric double - well potential .the canonical choice for is although more general choices are possible ( see assumption [ ass : v ] below ) .we are interested in the properties of the invariant measure for large system sizes , it is well - known that for and fixed system size , the invariant measure of the allen - cahn equation concentrates on minimizers of the energy this follows from large deviation theory .in fact , even for system sizes that grow with , the same is true .indeed , in the second author proved this fact for for any .our main goal in the current paper is to go up to interval sizes that are _ exponential _ with respect to and , specifically , to understand the _ competition between energy and entropy _ that emerges in this regime .let us first consider the effect of on the measure .the intuition is that the invariant measure can be viewed as a gibbs measure with the given energy , i.e. , that it is in some heuristic sense proportional to the heuristic picture then says that , because of the potential term in the energy , functions supported on this measure are most likely to be close to one or the other minimum of on most of ] , we have \big ) \approx 1.\label{unif}\end{aligned}\ ] ] the theorem says that the probability of finding an up transition layer in a subinterval of length given a system size is approximately in the sense expressed in , independent of the location of the subinterval .( the existence of an up transition layer somewhere in the system is forced by the boundary conditions . ) in this sense , the layer locations are approximately uniformly distributed .the theorem is strongest when considering at the lower range of validity : it shows that the uniform distribution holds not only on macroscopic intervals but also down to the logarithmic scale .we remark that the uniform distribution of the layer location in our regime is very different from the characterization of the layer distribution in the case studied in ; see subsection [ ss : back ] below for more discussion .our approach for theorem [ t : layers ] relies on a simple idea .namely , while we can not use large deviation theory directly on , we can use the markovianity of the underlying reference measure to reduce to order - one subintervals on which we can .in particular , by taking large ( but order - one ) subintervals and conditioning on the boundary values of a larger , surrounding subinterval , we can take advantage of _ large deviation bounds _ with a cost that is _ to leading order independent of the subinterval size_. this method is similar in spirit to freidlin and wentzell s approach of calculating the expected exit time from a metastable domain for a diffusion process with small noise ( , see subsection [ ss : back ] for a more detailed account of the related literature ) . to illustrate the idea ,suppose that we want to estimate the probability that there is a transition layer contained within ] for decays exponentially with ( see lemma [ le : onept ] below ) . 
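for orientation , the dynamics whose invariant measure is studied here can be explored with a standard finite - difference euler - maruyama scheme . the sketch below assumes the gradient - flow scaling in which the invariant measure is formally proportional to exp ( -e ( u ) / epsilon ) , and the discretization of space - time white noise by independent gaussians with variance dt / h per node is the usual convention ; none of the numerical parameters are taken from this paper .

```python
import numpy as np

def simulate_allen_cahn(eps=0.05, length=10.0, nx=200, t_end=5.0, seed=0):
    """Explicit Euler-Maruyama scheme for du = (u_xx - V'(u)) dt + sqrt(2*eps) dW,
    with V(u) = (1 - u^2)^2 / 4 and boundary conditions u(0) = -1, u(L) = +1.
    """
    rng = np.random.default_rng(seed)
    h = length / nx
    dt = 0.2 * h ** 2                       # explicit scheme: dt must be O(h^2)
    x = np.linspace(0.0, length, nx + 1)
    u = np.tanh(x - length / 2)             # start near a single transition layer

    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h ** 2
        drift = lap - (u ** 3 - u)          # -V'(u) for the quartic double well
        noise = np.sqrt(2 * eps * dt / h) * rng.standard_normal(u.shape)
        u[1:-1] += dt * drift[1:-1] + noise[1:-1]
        u[0], u[-1] = -1.0, 1.0             # Dirichlet boundary conditions
    return x, u

if __name__ == "__main__":
    x, u = simulate_allen_cahn()
    layer = x[np.argmax(u > 0)]             # leftmost sign change marks the up layer
    print(f"approximate layer location after relaxation: x = {layer:.2f}")
```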
for boundary values within the compact set ] is approximately the same as that of finding the layer in any other interval ] into paths with a transition in ( or near ) ] , such hitting points exist with high probability .this fact is developed in lemmas [ l : smallu ] and [ l : hittingzero ] using an iterated rescaling argument and large deviation bounds . ( 1,0 ) ( 500,0 ) node[anchor = north east] ; ( 60,0 ) ( 120,0 ) ; ( 60,-.05 ) ( 60,0.05 ) ; ( 120,-.05 ) ( 120,0.05 ) ; ( 90,0.05 ) node[above ] ; ( 330,0 ) ( 390,0 ) ; ( 330,-.05 ) ( 330,0.05 ) ; ( 390,-.05 ) ( 390,0.05 ) ; ( 360,0.05 ) node[above ] ; plot file./graphs / p2g1.txt ; plot file ./graphs / p2g2.txt ; ( 250,.25 ) arc [ start angle= 30 , end angle=290 , x radius=1120pt , y radius=14 pt ] ; the study of the effect of a small noise on a physical system has a rich history in the chemistry , physics , and mathematics literature . with roots in the fluctuation theory of einstein ( 1910 ) , the path integral formulations of wiener ( 1930 ) and feynman ( 1948 ) lie at the heart of the large deviation theory for diffusion processes and the characterization of the corresponding invariant measure .one of the aspects to receive the most applied interest and significant mathematical attention is the question of the first exit time from a metastable basin .the exponential dependence of the mean exit time on the energy barrier goes back to vant hoff and arrhenius ( 1889 ) .refining this picture , the so - called kramers formula determines the prefactor in terms of the curvature of the potential at the critical points and was made famous in the 1940 paper by kramers , although the result ( for the overdamped dynamics ) had been derived as early as 1927 by farkas .see the review paper by hnggi , talkner , and borkovec for a thorough historical survey .the higher dimensional case was analyzed by landauer & swanson in 1961 and further pursued by langer ( see for instance , 1969 ) . in the mathematics literature ,metastability for diffusion processes that depend only on time ( i.e. 
, constant in space ) was explored early on in the paper by pontryagin , andronov , and vitt ( 1933 ) .the mathematical theory of large deviations was subsequently developed in the 1970s in papers by wentzell and freidlin ( see for instance ) and kifer , and a landmark text is the book of freidlin - wentzell ( published in russian in 1979 and first published in english in 1984 ) .on the level of the mean exit time , the freidlin - wentzell theory confirmed the exponential factor in the kramers formula .the prefactor in kramers law for was established via formal asymptotic expansions in the famous paper by matkowsky and schuss in 1977 .a rigorous derivation was given by sugiura in and independently and with a different method by bovier , eckhoff , gayrard , and klein .the small noise problem for stochastic _ partial _ differential equations appears more recently in the mathematics community .a seminal paper in extending the freidlin - wentzell theory to spatially varying diffusions is the paper of faris and jona - lasinio from 1982 , which specifically established and studied the action functional of the stochastic allen - cahn differential equation on a bounded system ] , set ) \colon u(-2\ell)=u_{- } \ , \text{and } u(2 \ell ) = u_+ \},\\\mathcal{a}_0^{\rm bc}&:=\{u\in \mathcal{a}^{\rm bc } \colon \text{for all , } u(x ) \in [ -1 + \delta , 1-\delta ] \}.\end{aligned}\ ] ] then we have the proof of lemma [ le : lazy ] is given in subsection [ ss : enlem ] .this lemma together with the large deviation bound from proposition [ pr : ld1 ] will imply that for small with respect to , the probability of finding such a layer is bounded above by which we can make negligible by choosing sufficiently large .now we would like to show that the exponential factor in the probability of finding a layer is close to , defined in . specifically, we expect it to be approximately the problem , which we already alluded to at the end of subsection [ ss : methods ] , is that the boundary values ( for instance , ) may make it likely to find a layer .hence , we will employ reflection operators to transform transition layers into events that are unlikely _ regardless of the boundary conditions_. we will call such events wasted excursions : [ def : wasted ] for any , we will say that has a wasted excursion on if there exist points such that and as described above for long transitions , we will estimate the probability of such events using the large deviation estimate from proposition [ pr : ld1 ] .we note that the proposition requires minimizing energy over a ball ( in the space of continuous functions ) around the set of interest .because of the way we have defined wasted excursions , a ball of radius around the set of functions with a excursion in a given interval is equal to the set of functions with a excursion in that interval . hence , our large deviation estimate together with an energetic estimate will bound the probability that we are after .the following lemma contains the necessary energetic estimate : namely , that the difference of energies described in our large deviation estimate is bounded below by plus a small term .[ l : cl ] there exists a constant such that for every and , there exists a constant with the following property . 
for any andany boundary conditions \delta^- ] , set )\colon u(\pm 2\ell)=u_{\pm}\}\\ \text{and}\quad & \mathcal{a}_{\delta , pre}^{\rm bc}\text { as above in}~\eqref{preset}.\end{aligned}\ ] ]define the optimal cost then we have we will need to consider some additional properties of the energy as we prove the main theorems , but we defer their discussion to a later time when their motivation and hypotheses will be clearer . with the central facts about the energy in hand , we now turn to the probabilistic background for our paper .in this section , we collect some probabilistic facts about the gaussian measures and the measures . after stating a precise definition and some elementary symmetry properties, we will discuss markov properties satisfied by these measures in subsection [ ss : gm ] and large deviation bounds in subsection [ ss : ld ] .for every , we denote by the distribution of a brownian bridge with homogeneous boundary conditions on ] such that , for all ] with vanishing boundary conditions equipped with the homogeneous scalar product indeed , the right - hand side of is the green s function for with dirichlet boundary conditions . in the sequel , we often use the notation to denote the gaussian part of the energy of a function on the interval . it is common to think of as a gibbs measure with energy and noise strength .of course , does not make rigorous sense because there is no flat measure " on path space , and is almost surely infinite under .the heuristic formula is motivated by finite dimensional approximations and it gives the right intuition for the large deviation bounds . for more general boundary conditions , we can define as the image measure of under the shift map where is the affine function interpolating the boundary conditions : similarly to , for any choice of boundary condition and on any interval , we denote by the probability measure whose density with respect to can be expressed as here we have introduced the notation for the normalization constant that ensures that is indeed a probability measure . as we have indicated in the introduction , there are symmetry properties of the measures and that will play an important role in our argument .observe for example that both and are invariant under the _ vertical reflection _ and the _ horizontal reflection _ where furthermore , the measures and are invariant under the _ point reflection _ .we first present a two - sided version of the markov property for the measures and , which states that for any fixed points and for distributed according to to ( or ) , the conditional distribution of ) ] , is ( or ) .then in lemma [ p : markov ] , we give the _markov property , which states that the same statement holds true when the deterministic points are replaced by left and right stopping points .the proofs of these statements are quite standard . for completeness ,we have included them in subsection [ ss:62 ] . in the case of the measures , the markov property can be stated in the following way . for , we define the piecewise linearization of between and as recall the definition of .then the following holds .[ le : markov1 ] suppose are fixed , non - random points .then under the random functions and are independent .furthermore , is zero outside of and is distributed according to between the two points . due to the lack of spatial homogeneity , the corresponding property for the measures has to be stated in a different way . for ] and that ) ] and is distributed according to , resp . 
, on ] , we get the following identity : } \vee { \mathcal{f}}_{[\hat{x}_+,x_+ ] } \big ) \ , = \ , \ , \mathbb{e}_{(\hat{x}_{-},\hat{x}_{+})}^{\mu_{\varepsilon},{\bf u } } \big ( \phi \big ) .\ ] ] here } \vee { \mathcal{f}}_{[\hat{x}_+,x_+ ] } ] and } ] , we can write here denotes the distribution of the random vector under .formula follows directly by applying times . to state the strong markov property ,we additionally need the notion of left and right stopping points .these are defined analogously to stopping times for markov processes .a random variable taking values in ] the event is contained in } ] . in all of our applicationsthe stopping points are going to be left or rightmost hitting points of a closed set .it is easy to check that these random points are indeed left and right stopping points as defined above .for given left and right stopping points , we define the sigma - algebra } ] of events that happen to the right of by } \ , : = \ , & \big\ { { \mathcal{a}}\in { \mathcal{f}}_{[x_-,x_+ ] } \colon \forall x \quad { \mathcal{a}}\cap \ { \chi_- \leq x \ } \in { \mathcal{f}}_{[x_-,x ] } \big\},\\ { \mathcal{f}}_{[\chi_+ , x_+ ] } \ , : = \ , & \big\ { { \mathcal{a}}\in { \mathcal{f}}_{[x_-,x_+ ] } \colon \forall x \quad { \mathcal{a}}\cap \ { \chi_+ \geq x \ } \in { \mathcal{f}}_{[x , x_+ ] } \big\}.\end{aligned}\ ] ] the strong markov property can be stated in an analogous way to . [p : markov ] suppose and are left and right stopping points with almost surely .suppose that ) \to { \mathbb{r}} ] . in section [ s : layers ] , we will use this observation to reduce the problem of calculating the probability of transition layers to computing the probability of wasted excursions ( see definition [ def : wasted ] ) .large deviation estimates for the measures constitute an important ingredient for our argument .large deviation bounds for gaussian measures with a small variance , e.g. , for , are well - known ( see e.g. ( * ? ? ?they can be extended to the measures with an exponential tilting " argument ( see e.g. , or ) in a standard way . 
let represent the set of continuous paths on ] and any ] consisting of paths that satisfy the boundary conditions .additionally , assume that then for any there exists an such that for all we have where is defined in .this depends on and but not on the particular choice of .it only depends on the set through the choice of in condition .furthermore , depends on only through the local lipschitz norm in particular , the same bounds hold for the same if varies over a set of potentials with uniformly bounded local -norm .this uniformity of with respect to will be used in subsection [ ss : unilem ] .there , it will be applied to the family of rescaled versions of .we also get the corresponding lower bounds without a condition on the minimal energy of for .[ pr : ld2 ] fix constants and .suppose that ] .assume that there exists an energy minimizer satisfying ] is not necessary and it can be replaced by an approximation .actually , we will show the proposition under the slightly weaker assumption that for every there exists a profile with ] and such that the proofs of these propositions are essentially a careful copy of the classical proofs and can be found in subsection [ ss:63 ] .let us remark here that we do not expect the bounds and to hold uniformly for all open or closed sets .in fact , the argument for the classical statements makes use of qualitative properties such as existence of coverings by finitely many open sets .one sums over this finite number and uses the fact that , for small enough , only the largest summand matters . for different open or closed sets ,this finite number will in general be different , and the choice of would also be different .we can resolve this issue by taking the neighborhood of in the bounds and as a uniform version of the topological assumptions on .in this section we prove theorem [ t : layers ] .this theorem estimates the exponentially small probability of having more than one layer ( with the correct entropic effect and exponential factor ) .hence , the most likely functions are those with only one transition layer . as outlined in subsection [ ss : methods ] , at the heart of the methodis the idea of decomposing the invariant measure into conditional measures and the corresponding marginals , so that we can reduce to estimating the probability of transition layers on order - one subintervals .when the boundary data of the subinterval falls within a compact set ] satisfying the boundary conditions and exhibiting at least transition layers is contained in the union of the following three sets : the set of paths that exhibit an atypically large value at one of the : the complementary set intersected with the set of paths that are bounded away from on all of ] for all in the box .for we have recalled that the boundary conditions force there to be at least one transition . even though has layers, we can expect an additional cost only for the `` extra '' layers and hence only keep track of layers .because the set of interest is contained within the above - mentioned sets , it suffices to bound .we first give a bound on the probability of .this bound follows directly from lemma [ le : onept ] .in fact , we get in particular , we can choose large enough so that and hence , the probability of is of higher order with respect to the right - hand side of .we remark that it is here where ( and therefore also ) acquires a dependence on . 
.to bound the second probability in , we write \text { on all of } [ x_k , x_{k+1 } ] \notag\\ & \qquad \qquad \qquad \qquad \qquad \text { and } u(x_{k-1 } ) , u(x_{k+2 } ) \in [ -m , m ] \big).\label{jn20.1}\end{aligned}\ ] ] using the markov property , we can write for any \text { on all of } [ x_k , x_{k+1 } ] } \notag\\ & \qquad \qquad \qquad \qquad \qquad \text { and } u(x_{k-1 } ) , u(x_{k+2 } ) \in [ -m , m ] \big ) \notag\\ & = \int_{-m}^m \int_{-m}^m \ , \nu_{k-1 , k+2}(du_- , du_+ ) \notag\\ & \qquad\qquad\times \mu^{u_-,u_+}_{{\varepsilon},(x_{k-1},x_{k+2 } ) } \big ( u \in [ -1+\delta , 1 - \delta ] \text { on all of } [ x_k , x_{k+1 } ] \big),\label{jn20}\end{aligned}\ ] ] where denotes the marginal distribution of the pair .we now want to invoke the large deviation bound and the energy bound from lemma [ le : lazy ] for the measures . to this end , we observe that a ball around functions contained in ] . redefining by up to a factor of to account for the parameter and interval length ( here rather than ) , we have that , for any and , there exists an such that , for all and all ] and , moreover , one is fully contained in ] .the index set satisfies ( recall our convention for the use of the symbol introduced in notation [ n : notation ] . ) hence , to complete the proof of , it suffices to show that for fixed , we have as explained above , the main step is to reduce the problem of estimating the probability of layers to estimating the probability of wasted excursions .this will be achieved through suitable _reflections_. let us at first assume that the are well - separated in the sense that let us also assume that we are away from the boundary , i.e. , that we will consider the possibilities of ( a ) intervals that overlap or are nearby , ( b ) intervals that are the same ( ) , and ( c ) boundary intervals at the end of step 5 .we start by defining left stopping points in the following manner .for we set here we set if the corresponding set is empty .it is easy to see that these random points are all left stopping points . in a similar fashion , for we set herewe set if the corresponding set is empty .then is a right stopping point for all . for any in , all the left and right stopping points contained in the corresponding intervals and , furthermore , we have finally , note that as soon as , we have that . for any left stopping point and any right stopping point , we now define the reflection operator . if ( which is the case for any as remarked above ) , we set ,\\ u(x ) \qquad & \text{for } x \notin [ \chi_{l } , \chi_r ] .\end{cases}\ ] ] if we set .we clearly have ; hence , is injective and onto . in order to show that preserves , we observe that for any measurable and bounded test function ) \to { \mathbb{r}} ] , the probability of a wasted excursion is bounded by choosing sufficiently small with respect to and estimating the integral of by as usual , we have from the combination of , , and that holds ( up to a redefinition of ) .thus , finally , , , and imply which concludes the proof of the upper bound in the well - separated case .it remains to consider the three special cases : ( a ) intervals that overlap or are nearby , ( b ) intervals that are the same ( ) , ( c ) intervals that are boundary intervals .* case ( a ) * if two or more intervals overlap ( i.e. , if ) or are nearby ( i.e. 
, if ) , then we lump them together into a single , larger interval and proceed as in ( b ) , below .the size of the largest possible interval formed in this way is .our energy estimates require only that the interval length be sufficiently large and our large deviation estimates are uniform as long as the interval length falls within a compact set .( here we rely on the fact that is order - one with respect to . ) * case ( b ) * if a multi - index has repeated indices so that there is more than one transition layer in a single interval , then we will use large deviation estimates for the event of having _ more than one wasted excursion _ in a single interval .assume that we have for some and some .furthermore , assume that the set of indices is maximal in the sense that either or and similarly that either or .in this case , we define the stopping points in the following way .consider any index that satisfies . for , we define as in . on the other hand , for , we define as usual , we define if the set above is empty. now consider any index that satisfies .for , we define as in . on the other hand , for , we define again , we take the usual definition if the set above is empty . as above these random points are left stopping points for and right stopping points for .furthermore , we still have that holds for all . the measure preserving reflection operator be defined as above in , and maps each to a path that has wasted excursions in .( specifically , we mean wasted excursions on intervals for that are mutually disjoint except for possibly the endpoints . )we leave it to the reader to verify that a generalization of lemma [ l : cl ] is : [ l : gen ] there exists with the following property .fix . for any system sizes sufficiently large and boundary conditions , set )\colon u(-\ell_1-\ell_2)=u_{- } , u(\ell_1+\ell_2)=u_+\},\\ \mathcal{a}_0^{\rm bc}&:=\{u\in \mathcal{a}^{\rm bc}\colon u\,\text{has } m\text { disjoint wasted excursions in } ( -\ell_1,\ell_1)\}.\end{aligned}\ ] ] define the optimal cost then uniformly with respect to the boundary values , one has * case ( c ) * suppose for instance that there is a transition layer in . then we know the boundary value , while the boundary value at the other end of the subinterval is unknownthis is easily handled by a suitable `` one - sided '' generalization of lemma [ l : cl ] , which is easy to prove . using the facts from above , the proof of the upper boundis completed by decomposing into the various cases and recovering the correct ( and identical ) bounds in each case .* lower bound . * we turn now to the matching lower bound , i.e. , that as explained in subsection [ s : detac ] , for the lower bound we will work with transition layers ( cf . definition [ def : layerb ] ) .because of the boundary conditions and the definition of layers , it will be sufficient to show that , for some , we have indeed , in analogy with the upper bound , the probability of layers is bounded above by the probability of transition layers , and because of the boundary conditions there must be an odd number of transitions . 
.once again , we will use the gridpoints defined in .our first step is to get some control on the values of at the gridpoints .the following lemma , used below , is established via techniques similar to those used for the upper bound .[ l : negneg ] for any sufficiently large , there exists and such that , for and , we have for any satisfying that \big)\geq \frac{1}{3}.\label{bigenuf}\end{aligned}\ ] ] recall the definition of in .the proof is similar to the proof of the upper bound , and is deferred to subsection [ ss : negneg ] .the main idea is that while the boundary conditions force there to be a transition layer , with high probability , there is _ only one transition layer_. moreover , by symmetry , this layer is as likely to appear on ] ( hence neither probability can be more than ) . on the other hand , for to hit zero away from the transition layer is energetically unlikely , by arguments similar to those used for the upper bound . .with lemma [ l : negneg ] in hand , we turn to the basic set - up for the lower bound . in this case, we will not want to use overlapping subintervals .we will also not work with the full system , but only with intervals on the left - hand side .specifically , we will work with \quad\text{for}\;\;k\in\{-(n_{\varepsilon}-4),-(n_{\varepsilon}-8),\ldots , -4\}=:e.\ ] ] we have assumed without loss of generality that divides .( if not , then for some and .replace by throughout . )we remark that , as usual , for an event falling in the interval , we will condition on the boundary values on a larger interval .specifically , we will use a markov decomposition in which we condition on the boundary values of the enlarged interval .\ ] ] notice that for all , the enlarged intervals are nonintersecting . for future reference ,let us denote the set of boundary indices the rough idea is to consider sets of functions having layers with a layer in one of the intervals for distinct values of .unfortunately , because we work with functions that have _ at least _ transitions rather than _ exactly _ transitions , a given function may have more than layers and belong to more than one of the sets we have just described . hence we can not translate the probability of the union into the sum of the probabilities . in order to work around this , we will work with more restrictive sets .analogous to the set defined in above , we define the following set . 
rather than keeping track of all the boundary values , it will be convenient to track only the boundary values for the extended intervals described above .that is , we consider we now introduce a set that is analogous to the set above ( but more restrictive , for the reason we have explained ) .for ease of notation , we do not introduce a new label .let and consider the set clearly , we have the following inclusion of sets of paths : where is the following set of _ well - separated indices on the negative -axis _ : moreover , the sets for are disjoint , so that implies the set on the right - hand side of is certainly smaller than the set on the left - hand side , but the bound will be good enough on the level of scaling since latexmath:[\ ] ] then there holds where this lemma is virtually identical to lemma [ l : last ] .the principal difference is that here the excursion from is only of magnitude .this changes only the leading order cost ( from to ) .we omit the proof of the lemma .we will prove only , the proof of being essentially the same .we will always assume that the left endpoint of the interval is greater than or equal to ( since otherwise the boundary condition at trivially implies the result ) .notice that the set of paths that do not hit in is contained in the following two sets * the set of paths ( a ) in ( extra layers : recall ) or ( b ) without extra layers but more than away from at a gridpoint for some in .* the set of paths in that are within of at all gridpoints with but do not hit in .hence , because of the bounds already established in lemmas [ l : reflection ] and [ l : rightshape ] , we will be done as soon as we show we remark for reference below that we may assume so that implies .the interval can naturally be divided up into subintervals of length .we set \text { for } j \leq\bar{k}_{\varepsilon}-1.\ ] ] we want to use the markov property and then apply lemma [ l : hittingzero ] on these subintervals .therefore , as usual , we introduce some sets for a decomposition .we now write as the intersection and apply the markov property ( lemma [ le : markov1b ] ) times to deduce according to lemma [ l : hittingzero ] , we have uniformly over all paths that satisfy $ ] .we insert this bound into and then use the markov property once more to recover since the combination of and completes the proof of .we thank the max - planck institute for mathematics in the sciences in leipzig , where we had the pleasure of working jointly on this project .the second author would like to thank volker betz for explaining to him some background about schrdinger operators .he would also like to thank martin hairer and andrew stuart for many discussions about this project and related topics .hendrik weber was partially supported by erc grant _ amstat problems at the applied mathematics and statistics interface _ and by a philip leverhulme prize .maria g. westdickenberg was partially supported as an alfred p. sloan research fellow and by the national science foundation under grant no .dms0955051 .m. i. freidlin and a. d. wentzell ._ random perturbations of dynamical systems _ , vol .260 of _ grundlehren der mathematischen wissenschaften [ fundamental principles of mathematical sciences]_. springer - verlag , new york , second ed . , 1998 . translated from the 1979 russian original by joseph szcs .s. r. s. varadhan ._ large deviations and applications _ , vol .46 of _ cbms - nsf regional conference series in applied mathematics_. 
society for industrial and applied mathematics ( siam ) , philadelphia , pa , 1984 .j. zabczyk .symmetric solutions of semilinear stochastic equations . in _stochastic partial differential equations and applications , ii ( trento , 1988 ) _ , vol .1390 of _ lecture notes in math ._ , 237256 .springer , berlin , 1989 .
we study the invariant measure of the one - dimensional stochastic allen - cahn equation for a small noise strength and a large but finite system . we endow the system with inhomogeneous dirichlet boundary conditions that enforce at least one transition from to . ( our methods can be applied to other boundary conditions as well . ) we are interested in the competition between the `` energy '' that should be minimized due to the small noise strength and the `` entropy '' that is induced by the large system size . our methods handle system sizes that are exponential with respect to the inverse noise strength , up to the `` critical '' exponential size predicted by the heuristics . we capture the competition between energy and entropy through upper and lower bounds on the probability of extra transitions between . these bounds are sharp on the exponential scale and imply in particular that the probability of having one and only one transition from to is exponentially close to one . in addition , we show that the position of the transition layer is uniformly distributed over the system on scales larger than the logarithm of the inverse noise strength . our arguments rely on local large deviation bounds , the strong markov property , the symmetry of the potential , and measure - preserving reflections .
in many practical situations the data of concern is sparse under suitable representations .many problems turn to be computationally amenable under the sparsity assumption . as a notable example, it is now well understood that the minimization method provides an effective way for reconstructing sparse signals in a variety of settings .the goal of compressed sensing ( cs ) is to recover a sparse vector from the linear sampling and the sampling matrix ( or compressed sensing matrix ) .a general model can be of the form with being a vector of errors ( noise ) . in this case, one needs to approximate the sparse vector from the linear sampling , the compressed sensing matrix and some information of .however we shall only be interested in the noiseless case ( i.e. , ) in this paper to simplify discussion . in order to achieve less samples , one requires that the size of be much smaller than the dimension of , namely . a nave approach for solving this problem is to consider minimization , i.e., however this is computationally infeasible .it is then natural to consider the method of minimization which can be viewed as a convex relaxation of minimization .the minimization method in this context is this method has been successfully used as an effective way for reconstructing a sparse signal in many settings . here, a central problem is to construct the compressed sensing matrices so that the solution to is the same as that to .one of the most commonly used frameworks for the compressed sensing matrices is the restricted isometry property ( rip ) which was introduced by cands and tao .rip has been used in the randomized construction of cs matrices ( see for example ) .another well - known framework in compressed sensing is the mutual incoherence property ( mip ) of donoho and huo .several deterministic constructions of cs matrices are based on mip , e.g. , .the main focus of this paper is on the deterministic construction of cs matrices . in , devore presented a deterministic construction of cs matrices using the mutual incoherence : given a prime number and an integer , an ( with ) binary matrix ( i.e. , each of its entry is either or ) can be found in a deterministic manner such that whenever a signal with the sparsity ( i.e. , has at most nonzero components , we also call such a vector -sparse ) can be produced by solving ( ) , it suffices that . but for being able to recover -sparse signal , one needs to use ( [ devore_bound ] ) based on the discussion in . ] .recently , li , gao , ge , and zhang suggested a deterministic construction of of binary cs matrices via algebraic curves over finite fields .as stated in , their construction is more flexible and slightly improves devore s result when is large ( in fact the examples in indicate that one sees improvement only when {m}}) ] for some integer . 
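to make the minimization decoder concrete , the program can be prototyped as a linear program in a few lines . the sketch below uses a random gaussian matrix as a stand - in for the sensing matrix ( the sizes , the seed and the matrix itself are illustrative and are not one of the deterministic constructions discussed in this paper ) :

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 120, 5                                   # illustrative sizes
a = rng.standard_normal((m, n)) / np.sqrt(m)           # gaussian stand-in for a cs matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = a @ x_true                                         # noiseless samples

# basis pursuit: min ||x||_1 subject to a x = y, rewritten as an LP with x = u - v, u, v >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([a, -a]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-6))
print("max error:", np.max(np.abs(x_hat - x_true)))
```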
this class of cs matrices can be used to recover -sparse signal with it is noted that there is a significant gap between the bound ( [ determ_condition_2 ] ) and the ( asymptotically optimal ) bound ( [ devore_bound ] ) .the allowed range of sparsity for the case of a deterministic partial fourier matrices is far smaller than that of the binary case .constructing partial fourier matrices deterministically that work for a larger sparsity range of signals is certainly of particular interest .the aim of this paper is to construct a partial fourier matrix which is a cs matrix , through a deterministic procedure .based on a celebrated character sum estimation of katz , we are able to obtain a bound for the sparsity in the case of partial fourier matrices that is similar to ( [ devore_bound ] ) .more precisely , we have shown that if is a prime power and , setting or ( is required for this case ) , then there is a deterministic process to select rows from the fourier matrix and build a ( column normalized ) matrix , such that if then can be used to reconstruct -sparse signals via ) .it is noted that ( [ determ_condition_3 ] ) also greatly improves ( [ determ_condition_2 ] ) .the result in this paper can also be used to recover the sparse trigonometric polynomial with a single variable . in this paper, we also improve katz estimation for quadratic extension fields with an elementary and transparent approach . using this improvement, we are able to construct a cs matrix which is a partial fourier matrix , and whose columns are a union of orthonormal bases .this is a useful construction for sparse representation of signals in a union of orthonormal bases which has been a topic of some studies ( see , for example ) .moreover , this construction produces an approximately mutually unbiased bases which is of particular interest in quantum information theory .we also conduct some numerical experiments .the results show that the deterministic partial fourier matrices has a better performance over the random partial fourier matrices , provided that these two classes of matrices are of comparable sizes .this paper is organized as follows .the section below provides some necessary concepts and results to be used in our discussion .the main results are given in section [ sec : main_res ] .the discussion of computational issues and numerical results are contained in the last section .let be an matrix with normalized column vectors .assume that each is of unit ( euclidean ) length .the mutual incoherence constant ( mic ) is defined as even though in many situations a small is desired , the following well - known result of welch indicates that is bounded below in , donoho and huo gave a computationally verifiable condition on the parameter that ensures the sparse signal recovery : let be a concatenation of two orthonormal matrices .assume is -sparse . if then the solution for is exactly .this result of was extended to a general matrix by fuchs , and , gribonval and nielsen , both in the noiseless case . 
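as a quick illustration of how this condition is checked in practice , the following sketch computes the mutual incoherence of a column - normalized matrix , compares it with the welch lower bound , and reports the largest sparsity covered by the familiar threshold k < ( 1 + 1/mu ) / 2 ; the gaussian matrix is only a stand - in , and one of the deterministic matrices of this paper would be plugged in instead :

```python
import numpy as np

def mutual_incoherence(a):
    # normalize columns, then take the largest off-diagonal inner product in absolute value
    q = a / np.linalg.norm(a, axis=0, keepdims=True)
    g = np.abs(q.conj().T @ q)
    np.fill_diagonal(g, 0.0)
    return g.max()

m, n = 64, 256
rng = np.random.default_rng(1)
a = rng.standard_normal((m, n))              # stand-in matrix, not a deterministic construction
mu = mutual_incoherence(a)
welch = np.sqrt((n - m) / (m * (n - 1.0)))   # lower bound on mu for any m-by-n matrix
k_max = int(np.ceil((1.0 + 1.0 / mu) / 2.0) - 1)   # largest integer k with k < (1 + 1/mu)/2
print(f"mu = {mu:.3f}  (welch bound {welch:.3f}),  recovery guaranteed up to k = {k_max}")
```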
for the noisy case of the bounded error, proved that -minimization gives a stable approximation of the signal under some conditions that are stronger than ( [ mip ] ) .the open problem of whether condition ( [ mip ] ) , namely , is sufficient for stable approximation of in the noisy case was settled by cai , wang and xu in actually the authors of even proved that this condition is sharp , for both noisy and noiseless cases .another remark is that if one considers only the noiseless case , the proof in simplifies that of although the sparse recovery condition ( [ mip ] ) is rather strong , it is advantageous in checking whether a matrix meets the condition .such a checking procedure requires steps which is computationally feasible .the mic has been explicitly used in the design of compressed sensing matrices by several works , e.g. , .we say that satisfies the restricted isometry property ( rip ) of order and constant if holds for all -sparse vector ( see ) .in fact , ( [ eq : con ] ) is equivalent to requiring that the grammian matrices has all of its eigenvalues in ] .denote and choose .we have by theorem [ thm:3.1 ] , we can build a partial fourier matrix .to compare , we generate the random partial fourier matrix selecting rows from the fourier matrix uniformly randomly . we take and . then we have where /(x^3+x+1 ) , \alpha=\bar{x } , g=\bar{x}^2 + 2\bar{x} ] for all with . for every value , sets drawn uniformly random over all sets and the statistics are accumulated from 50,000 samples .figure 3 shows the maximum and minimum eigenvalues of for .* acknowledgement * : part of this work was done when the first author visited the inst ., academy of mathematics and systems science , chinese academy of sciences .he is grateful for the warm hospitality .r. baraniuk , m. davenport , r. devore , and m. wakin , a simple proof of the restricted isometry property for random matrices , _ constr ._ , 28(2008 ) , 253 - 263 . j. bourgain , s. dilworth , k. ford , s. konyagin and d. kutzarova , explicit constructions of rip matrices and related problems , _ duke math . j. _ , 159(2011 ) , 145 - 185 .t. cai and t. jiang , limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices . _the annals of statistics _ , 39(2011 ) , 1496 - 1525 . t. cai , l. wang , and g. xu , stable recovery of sparse signals and an oracle inequality , _ ieee trans .inf . theory _ , 56(2010 ) , 3516 - 3522 . t. cai , g. xu , and zhang , on recovery of sparse signals via minimization , _ ieee trans . inf .theory _ , 55(2009 ) , 3388 - 3397 . t. cai and a. zhang , sharp rip bound for sparse signal and low - rank matrix recovery , _ applied and computational harmonic analysis _ , to appear .m. grant , s. boyd and y. ye , cvx : matlab software for disciplined convex programming , version 1.0rc3 , http://www.stanford.edu//cvx , 2007 .e. j. cands and t. tao , decoding by linear programming , _ ieee trans .inf . theory _ , 51(2005 ) , 4203 - 4215 .e. j. cands and t. tao , near - optimal signal recovery from random projections : universal encoding strategies ? _ ieee trans .theory _ , 52(2006),5406 - 5425 .e. j. cands and t. tao , the dantzig selector : statistical estimation when is much larger than ( with discussion ) , _ ann ._ , 35(2007 ) , 2313 - 2351 . m. e. davies and r. gribonval , restricted isometry constants where sparse recovery can fail for , _ ieee trans .inf . theory _ , 55(2009 ) , 2203 - 2214 .r. 
devore , deterministic constructions of compressed sensing matrices , _ journal of complexity _ , 23(2007 ) , 918 - 925 .donoho , m. elad , and v.n .temlyakov , stable recovery of sparse overcomplete representations in the presence of noise , _ ieee trans .inf . theory _ , 52 ( 2006 ) , 6 - 18 .d. l. donoho , x. huo , uncertainty principles and ideal atomic decomposition , _ ieee trans .inf . theory _, 47(2001 ) , 2845 - 2862 .m. elad and a.m. bruckstein , a generalized uncertainty principle and sparse representation in pairs of bases ._ ieee trans .inf . theory _, 48 ( 2002 ) , 2558 - 2567 . j. fuchs , on sparse representations in arbitrary redundant bases , _ ieee trans .inf . theory _ , 50(2004 ) , 1341 - 1344 . s. ghobber and p. jaming , on uncertainty principles in the finite dimensional setting , _ linear algebra and its applications _r. gribonval and m. nielsen , sparse representations in unions of bases , _ ieee trans .inf . theory _ , 49(2003 ) , 3320 - 3325. j. haupt , l. applebaum , and r. nowak , on the restricted isometry of deterministically subsampled fourier matrices , _ proc .44th annual conf . on information sciences and systems _ , princeton , nj , march 2010 .g. m. katz , an estimate for character sums , __ , 2(1989),197 - 200 .a. klappenecker , m. rtteler , i. shparlinski and a. winterhof , on approximately symmetric informationally complete positive operator - valued measures and related systems of quantum states , _ j. math .physics _ , 46(2005 ) , 82 - 104 .i. e , shparlinski and a. winterhof , constructions of approximately mutually unbiased bases , _ lect notes comput sci _ , 3887 ( 2006 ) , 793 - 799 godsil c , roy a. equianglar line , mutually unbiased bases and spin model ._ euro j. combin ._ , 30(2009 ) , 246 - 262 s. li , f. gao , g. ge , and s. zhang , deterministic construction of compressed sensing matrices via algebraic curves , _ ieee trans .inf . theory _ , 58(2012 ) , 5035 - 5041 . w. -c .w. li , character sums , and abelian ramanujan graphs , _j. number theory _ , 41(1992 ) , 199 - 217 .a. odlyzko , discrete logarithms : the past and the future , _ designs , codes and cryptography _ , 19(2000 ) , 129 - 145 .
the class of fourier matrices is of special importance in compressed sensing ( cs ) . this paper concerns deterministic construction of compressed sensing matrices from fourier matrices . by using katz character sum estimation , we are able to design a deterministic procedure to select rows from a fourier matrix to form a good compressed sensing matrix for sparse recovery . the sparsity bound in our construction is similar to that of binary cs matrices constructed by devore which greatly improves previous results for cs matrices from fourier matrices . our approach also provides more flexibilities in terms of the dimension of cs matrices . as a consequence , our construction yields an approximately mutually unbiased bases from fourier matrices which is of particular interest to quantum information theory . this paper also contains a useful improvement to katz character sum estimation for quadratic extensions , with an elementary and transparent proof . some numerical examples are included . * keywords : * minimization , sparse recovery , mutual incoherence , compressed sensing matrices , deterministic construction , approximately mutually unbiased bases .
data analysis has become a useful technique to organize , process , and analyze large amounts of data in order to obtain useful knowledge effectively such as hidden patterns , implicit correlations , future trends , customer preferences , valuable business information etc .olap ( _ online analytical processing _ ) , as a key technology to provide rapid access to data ( mostly relational data ) for analysis via multidimensional structures , enables users ( e.g. , analysts , managers , executives etc . ) to gain useful knowledge from data in a fast , consistent , interactive accessing way .there are many popular enterprise database management systems for supporting olap .for example , oracle olap is oracle s current computing engine for online analytical processing .ibm company based on the db2 database proposes the ibm db2 olap server which can analyze the relational database quickly and directly .microsoft also provides sql server analytic services ( ssas ) supporting for olap to analyze information , tables , and files scattered across multiple databases .the characteristics of big data is not confined to only volume and velocity ; it is also referred by the variety , variability and complexity of the data . due tothe volume , variety and velocity at which the data grows , it is extremely difficult for organisations to process this data for timely and accurate decisions .for this challenge , big data analysis has become a tool to slove the problem .the primary goal of big data analysis is to help companies make more informed business decisions by enabling data scientists , predictive modelers and other analytics professionals to analyze large volumes of transaction data , as well as other forms of data that may be untapped by conventional business intelligence programs .recently , many techniques have been successfully developped for providing big data analysis in various applications .for example , oracle bigdata builds on hadoop through oracle direct connector connecting hadoop and oracle databases .sql server 2012 provides the extension service of olap and business intelligence on hadoop to support big data analysis .ibm smartcloud provides a hadoop - based analytical software infosphere biginsights which can connect with ibm db2 .however , those existing techniques of big data analysis are mostly based on olap which is not effective to process data in various models ( e.g. , semi - structure ) , they do not always bring highly accurate analysis due to the variety and variability of big data in a complicated application for example , the real - time data on the performance of traffic applications or of mobile applications .besides , how to process big data analysis efficiently is always an important problem when the scale of big data grows exponentially . in this demonstration, we propose a hybrid framework for big data analysis on apache spark ( a high - performance computing architecture ) which builds on hdfs of hadoop .the framework features a three - layer data process module and a business process module which controls the former . within this framework ,we can support multi - paradigm data process ( i.e. , a technical connectivity between various disparate process ) in order to improve the accuracy of analysis , where various big data analysis techniques ( incl .olap , machine learning , and graph analysis etc . )are interoperated to process the analysis of various applications of big data ( incl .data cube , intelligent prediction , and complex network etc . )respectively . 
moreover , our proposed framework built on spark can process large - scale data efficiently .finally , we implement hmdap and demonstrate the strength of hmdap by using traffic scenarios in a real world .in figure [ fig : architecture ] , we depict the architecture of our framework consisted of four parts : _ the storage management _ , _ the resource scheduling _ , _ the query analysis _ and _ the business process_. in the following sections , we will introduce each part in detail . in figure[ fig : storage ] , there are two parts , the physical storage and the logical storage .the rapid growth of data makes the physical storage of data from single source storage to distributed storage . in order to solve the storage of multi - source data , we adopt the existing distributed file system . in our framework, it is hdfs ( hadoop distributed file system ) .besides , it products many types of data due to the different needs of applications , such as tables , texts , rcfile(the file type of hive ) and sequence data . in order to use these different types of data , we compose the abstract relational views by designing the metadata with semantics to convert data types to the relational data we can handle .+ in our framework , the development is based on spark and the module of the resource scheduling is assigned to spark .the figure [ fig : scheduling ] depicts the resource scheduling in our framework .we use mysql to query over relational database .the part of mllib is spark machine learning library .we call the functions in the library to compute .graphx is the graph query module of spark .we user it to query graphs and it provides a possibility to transform the different data formats to graph to query .+ the module of the query and analysis is located on the top of the framework .it is not only the entrance to provide services , but also provides the standard syntax and semantic specification of multi - paradigm data analytical processing . at present , hiveql is similar to the standard sql , which is oriented to the classic olap task , and does not deal with the query language based on ml analysis and graph data analysis .on the basis of not changing the existing query language syntax standard , we develop a multi paradigm for large data fusion analysis query language expanded of machine learning(ml ) and graph analysis .our big data analysis and processing of the query language is based on the improvement of the fusion of sql and hiveql in multi - paradigm .first of all , we analyze the support of hiveql and sql respectively and count the amount of operations which can be supported by the traditional relational algebra model . on the basis of the relational algebra model ,we add other necessary operators to construct an extension of the algebraic language model , which can fully support the operation of hiveql and standard sql .for the operator with higher complexity , it is split into smaller sub operator or used other methods to optimize it . for the ml analysis ,we count the commonly used analytical processing methods , such as classification and clustering , and define the abstract interfaces for common ml analysis processing methods . for the graph analysis processing, we also count the commonly used analytical processing methods , such as the shortest path algorithm , and define the abstract interfaces for them . 
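a minimal pyspark sketch of the abstract relational views described above : files of different types stored in hdfs are loaded into dataframes , registered as temporary views , and then queried with a single sql statement in the spirit of the classic olap task . the paths , column names and the spark deployment are hypothetical , and formats such as rcfile or sequence files would additionally require hive support :

```python
from pyspark.sql import SparkSession

# hypothetical paths and schemas; assumes a running spark + hdfs deployment
spark = SparkSession.builder.appName("hmdap-views").getOrCreate()

spark.read.csv("hdfs:///data/stations.csv", header=True, inferSchema=True) \
     .createOrReplaceTempView("stations")
spark.read.json("hdfs:///data/traffic_events.json") \
     .createOrReplaceTempView("events")
spark.read.parquet("hdfs:///data/speed_records.parquet") \
     .createOrReplaceTempView("speeds")

# one relational query over the three views
result = spark.sql("""
    SELECT s.district, e.event_type, AVG(r.speed) AS avg_speed
    FROM speeds r
    JOIN stations s ON r.station_id = s.station_id
    JOIN events   e ON r.station_id = e.station_id
    GROUP BY s.district, e.event_type
""")
result.show()
spark.stop()
```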
in this module, the framework also relates to the implementation of the olap on the relational database and ml and graph data processing tasks on the distributed framework .the traditional relational database query optimization method is no longer applicable to this situation . according to the different characteristics of relational storage management query engine and distributed file system of computing engine , we summarize the query information and optimize the performance .firstly , we investigate the statistical index system used in traditional database and analyze the interaction between each index and the index in the system .then , for each index in the index system of statistical information , we design efficient and accurate sampling methods to calculate the cost model in query optimization . according to the above statistics , we can also design a storage and maintenance programs which is easy to update and manage . and we may use the cost model in the traditional relational database to design a new cost model which can reflect the query cost of the mixed data . figure[ fig : query ] displays the query analysis .the main architectural components of the query analysis are _ query _ and _ data analysis process tools(dap tools)_. in the first part , we can query by sql or the function user defined as specified format .the dap tools contain classical olap , dap on machine learning and dap on graph .+ our framework provides an analysis method for the large scale data analysis process .but in the face of complex business processes in different fields , we need the domain knowledge and according to the domain knowledge , we can design the multi - paradigm fusion of analysis task .we can draw lessons from the method of service composition in service oriented architecture design . in this module , we need to do two things : developing a multi paradigm fusion analysis process orchestration language syntax and the complex business process scheduling method . in the first part, we need to analyze the patterns and characteristics of service orchestration language in service oriented architecture design and design an abstract model of the executable process . on the basis of the abstract model , we summarize the basic activities of complex business process analysis .finally , we define the grammar of the business process . in the semantic, we need to research and analysis the meanings of basic business activities and define the start point , end point and the basic command . in the second part, we need to study and analyze complex business processes in practical applications .then , we build complex business process models and refine the way to exchange messages in public business processes . after that, we need to control the interaction of each part of the resources through the interaction sequence of messages , achieving a reasonable call for each resource service .we still need to investigate the applicability of existing object - oriented design patterns .for the analysis of complex business process integration model , we design data business processes .we refine the design patterns in complex business processes based on the advantages and principles of existing design patterns . in the real world , the business process model is complex and it takes a lot of time to analyze . 
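the abstract interfaces mentioned above could look , in a very stripped - down python sketch , as follows ; the class and method names are hypothetical and not taken from the hmdap code base , and the graph operator relies on the third - party graphframes package as a python - accessible stand - in for graphx :

```python
from abc import ABC, abstractmethod
from pyspark.ml.clustering import KMeans

# hypothetical operator interface for the three paradigms of data analytical processing
class DapOperator(ABC):
    @abstractmethod
    def run(self, data):
        """consume a spark dataframe (or graph) and return the analysis result."""

class OlapCube(DapOperator):                      # classical olap: multidimensional aggregation
    def __init__(self, dims, measure):
        self.dims, self.measure = dims, measure
    def run(self, data):
        return data.cube(*self.dims).sum(self.measure)

class KMeansOp(DapOperator):                      # dap on machine learning
    def __init__(self, k, features_col="features"):
        self.algo = KMeans(k=k, featuresCol=features_col)
    def run(self, data):
        return self.algo.fit(data).transform(data)

class ShortestPaths(DapOperator):                 # dap on graphs, via graphframes
    def __init__(self, landmarks):
        self.landmarks = landmarks
    def run(self, data):                          # `data` is expected to be a GraphFrame
        return data.shortestPaths(landmarks=self.landmarks)
```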
the figure [ fig : business ] illustrates the details of the business process in our framework .the user needs to write the configuration files before he or she submits the query .the format of configuration files are shown in section [ sec : demonstration ] .when the user submits a query to the framework , the query and analysis module in the framework starts to parse the user s query .this module parses queries according to predefined semantics , such as xml(extensible markup language ) .the module transforms the user s query to two parts , the query over relational databases and the query in machine learning .we default that the user s query including the query over relational databases and the module determines whether or not to carry out the query in machine learning .we think that when the result of the query over relational databases is null , the framework begins to query in machine learning .after the analysis module , the framework uses the query over relational databases and the information about the databases which is read from the configuration files to query the relational databases .then , the framework runs the query in machine learning .the input of machine learning is the result of querying by relational databases which the query statement is stored in the configuration files . andthe parameters of the machine learning algorithm is also stored in the configuration files .when the framework gets the information of the machine learning algorithm , it starts to train and calculate and the parameters of the training of the machine learning also comes from the configuration files .finally , the framework makes a join of the results of two parts .in this section , we present the interface of hmdap based in javascript , which communicate with the service in java .we show the screenshot of hmdap in figure [ fig : screenshot ] and the configuration file we mentioned above in figure [ fig : ml ] .the interface is composed as follows : configuration of machine learning : it is a text to input the path of the configuration file of the machine learning algorithm , such as parameters .configuration of relation database : it is a text to input the path of the configuration file of the relational databases , such as the user name .results : it is a text to display the results of the background . run : it is a button to start the program and when the program runs over , the results are shown in the _results_. save : it is a button to save the context from _ results _ in text file and at the same time , empty all the text box contents .cancel : cancel the running of this program and empty all the text box contents . andthe details of the configuration file is as follows : configuration : it is the beginning of the configuration file . input : it is the training dataset of the machine learning algorithm .database : it indicates that the input dataset comes from the relational database as following information url , user , password : they are the parameters to connect to the relational database , the location of the database , the user name and the password of the user .sql : it is the statement to query the relational database .parameter : the contents under this label are the parameters of the machine learning algorithms except the input parameter . 
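the flow just described can be sketched end to end as follows ; the jdbc url , table , credentials , feature columns and the join key are placeholders standing in for the values read from the configuration file , and the mysql jdbc driver is assumed to be available to spark :

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

# placeholder connection settings; in hmdap these come from the configuration file
url, table, user, password = "jdbc:mysql://localhost:3306/traffic", "records", "demo", "secret"

spark = SparkSession.builder.appName("hmdap-business-process").getOrCreate()
relational = (spark.read.format("jdbc")
              .option("url", url).option("dbtable", table)
              .option("user", user).option("password", password).load())

# the relational result feeds the machine-learning step
assembled = VectorAssembler(inputCols=["speed", "flow"], outputCol="features").transform(relational)
clustered = KMeans(k=3).fit(assembled).transform(assembled)

# finally the two partial results are joined (here on a hypothetical key column "id")
final = relational.join(clustered.select("id", "prediction"), on="id")
final.show(5)
spark.stop()
```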
value : a series of these labels are the values of the parameters .algorithm : it is the name of algorithm .for example , the value of _ algorithm _ is _ kmeans _ and our framework runs the algorithm named_ kmeans _ which is defined in our library .user can customize the algorithm and give the location of the algorithm in this label . before running the interface , the user should write two configuration files , the configuration of machine learning algorithms as figure [ fig : ml ] and the configuration of relational databases that the contexts are the parts of _ < database > _ in figure [ fig : ml ] . when the user writes two files , he or she should write the paths of the files in the texts on the interface. then , click the button _run_. if the user wants to save the results , he or she clicks the button of _save_. if the user do nt need the results , he or she clicks the button of _ cancel_.in this demonstration , we proposed hmdap , a hybrid framework for large - scale data analytical processing to support multi - paradigm process on spark .the multi - paradigm processing mechanism of hmdap can provide the interoperability of data analytical process techniques to process data which might be not effectively handled if we only apply single data analytical process technique . on the other hand , hmdap takes advantage of the high - performance of spark in processing large - scale data .we believe that hmdap provides a new approach to big data analysis in a multi - paradigm way .this work is supported by the programs of the key technology research and development program of tianjin ( 16yfzcgx00210 ) , the national key research and development program of china ( 2016yfb1000603 ) , the national natural science foundation of china ( nsfc ) ( 61672377 ) , and the open project of key laboratory of computer network and information integration , ministry of education ( k93 - 9 - 2016 - 05 ) .xiaowang zhang is supported by tianjin thousand young talents program .gray j. , chaudhuri s. , bosworth a. , layman a. , reichart d. , venkatrao m. , pellow f. , and pirahesh h. ( 1997 ) .data cube : a relational aggregation operator generalizing group - by , cross - tab , and sub - totals . , 1(1 ) : 29 - 53 .meng x. , k.bradley j. , yavuz b. , r.sparks e. , venkataraman s. , liu d. , freeman j. , b.tsai d. , amde m. , owen s. , xin d. , xin r. , m. franklin m. , zadeh z. , zaharia m. , and talwalkar a. ( 2016 ) .mllib : machine learning in apache spark . , 17:17 .sheth a. , kochut k. , a. miller j. , worah d. , das s. , lin c. , palaniswami d. , lynch j. , and shevchenko i. ( 1996 ) .supporting state - wide immunisation tracking using multi - paradigm workflow technology . in : _ proc .of vldb96 _ , pp . 263273 .
we propose hmdap , a hybrid framework for large - scale data analytical processing on spark , to support multi - paradigm process ( incl . olap , machine learning , and graph analysis etc . ) in distributed environments . the framework features a three - layer data process module and a business process module which controls the former . we will demonstrate the strength of hmdap by using traffic scenarios in a real world .
the behavior of elastic media is characterized by the stress - strain relationship , or constitutive law . for many materials such as rocks , soil , concrete and ceramics, it appears to be strongly nonlinear , in the sense that nonlinearity occurs even when the deformations are small .extensive acoustic experiments have been carried out on sandstones and on polycristalline zinc . in these experiments ,the sample is a rod of material , which is resonating longitudinally . for this kind of experiments , one - dimensional geometries are often considered .moreover , the small deformations hypothesis is commonly assumed .therefore , the stress is a function of the axial strain , for example a hyperbola , a hyperbolic tangent ( tanh ) , or a polynomial function . known as landau s law , the latter is widely used in the community of nondestructive testing . under these assumptions , elastodynamics write as a hyperbolic system of conservation laws . for general initial data ,no analytical solution is known when is nonlinear .analytical solutions can be obtained in the particular case of piecewise constant initial data having a single discontinuity , i.e. the riemann problem .computing the solution to the riemann problem is of major importance to get a theoretical insight on the wave phenomena , but also for validating numerical methods .when is a convex or a concave function of , one can apply the techniques presented in for the -system of barotropic gas dynamics . in this reference book , a condition which ensures the existence of the solutionis presented .this condition has been omitted in , in the case of the quadratic landau s law .we prove that this kind of condition is obtained also in the case of elastodynamics , and that it involves also a restriction on the initial velocity jump .furthermore , it is shown in how to predict the nature of the physically admissible solution in the case of the -system .we present here how it can be applied to elastodynamics .when has an inflexion point , it is neither convex nor concave .the physically admissible solution is much more complex than for the -system , but the mathematics of nonconvex riemann problems are well established .it has been applied to elastodynamics in , but with a negative young s modulus , which is not physically relevant . here, we state a condition which ensures the existence of the solution to the riemann problem .also , we show how to predict the nature of the physically admissible solution .finally , we provide a systematic procedure to solve the riemann problem analytically , whenever has an inflexion point or not . in the case of landau s law , an interactive application and a matlab toolbox can be found at http://gchiavassa.perso.centrale - marseille.fr / riemannelasto/.let us consider an homogeneous one - dimensional continuum . the lagrangian representation of the displacement field is used . under the assumption of small deformations ,the mass density is constant .therefore , it equals the density of the reference configuration .elastodynamics write as a system : if denotes the -component of the displacement field , then is the infinitesimal strain , and is the particle velocity .we assume that the stress is a smooth function of , which is strictly increasing over an open interval\varepsilon_\text{\it inf},\varepsilon_\text{\it sup}\right[ ] .if the characteristic field satisfies for all states in , then it is _linearly degenerate_. 
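for reference , the system , its characteristic speeds and the convexity conditions just mentioned have the following standard form ( a reconstruction in the usual notation , with \varepsilon the strain , v the particle velocity , \sigma the stress and \rho the mass density ; the paper s own normalizations may differ in inessential ways ) :
\[
  \partial_t \varepsilon - \partial_x v = 0 , \qquad
  \partial_t v - \frac{1}{\rho}\,\partial_x \sigma(\varepsilon) = 0 ,
\]
\[
  \lambda_{1,2}(\varepsilon) = \mp\, c(\varepsilon) , \qquad
  c(\varepsilon) = \sqrt{\frac{\sigma'(\varepsilon)}{\rho}} ,
\]
so that linear degeneracy amounts to \sigma'' \equiv 0 ( linear elasticity ) , while genuine nonlinearity amounts to \sigma''(\varepsilon) \neq 0 for every admissible strain .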
based on ( [ systhypvalp ] ) , linear degeneracy reduces to where is the young s modulus .therefore , ( [ elastolin ] ) corresponds to the classical case of linear elasticity .when linear degeneracy is not satisfied , the classical case is obtained when for all states in .the characteristic field is then _genuinely nonlinear_. here , this is equivalent to state for all in \varepsilon_\textit{inf } , \varepsilon_\text{\it sup}\right[ ] such that three constitutive laws have been chosen for illustrations .they cover all the cases related to convexity or to the hyperbolicity domain . among them ,the polynomial landau s law is widely used in the experimental literature , and the physical parameters given in table [ tab : params ] correspond to typical values in rocks .( a ) ) and ( b ) speed of sound ( [ chyp ] ) compared to the linear case ( [ elastolin]).[fig : hyp ] ] ( b ) ) and ( b ) speed of sound ( [ chyp ] ) compared to the linear case ( [ elastolin]).[fig : hyp ] ] [ [ model-1-hyperbola . ] ] model 1 ( hyperbola ) .+ + + + + + + + + + + + + + + + + + + + this constitutive law writes where . here , \varepsilon_\textit{inf},\varepsilon_\text{\it sup}\right [ = \left]-d,+\infty\right[ ] , the characteristic fields are genuinely nonlinear ( [ nlgen ] ) .( a ) ) and ( b ) speed of sound ( [ chyptan ] ) , zoom .[ fig : hyptan ] ] ( b ) ) and ( b ) speed of sound ( [ chyptan ] ) , zoom .[ fig : hyptan ] ] [ [ model-2-tanh . ] ] model 2 ( tanh ). + + + + + + + + + + + + + + + this constitutive law writes where .figure [ fig : hyptan ] displays the law ( [ blawhyptan ] ) and its sound speed strict hyperbolicity is ensured for all in . among the constitutive laws considered here , the tanh is the only model with an unbounded hyperbolicity domain .however , at .therefore , genuine nonlinearity is not satisfied ( [ nlngen ] ) . at ,the sound speed reaches its maximum ( [ clin ] ) .[ [ model-3-landau . ] ] model 3 ( landau ) .+ + + + + + + + + + + + + + + + + this constitutive law writes where is the young s modulus and are positive .figure [ fig : landau ] represents the constitutive law ( [ blawlandau ] ) and its sound speed in the particular case where the nonlinearity in ( [ blawlandau ] ) is quadratic ( ) , the hyperbolicity domain is \varepsilon_\textit{inf},\varepsilon_\text{\it sup}\right [ = \left]{-\infty},1/2\beta\right[ ] .then , the -riemann invariants are constant on -rarefaction waves . in practice, this property is used to rewrite ( [ eqrarefaction1 ] ) as finally , using the expressions of the eigenvalues ( [ systhypvalp ] ) and the riemann invariants ( [ riemanninvar ] ) , one obtains in ( [ eqrarefaction2])-([solrarefaction ] ) , can be replaced by , or by any other state on the rarefaction wave .now , we put in the - plane , and we construct the locus of right states which can be connected to through a -rarefaction .the states and must satisfy .thus , we obtain the rarefaction curves and denoted by for the sake of simplicity : a few properties of these curves are detailed in appendix [ subsec : wavecurves ] .[ [ model-1-hyperbola.-2 ] ] model 1 ( hyperbola ) .+ + + + + + + + + + + + + + + + + + + + to compute rarefaction waves , one needs the expressions of and in ( [ solrarefaction ] ) . 
for the hyperbola law ,a primitive of the sound speed ( [ chyp ] ) is and the inverse function of is [ [ model-2-tanh.-2 ] ] model 2 ( tanh ) .+ + + + + + + + + + + + + + + a primitive of the sound speed ( [ chyptan ] ) is since is not monotonous ( figure [ fig : hyptan]-(b ) ) , its inverse is not unique .the inverse over the range ] ( see ( [ clandaumax ] ) ) is made of two branches : the choice of the inverse in ( [ solrarefaction ] ) depends on . if , the inverse ( [ cinvlandau ] ) must be lower than ( first expression ) . else, it must be larger ( second expression ) . in this section, has an inflection point at ( [ nlngen ] ) .the characteristic fields are thus not genuinely nonlinear over \varepsilon_\textit{inf } , \varepsilon_\text{\it sup}\right[ ] ; * if , we construct the concave hull of over ] ; * if , we construct the convex hull of over ] .therefore , the characteristic fields are genuinely nonlinear and is strictly concave . in this case ,compound waves are not admissible .also , discontinuities and rarefactions have to satisfy the admissibility conditions ( [ laxelasto ] ) and ( [ pseudolax ] ) respectively .thus , forward and backward wave curves become since the characteristic fields are genuinely nonlinear , and are of class ( section i.6 in ) . from the properties of each elementary curve studied before , we deduce that is an increasing bijection over \varepsilon_\text{\it inf } , \varepsilon_\text{\it sup}\right[ ] .moreover , is strictly increasing while is strictly decreasing .therefore , they intersect once over \varepsilon_\textit{inf } , \varepsilon_\text{\it sup}\right[ ] .the limit of when tends towards is equal to .also , the limit of when tends towards is equal to .therefore , theorem [ thm : intersect ] is satisfied for every left and right states in \varepsilon_\textit{inf } , \varepsilon_\text{\it sup}\right[ ] . at the lower edge ,but at the upper edge , vanishes when tends towards .therefore , theorem [ thm : intersect ] is not satisfied for high values of the velocity jump . to illustrate, we take and the parameters issued from table [ tab : params ] . condition ( [ intersectcondnlgen ] ) then becomes m.s .a graphical interpretation is given on figure [ fig : admissibleconcave]-(b ) .let us assume that is strictly decreasing and equals zero at .therefore , the characteristic fields are neither linearly degenerate nor genuinely nonlinear .the stress function is strictly convex for and strictly concave for . forany and , let us denote similar notations are used for other kinds of inequalities , such as , etc . from the graphical method in section [ subsec : graphmeth ] based on convex hull constructions , forward and backward wave curves write when , the constitutive law becomes strictly concave .in this case , , and are always higher than . thus , can be replaced by in ( [ wnlngen ] ) ( idem for similar notations ) .moreover , , , and tend towards .therefore , we recover the wave curves ( [ wnlgen ] ) . forward and backward wave curves are lipschitz continuous and they are in the vicinity of the states or .their regularity may be reduced to after the first crossing with the line ( sections 9.3 to 9.5 of ) . from the properties of each elementary curve studied before , we deduce that is an increasing bijection over \varepsilon_\textit{inf } , \varepsilon_\text{\it sup}\right[ ] , region , * else , if , and , region . * else , if , and , region , * else , if , and , region .* else , if , and , region , * else , if and ] . 
the limit of when tends towards is equal to .therefore , the velocity jump is always bounded .this property is illustrated on figure [ fig : admissibletanh ] .if , the velocity jump must satisfy . and .[fig : admissibletanh ] ] [ [ model-3-landau.-5 ] ] model 3 ( landau ) .+ + + + + + + + + + + + + + + + + here , \varepsilon_\textit{inf } , \varepsilon_\text{\it sup}\right[ ] if .the computation of the solution is detailed in section [ sec : numex ] , for a configuration with two compound waves .with the parameters issued from table [ tab : params ] , we give two examples for the hyperbola constitutive law ( [ blawhyp ] ) and one for landau s law ( [ blawlandau ] ) . [ [ shock-2-shock- . ] ] 1-shock , 2-shock .+ + + + + + + + + + + + + + + + + + on figure [ fig : hyp1 ] , we display the solution with initial data , , m.s and m.s .the solution consists of two shocks : here .therefore , the shock speeds are km / s and km / s ( [ eqrh2 ] ) .[ [ rarefaction-2-rarefaction- . ] ] 1-rarefaction , 2-rarefaction .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + on figure [ fig : hyp4 ] , we represent the solution with initial data , , m.s and m.s .it consists of two rarefactions : where and satisfy ( [ solrarefaction ] ) with and respectively . here , .[ [ shock - rarefaction-2-rarefaction - shock- . ] ] 1-shock - rarefaction , 2-rarefaction - shock .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + on figure [ fig : lan1 ] , we display the solution with initial data , , m.s and m.s .it consists of two compound waves : here , .the rarefactions break at and ( [ eqwendrofflandau ] ) .( a ) ) with two shock waves .( b ) hugoniot loci .( c ) analytical solution at and ms.[fig : hyp1 ] ] ( b ) ) with two shock waves .( b ) hugoniot loci .( c ) analytical solution at and ms.[fig : hyp1 ] ] ( c ) ) with two shock waves .( b ) hugoniot loci .( c ) analytical solution at and ms.[fig : hyp1],title="fig : " ] ) with two shock waves .( b ) hugoniot loci .( c ) analytical solution at and ms.[fig : hyp1 ] ] ( a ) ) with two rarefactions .( b ) rarefaction curves .( c ) analytical solution at and ms.[fig : hyp4 ] ] ( b ) ) with two rarefactions .( b ) rarefaction curves .( c ) analytical solution at and ms.[fig : hyp4 ] ] ( c ) ) with two rarefactions .( b ) rarefaction curves .( c ) analytical solution at and ms.[fig : hyp4],title="fig : " ] ) with two rarefactions .( b ) rarefaction curves .( c ) analytical solution at and ms.[fig : hyp4 ] ] ( a ) ) with two compound waves .( b ) 1-shock - rarefaction and 2-rarefaction - shock curves .( c ) analytical solution at ms .the -axis is broken from to m.[fig : lan1 ] ] ( b ) ) with two compound waves .( b ) 1-shock - rarefaction and 2-rarefaction - shock curves .( c ) analytical solution at ms .the -axis is broken from to m.[fig : lan1 ] ] ( c ) ) with two compound waves .( b ) 1-shock - rarefaction and 2-rarefaction - shock curves .( c ) analytical solution at ms .the -axis is broken from to m.[fig : lan1],title="fig : " ] ) with two compound waves .( b ) 1-shock - rarefaction and 2-rarefaction - shock curves .( c ) analytical solution at ms .the -axis is broken from to m.[fig : lan1 ] ]when the constitutive law is convex or concave , the system of 1d elastodynamics is similar to the -system of barotropic gas dynamics . the - plane can be split into four admissibility regions : one for each combination of a 1-wave and a 2-wave . 
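to make the wave-curve construction concrete, the sketch below solves the riemann problem numerically for a strictly concave law, using the quadratic landau form with illustrative constants as a stand-in (the paper's exact expressions and table values are elided above, so the specific formulas here are assumptions). the middle state is the unique intersection of the forward curve through the left state and the backward curve through the right state, and the branch selection (shock for decreasing strain, rarefaction for increasing strain) is the standard lax-admissible choice for a concave stress function.

```python
import numpy as np
from scipy.optimize import brentq

E, rho, beta = 10.0e9, 2600.0, 100.0   # illustrative constants (assumed)

sigma = lambda e: E * e * (1.0 - beta * e)           # concave quadratic law
# primitive F(eps) of the sound speed, used in the riemann invariants v -+ F
F = lambda e: -np.sqrt(E / rho) * (1.0 - 2.0 * beta * e) ** 1.5 / (3.0 * beta)

def forward_curve(e, eL, vL):
    """1-wave curve: middle states reachable from the left state."""
    if e >= eL:   # 1-rarefaction (strain increases, concave law)
        return vL + F(e) - F(eL)
    # 1-shock, from the rankine-hugoniot relations
    return vL - np.sqrt((sigma(eL) - sigma(e)) * (eL - e) / rho)

def backward_curve(e, eR, vR):
    """2-wave curve: middle states that connect to the right state."""
    if e >= eR:   # 2-rarefaction
        return vR + F(eR) - F(e)
    # 2-shock
    return vR + np.sqrt((sigma(eR) - sigma(e)) * (eR - e) / rho)

def solve_riemann(eL, vL, eR, vR, e_max=0.99 / (2 * beta)):
    """intersect the two monotone wave curves and classify each wave."""
    g = lambda e: forward_curve(e, eL, vL) - backward_curve(e, eR, vR)
    e_star = brentq(g, -10.0 / beta, e_max)
    v_star = forward_curve(e_star, eL, vL)
    wave1 = "rarefaction" if e_star > eL else "shock"
    wave2 = "rarefaction" if e_star > eR else "shock"
    return e_star, v_star, wave1, wave2

if __name__ == "__main__":
    # inward velocities compress the bar: two shocks are expected
    print(solve_riemann(0.0, +1.0, 0.0, -1.0))
    # outward velocities: two rarefactions are expected
    print(solve_riemann(0.0, -1.0, 0.0, +1.0))
```

the bracketing interval in brentq implicitly encodes the existence condition on the velocity jump: if the two curves do not cross inside the hyperbolicity domain, no middle state exists.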
in this case, we obtain a new condition on the velocity jump which ensures the existence of the solution to the riemann problem , whether the hyperbolicity domain is bounded or not .also , we provide analytic expressions to compute the solution straightforwardly for the hyperbola and the quadratic landau s law . these results have been extended to constitutive laws which are neither convex nor concave .indeed , for constitutive laws with one inflection point , we obtain a new condition on the velocity jump which ensures the existence of the solution to the riemann problem .furthermore , we propose a partition of the - plane into nine admissibility regions .an application and a matlab toolbox are freely available at http://gchiavassa.perso.centrale - marseille.fr / riemannelasto/. the mathematics and the approach presented here could be applied to more complicated constitutive laws , e.g. with a disjoint union of inflexion points .we acknowledge stphane junca ( jad , nice ) for his bibliographical insights . k.e.a .van den abeele and p.a .johnson , elastic pulsed wave propagation in media with second- or higher - order nonlinearity .simulation of experimental measurements on berea sandstone , _j. acoust .* 99 * -6 ( 1996 ) 33463352 .since is the primitive of a strictly positive continuous function , is strictly increasing and continuous .therefore , is an increasing bijection and is a decreasing bijection ( [ eqrarefaction ] ) .let use differentiate equation ( [ eqrs ] ) .we obtain where applying the implicit functions theorem to in ( [ eqwendroff ] ) requires .since , the hypotheses of the theorem are satisfied if ( [ nlngen ] ) .finally , thus , therefore , is an increasing bijection and is a decreasing bijection .we go back to the condition that must be satisfied by the initial data when the constitutive law is concave , i.e. equation ( [ intersectcond ] ) in theorem [ thm : intersect ] . according to the expressions of and in ( [ wnlgen ] ), one has this can be expressed in terms of the velocity jump .based on ( [ eqrh ] ) and ( [ eqrarefaction ] ) , condition ( [ intersectcondcw1 ] ) becomes the same condition ( [ intersectcondnlngenthm ] ) must be satisfied by the initial data when the constitutive law is strictly convex for and strictly concave for ( theorem [ thm : intersectnlngen ] ) .the expressions of and are given by ( [ wnlngen ] ) .for instance , when tends towards in , one needs a comparison between and to choose the correct elementary wave curve . since , it is immediate that .similar comparisons can be written to select the correct elementary curve in or when tends towards .finally , ( [ intersectcondnlngenthm ] ) writes
under the hypothesis of small deformations, the equations of 1d elastodynamics write as a hyperbolic system of conservation laws. here, we study the riemann problem for convex and nonconvex constitutive laws. in the convex case, the solution can include shock waves or rarefaction waves. in the nonconvex case, compound waves must also be considered. in both the convex and nonconvex cases, a new existence criterion on the initial velocity jump is obtained, and admissibility regions are determined. lastly, analytical solutions are detailed for various constitutive laws (hyperbola, tanh and polynomial), and reference test cases are proposed.
zika virus ( zv ) , an emerging mosquito - borne flavivirus related to yellow fever , dengue , west nile and japanese encephalitis , has taken the americas by a storm .zv is transmitted primarily by _aedes aegypti _ mosquitoes , which also transmits dengue and chikungunya , are responsible for huge outbreaks in some regions in brazil , colombia , and el salvador .zv cases as of february 9 , 2016 , according to the cdc , have been reported throughout the caribbean , mexico as well as in most south american nations except for chile , uruguay , argentina , paraguay and peru .several states within the united states have reported zv cases and although it is expected that zv will be managed within the usa , the possibility of localized zv outbreaks is not out of the question . + phylogenetic analyses have revealed the existence of two main virus lineages ( african and asian ) . up to date, there are no concise clinical differences between infections with either one of these two lineages .this is in part due to the few human isolates belonging to the african lineage which were mainly obtained from sentinel rhesus in 1947 in uganda , where it was first discovered during primate and mosquito surveillance for yellow fever .its geographic habitat at that time was tied in to narrow equatorial belt running across africa and into asia .the african lineage circulated primarily in wild primates and arboreal mosquitoes such as _ aedes africanus _ ; spillover infections in humans rarely occurred even in areas were it was found to be highly enzootic .the more geographically expanded asian lineage seems to have originated from the adaptation of the virus to invade a different vector , _ aedes aegypti _ , which unfortunately seems to be perfectly adapted to infect humans .the first human infection was reported in nigeria in 1954 . in 2007 ,zika moved out of africa and asia causing an outbreak in yap island in the federated states of micronesia , this was followed by a large outbreak in french polynesia in 2013 - 2014 and then spreading to new caledonia , the cook islands and eastern islands .some evidence exists regarding zika spread following chikungunya epizootics and epidemics as african researchers observed this decades ago .this pattern was also present in 2013 when chikungunya pandemic spread from west to east followed by zika . in early 2015 , zv was detected in brazil .phylogenetic analyses of the virus isolated from patients placed the brazilian strains in the asian lineage , which has been previously detected during the french polynesian outbreak . since the first detection of zv in brazil , the following countries have reported ongoing substantial transmission of zv in south america : bolivia , brazil , colombia , ecuador , french guyana , guyana , paraguay , suriname and venezuela .several central america countries are also affected including costa rica , el salvador , guatemala , honduras , nicaragua and panama .the rapid expansion of zv has led the world health organization ( who ) to declare it a public health emergency of international concern .+ it has been estimated that about 80% of persons infected with zv are asymptomatic and it is known that those with clinical manifestations present dengue - like symptoms that include arthralgia , particularly swelling , mild fever , lymphadenopathy , skin rash , headaches , retro orbital pain and conjunctivitis which normally last for 2 - 7 days . 
due to the similarity in clinical characteristics between dengue , chikungunya and zv , the lack of widely distributed zv - specific tests , the high proportion of asymptomatic individuals , it may turn out that the number of patients infected with zv may actually be a lot higher than what it is being reported .moreover , co - infection with dengue and zv is not uncommon .in fact , it has been previously reported making zv diagnosis even more difficult .+ the challenges linked to the control of zv must include the fact that there is no vaccine available , a troublesome situation given the fact that zv has been linked recently to potential neurological ( microcephaly ) and auto - immune ( guillain - barr syndrome ) complications .further the evidence so far points to the likelihood that zv can also be sexually transmitted .education about zv modes of transmission and ways of preventing transmission are essential in order to halt mosquito growth and thus zv spread at the community , population , regional , national and global levels .control measures available are limited and include the use of insect repellents to protect us against mosquito bites and sex abstinence or protection while engage in sexual activity .some countries face immense challenges that if not addressed would make current efforts by officials to educate the public highly ineffective .and additional challenges that may limit if not stop the use of whatever control or education measures that a city , or nation or region may be able to put in place , are tied in to the effectiveness of public safety , violence and organized crime activity ( including gangs ) .the latin america and the caribbean population comprise only 9% of the global population and yet , it accounts for 33% of the world shomicides . in this study, we analyze the impact of that restrictions to public safety may have in the control of zv .the motivation comes from our interest in addressing the role of violence and insecurity within the context of the caribbean , particularly el salvador .as specified in the introduction , the objective of this manuscript is to look at the impact of mobility and security on the transmission dynamics of the zika virus ( zv ) , in regions where insecurity , violence and resource limitations make it difficult to implement effective intervention efforts . as a first step ,we proceed to build a two - patch model , both patches defined by their relationship to security and associated per capita resources . and so , the first patch is defined by low level security , which makes it difficult to have access and carry out systematic vector control efforts due , for example , to gang activity . the second patch a safe territory ,that is , a place where security is high , access to health services is expected , and relative high levels of education are the norm . + building detailed parametrized models that account for all the above factors would require tremendous amount of data and information and so , the model would require a large number of parameters , some that have never been measured . its use as a policy simulation tool would also require impressive amounts of information on , for example , individuals daily scheduled activities . 
here , it is assumed , that each patch is made up of individuals all experiencing the same degree of risk to infection .it is also assumed that all individuals are typical representatives of either the high risk ( patch1 ) or low risk ( patch 2 ) communities .the level of risk ( violence and infection ) is incorporated within a single parameter and so , by definition , we have in general that .this assumption captures in a rather simplistic way the essence of what we want to address , namely , the fact that we are looking at the dynamics of zv in a highly heterogeneous world , modeled , for the purpose of exploring its role , in as simple scenario as possible . herewe look at an extreme case where the situation , idealistically defined via two patches ; a high and a low risk patch .the dynamics of individuals in both patches , short time scales are incorporated .the analysis is over the duration of a single outbreak .+ in the rest of this section , we introduce the prototypic model that will be used to model zv dynamics within a patch . and so , we let denote the host patch population size interacting with a vector population of size .the transmission process is model through the interactions of the following epidemiological state variables . and so , we let , , , and denote the susceptible , latent , infectious asymptomatic , infectious symptomatic and recovered sub - populations , respectively while , and are used to denote the susceptible , latent and infectious mosquito sub - populations . since the focus is on a single outbreak , we neglect the hosts demography while assuming that the vector s demography does not change , this is done , by simply assuming that the birth and death per capita mosquito rates are the same .new reports point to the identification of an increasing number of asymptomatic infectious individuals from ongoing zv outbreaks .and so , we consider two classes of infectious and , asymptomatic and symptomatic individuals . 
furthermore , since not much is known about the dynamics of zv transmission , we assume that and individuals are equally infectious and that their periods of infectiousness are roughly the same .this is not a terrible assumption given our current knowledge of zv epidemiology and the fact that in general zv infections are not severe .furthermore , since the infectious process of zv is similar to dengue we proceed to use parameter estimates for dengue transmission in el salvador as well as our current estimates of the basic reproduction number for zv obtained from the data on barranquilla s colombia current zv outbreak .the selection of model parameters ranges benefited from those estimated from the 2013 - 2014 french polynesia outbreak .the dynamics of the prototypic single patch system , single epidemic outbreak , is modeled via the following nonlinear system of differential equations : cc parameters & description + & infectiousness of human to mosquitoes + & infectiousness of mosquitoes to humans + & biting rate in patch + & humans incubation rate + & fraction of latent that become asymptomatic and infectious + & recovery rate in patch + & proportion of time residents of patch spend in patch + & vectors natural mortality rate + & vectors incubation rate + the parameters of model [ 1patch ] are collected and described in table [ tab : param ] while the flow diagram of the model is provided in fig [ fig : flow ] .we now proceed to compute the formula for the _ basic reproduction number _ for this prototypic model , that is , the average number of secondary infections generated by a typical infectious individual in a population where nobody has experienced a zv - infection .that is , we take , that is , we focus on perturbations of the disease free equilibrium of model ( [ 1patch ] ) , that is , on . and so , using standard approaches , we find that the basic reproduction number is given by }{n_h\gamma_{h , a}\gamma_{h , s}\mu_v(\mu_v+\nu_v)}\\ & : = & \mathcal r_{0,a}^2+\mathcal r_{0,s}^2,\end{aligned}\ ] ] where the average number of secondary cases produced by infectious asymptomatic during their infectious period whereas is the average number of secondary cases produced by a symtptomatic infectious during their infectious period .we also define the level of risk as the product of the biting rate , vector - host ratio and the infectiousness of humans to mosquitoes , .the dynamics of the single patch model are well known . in short ,if , there is no epidemic out break , that is , the proportion of introduced infected individuals , decrease while if we have that the host population experiences an outbreak , that is , the number of cases exceeds the initial size of the introduced infected population at time .when , the population of infected individuals eventually decreases and the disease dies out ( single outbreak model ) .+ in the next section , a two patch model using a lagrangian approach found in is introduced . and so , individual in patch never loose their residency status .the mobility of patch , residents visiting other patches is modeled by the use of a _ residence times matrix_. 
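before the residence-times formalism is developed further below, the single-patch prototype just described can be made concrete with a short sketch. the snippet simulates one reading of the SEIaIsR host / SEI vector system (frequency-dependent transmission, equal infectiousness of asymptomatic and symptomatic hosts) and evaluates the basic reproduction number as the square root of the sum of the asymptomatic and symptomatic contributions, consistent with the decomposition stated above. all parameter values are placeholders, not the elided table values estimated for el salvador.

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values (assumed placeholders)
p = dict(a=0.5,          # mosquito biting rate (per day)
         beta_vh=0.4,    # infectiousness of humans to mosquitoes
         beta_hv=0.4,    # infectiousness of mosquitoes to humans
         nu_h=1/5.9,     # human incubation rate
         q=0.8,          # fraction of latent hosts becoming asymptomatic
         gamma_a=1/7.0,  # recovery rate, asymptomatic
         gamma_s=1/7.0,  # recovery rate, symptomatic
         mu_v=1/13.0,    # vector mortality (= birth) rate
         nu_v=1/9.5,     # vector incubation rate
         Nh=10000.0, Nv=20000.0)

def rhs(t, y, p):
    """single-patch, single-outbreak model (assumed functional forms)."""
    Sh, Eh, Ia, Is, Rh, Sv, Ev, Iv = y
    lam_h = p['a'] * p['beta_hv'] * Iv / p['Nh']          # infection of hosts
    lam_v = p['a'] * p['beta_vh'] * (Ia + Is) / p['Nh']   # infection of vectors
    return [-lam_h * Sh,
            lam_h * Sh - p['nu_h'] * Eh,
            p['q'] * p['nu_h'] * Eh - p['gamma_a'] * Ia,
            (1 - p['q']) * p['nu_h'] * Eh - p['gamma_s'] * Is,
            p['gamma_a'] * Ia + p['gamma_s'] * Is,
            p['mu_v'] * p['Nv'] - lam_v * Sv - p['mu_v'] * Sv,
            lam_v * Sv - (p['nu_v'] + p['mu_v']) * Ev,
            p['nu_v'] * Ev - p['mu_v'] * Iv]

def basic_reproduction_number(p):
    """R0 = sqrt(R0a^2 + R0s^2) for the host-vector-host cycle above."""
    common = (p['a']**2 * p['beta_hv'] * p['beta_vh'] * p['Nv'] * p['nu_v']
              / (p['Nh'] * p['mu_v'] * (p['mu_v'] + p['nu_v'])))
    R0a2 = common * p['q'] / p['gamma_a']
    R0s2 = common * (1 - p['q']) / p['gamma_s']
    return np.sqrt(R0a2 + R0s2)

if __name__ == "__main__":
    print("R0 =", round(basic_reproduction_number(p), 3))
    y0 = [p['Nh'] - 1, 0, 1, 0, 0, p['Nv'], 0, 0]
    sol = solve_ivp(rhs, (0, 365), y0, args=(p,), rtol=1e-8, max_step=1.0)
    print("final fraction of hosts infected ~", round(sol.y[4, -1] / p['Nh'], 3))
```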
each entry models the proportion of time that each resident spends in his own patch or as a visitor , every day .we make use of the matrix , with entries ; .parameters specific to el salvador are used and the most recent estimates of for zv we explore , via simulations , the consequences of mobility , modeled by the matrix , the impact that the differences in risk , captured on the assumption , have on the transmission of zv by asymptomatic and symptomatic infectious individuals .the role of mobility between two communities , within the same city , living under dramatically distinct health , economic , social and security settings is now explored using as simple model as possible .we need two patches , the inclusion of within- and across - patches vector - host transmission , and the ability of each individual to ` move ' without loosing , within the model , its place of residence .we consider two highly distinct patches .patch 1 with access to health facilities , regular security , resources and effective public health policies , while patch 2 lacks nearly everything . within thishighly simplified setting , the differences in risk , which depend on host vector ratios , biting rates , use of repellents , access to vector control crews and more , are captured by assuming dramatic differences in transmission ; all captured by the single parameter .it is therefore assumed that , where defines the risk in patch [ high risk patch 1 and low risk patch 2 ] . + the two - patch lagrangian model makes use of a host population stratified by epidemiological classes indexed by residence patch , that is , we let , , , and denote the susceptible , latent , infectious asymptomatic , infectious symptomatic and recovered sub - populations , respectively while , and denote the susceptible , latent and infectious mosquito sub - populations .again , denotes the host patch population size and the total vector population in each patch .it is further assumed , a reasonable assumption in the case of _ aedes aegypti _ , that the the vector does not travel between patches .the single - patch model - parameters are collected and described in table [ tab : param ] while the flow diagram of the single - patch dynamics model , when residents and visitors do not move ; that is , when the residence times matrix is such that , are captured in fig [ fig : flow ] + the residence matrix with entries , , where denotes the proportion of the day that a resident of patch spends in patch , defines host - mobility , lagrangian approach used here .the use of this approach , under the assumption that vectors do not move , has consequences .for example , it leads to the conclusion that at time , the population on each patch does not necessarily have to be equal to the number of residents in that patch , in other words , the _ effective population patch size _ must account for residents and visitors at _ each patch at time . the specifics of the model are now provided following the work of bichara et al . . following our recently developed version of residence patch models , leads to the following two patch model where and represents the residence time that an individual from patch spends in patch . 
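the bookkeeping induced by the residence-times matrix can be illustrated with a few lines of code. the sketch below follows one plausible reading of the lagrangian formulation described above: the effective population of a patch is the time-weighted sum of residents and visitors, mosquitoes in a patch sample the infectious hosts physically present there, and a resident's exposure is the time-weighted average of the prevalence seen in the patches visited. the numerical values are purely illustrative.

```python
import numpy as np

# residence-times matrix: P[i, j] = proportion of time a resident of patch i
# spends in patch j (rows sum to 1). patch 1 is the high-risk patch, and
# patch-2 residents spend little time there (illustrative values).
P = np.array([[0.60, 0.40],
              [0.10, 0.90]])

N = np.array([10000.0, 10000.0])   # resident population of each patch
I = np.array([150.0, 30.0])        # infectious hosts (Ia + Is) by residence patch

# effective population present in each patch: residents plus visitors
N_eff = P.T @ N                    # N_eff[j] = sum_i P[i, j] * N[i]

# infectious hosts physically present in each patch
I_eff = P.T @ I

# prevalence sampled by a mosquito biting in patch j
prevalence_in_patch = I_eff / N_eff

# exposure of a resident of patch i: time-weighted average over visited patches
exposure_by_residence = P @ prevalence_in_patch

print("effective patch populations:", N_eff)
print("prevalence seen by local mosquitoes:", np.round(prevalence_in_patch, 4))
print("exposure weight by residence patch:", np.round(exposure_by_residence, 4))
```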
the basic reproduction of this model is the largest eigenvalue of the following matrix , where particularly , we also have the relationship .if the two are isolated , that is , , a case that allow us to estimate the power of zv transmission in the absence of mobility , that is , when each community deals with zv independently and mobility is not allowed , the basic reproduction number is where , for , and so , we proceed to make use of the disparity between patches assumption , captured by the inequality , , where is directly proportional to the local basic reproduction number or , which defines the risk in patch , [ high risk patch 1 and low risk patch 2 ] under idealized , that is , no mobility conditions .changes in the entries of lead to changes in the global basic reproduction number , , which naturally impact for example , the long - term dynamics of zv in the two - patch system .it impacts the overall final epidemic size as well as the residents prevalence within each patch . a possible scenarios that allow us to explore the role that mobility , and local risk , have on global risk ( ) and on the prevalence of infections among the residents of each patch are explored via simulations .simulations are used to assess the impact of local mobility , on the dynamics of zv , between two close communities . the first heavily affected by violence , poverty and lack of resources and a second with access to public health vector control measures , health facilities , resources and the ability to minimize crime and violence .we assume the high risk community ( patch 1 ) is at a higher risk of acquiring zv ; this risk will be manifested through higher biting rates and a higher vector - host ratio leading well captured with a high reproductive number .the simulations of this idealized world considers two scenarios for patch 1 , the first defined by a high local basic reproductive number , and the second by taking , both within the ranges of previous zv outbreaks .it is assumed that in the absence of mobility , patch 2 would be unable to support a zv outbreak and since its mobility - free reproductive number is assumed to be .it is further assumed that mobility from patch 2 into patch 1 is unappealing due to , for example , high levels of violent activity in patch 1 . andso , individuals from patch 2 spend on average a a limited amount of their day in patch 1 ; thus for our baseline simulations we take .figure [ fig2 ] , shows the proliferation of the outbreak as a result of mobility when we assume the reproduction number from the current barranquilla outbreak ( ) . both the incidence and final size of patch 1 and patch 2 increase for all the mobility values tested . , and .mobility does not have a significant effect in the final size of patch 1 , while the final size of patch 2 increases significantly . and .,title="fig : " ] , and .mobility does not have a significant effect in the final size of patch 1 , while the final size of patch 2 increases significantly . and .,title="fig : " ] considering a a lower reproductive number , , figure [ fig1 ] suggest that under high mobility values , the behavior of the final size in patch 1 shifts , meaning that for particular mobility values ( around 0.6 or greater ) for which patch 1 benefits from mobility .nonetheless , this reduction on the final size of patch 1 is not significant enough to have a positive effect on the outcome of the global final size . 
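since the next-generation matrix itself is not reproduced in the text above, the sketch below builds one plausible reconstruction of it for the two-patch lagrangian model and sweeps the global basic reproduction number as a function of the time that high-risk residents spend in the safe patch. the 2x2 host-to-host matrix collapses the host-vector-host cycle patch by patch; its spectral radius is the square of the global reproduction number, and in the no-mobility limit it reduces to the single-patch expression used earlier. parameter values, biting rates and vector-host ratios are assumptions chosen so that patch 1 can sustain an outbreak on its own and patch 2 cannot.

```python
import numpy as np

# illustrative parameters; patch 1 is high-risk (higher biting rate and
# vector-host ratio), patch 2 cannot sustain an outbreak in isolation.
beta_hv, beta_vh = 0.4, 0.4
nu_v, mu_v = 1/9.5, 1/13.0
q, gamma_a, gamma_s = 0.8, 1/7.0, 1/7.0
D  = q / gamma_a + (1 - q) / gamma_s        # expected infectious person-days
N  = np.array([10000.0, 10000.0])           # host residents per patch
Nv = np.array([25000.0, 5000.0])            # mosquitoes per patch (do not move)
a  = np.array([0.7, 0.3])                   # biting rates per patch

def host_to_host_matrix(P):
    """2x2 matrix whose spectral radius is R0^2 for this reconstruction of the
       residence-times model (hosts of patch k infecting hosts of patch i via
       the mosquitoes of the patches both visit)."""
    N_eff = P.T @ N
    pref = D * beta_hv * beta_vh * nu_v / (mu_v * (mu_v + nu_v))
    M = np.zeros((2, 2))
    for i in range(2):
        for k in range(2):
            M[i, k] = pref * np.sum(a**2 * P[k, :] * P[i, :] * N[i] * Nv
                                    / N_eff**2)
    return M

def global_R0(p12, p21=0.10):
    """global R0 as a function of the time patch-1 residents spend in patch 2."""
    P = np.array([[1 - p12, p12], [p21, 1 - p21]])
    return np.sqrt(np.max(np.abs(np.linalg.eigvals(host_to_host_matrix(P)))))

if __name__ == "__main__":
    for p12 in np.linspace(0.0, 1.0, 11):
        print(f"p12 = {p12:.1f}   global R0 = {global_R0(p12):.3f}")
```

sweeping p12 in this way reproduces the qualitative behavior discussed in the text: the global reproduction number varies non-monotonically with mobility and reaches a minimum at an intermediate residence time.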
and and .there is no significant change in patch 1 , but a notable increment of the final size in patch 2 . and .,title="fig : " ] and and .there is no significant change in patch 1 , but a notable increment of the final size in patch 2 . and .,title="fig : " ] analysis of the final size as a function of residence times ( mobility values ) show , in figure [ finsro ] , the aforementioned behavior along with a change in behavior for the case where .furthermore , the global reproductive number is reduced for almost all mobility values compared to the zero mobility case .it is important to notice that the global reproductive number never drops below 1 , meaning that for these two particular cases ( ) , mobility is not enough to bring . .although mobility reduces the global , allowing mobility in the case of el salvador ( ) leads to a detrimental effect in the global final size.,title="fig : " ] .although mobility reduces the global , allowing mobility in the case of el salvador ( ) leads to a detrimental effect in the global final size.,title="fig : " ] figure [ ros1 ] , suggests that a small change in the local basic reproductive number of the high risk patch , is able to drive a considerable change in the behavior of the cumulative final size . meaning that the effective implementation of control measures along with specific mobility patters could have a beneficial impact on the outcome of the epidemic . . is fixed to 0.9 , meanwhile is varied . shows an interval of residence times that reduces the final size . ] finally , we wanted to study the effect produced by the population size from patch 2 in the cumulative final size and the global .figure [ ros ] summarize the mobility effects for both scenarios and , while and . in both scenarios ,the effect of mobility is adverse when , however when , it is possible to find a set of residence times ( around ) that reduces the cumulative final size in comparison with the zero movement case . under mobility ( ) andpopulation proportions effects . when , it is possible to find a set of residence times ( around ) that reduces the cumulative final size in comparison with the zero movement case , title="fig : " ] under mobility ( ) and population proportions effects .when , it is possible to find a set of residence times ( around ) that reduces the cumulative final size in comparison with the zero movement case , title="fig : " ] [ prop ] on the other hand , the global is being reduced considerably when , while for the case the global remains almost equal . for the two epidemiological scenarios and , tables [ tab : prop1 ] and [ tab : prop2 ] show a summary of the average proportion of infected population when low ( ) , intermediate ( ) and high mobility ( ) is allowed for . 
a fixed population size for patch 1 and different population sizes for patch 2are considered and the case .ccccc & low mobility & intermediate mobility & high mobility & min + 1000 & ( 0.9996 , 0.7110 ) & ( 0.9996 , 0.8124 ) & ( 0.9987 , 0.9757 ) & 2.8643 + 3000 & ( 0.9996 , 0.7398 ) & ( 0.9995 , 0.8263 ) & ( 0.9935 , 0.9722 ) & 2.6421 + 5000 & ( 0.9995 , 0.7468 ) & ( 0.9994 , 0.8308 ) & ( 0.9870 , 0.9688 ) & 2.4666 + 7000 & ( 0.9994 , 0.7478 ) & ( 0.9992 , 0.8310 ) & ( 0.9807 , 0.9648 ) & 2.3237 + 10000 & ( 0.9992 , 0.7451 ) & ( 0.9989 , 0.8276 ) & ( 0.9720 , 0.9579 ) &2.1519 + ccccc & low mobility & intermediate mobility & high mobility & min + 1000 & ( 0.8442 , 0.3526 ) & ( 0.8442 , 0.3736 ) & ( 0.8144 , 0.6902 ) & 1.4382 + 3000 & ( 0.8332 , 0.3886 ) & ( 0.8336 , 0.4169 ) & ( 0.7673 , 0.6702 ) & 1.3386 + 5000 & ( 0.8194 , 0.3924 ) & ( 0.8183 , 0.4299 ) & (0.7272 , 0.6427 ) & 1.262 + 7000 & ( 0.8043 , 0.3883 ) & ( 0.8005 , 0.4310 ) & ( 0.6899 , 0.6117 ) & 1.2016 + 10000 & ( 0.7799 , 0.3771 ) & ( 0.7704 , 0.4223 ) & ( 0.6367 , 0.5630 ) & 1.1323 + figure [ rodata ] shows the global over all mobility values for different population sizes of patch 2 , for the two epidemic scenarios .the minimum value is reached for all cases when mobility is at and this value is being reduced when .the global is dominated by the local , this is , the local of the high risk patch .dynamics through mobility when .patch 2 populations are varied from up to .the global hits its minimum always at of mobility and as this minimum value is decreasing.,title="fig : " ] dynamics through mobility when . patch 2 populations are varied from up to .the global hits its minimum always at of mobility and as this minimum value is decreasing.,title="fig : " ]this manuscript looks at the role of mobility in an idealized setting involving two adjacent highly distinct communities .the first community has the resources and means to control a zv outbreak ( ) while the second faces dramatic limitations , , very large , or strong limitations , large .we also explored the role of density by assuming that with k = 1, .. ,10 .each simulation looks at the role of the mobility matrix , the global , and local , i=1,2 . on the local and overall prevalence of zv .we verify the expected results , density matters , the global has a minimum value , which depends on the entries of the mobility matrix , and , very large is a lot worse than large .we also see that unless movement is dramatically reduced that there is no hope that zv can be contained .certainly , stopping mobility would lead to the elimination of zv in patch 2 . by stopping mobilitywould bring the economy to a halt , in this idealized two - world community . naturally , the set up is not representative of any real situation .the model could be easily modified to include the type of heterogeneity found in ` real ' communities .the dynamics of zv in places where violence is high due to gangs or other type of criminal activities , would be no doubt , make it nearly impossible to eliminate zv .this project has been partially supported by grants from the national science foundation ( dms-1263374 and due-1101782 ) , the national security agency ( h98230 - 14 - 1 - 0157 ) , the office of the president of asu , and the office of the provost of asu .the views expressed are sole responsibility of the authors and not the funding agencies .
in november 2015, el salvador reported its first case of zika virus (zv), leading to an explosive outbreak that in just two months produced over 6000 suspected cases. many communities, along with national agencies, began implementing control measures that ranged from vector control and the use of repellents to the recommendation of avoiding pregnancies for two years, the latter in response to the growing number of microcephaly cases in brazil. in our study, we explore the impact of short-term mobility between two idealized interconnected communities where disparities and violence contribute to the zv epidemic. using a lagrangian modeling approach in a two-patch setting, it is shown via simulations that short-term mobility may be beneficial in the control of a zv outbreak when risk is relatively low and patch disparities are not too extreme. however, when the reproductive number is too high, there seem to be no benefits. this paper is dedicated to the inauguration of the centro de modelamiento matemático carlos castillo-chávez at universidad francisco gavidia in san salvador, el salvador. *mathematics subject classification:* 92c60, 92d30, 93b07. *keywords:* vector-borne diseases, zika virus, residence times, multi-patch model.
some time ago , feinberg and one of us ( in a paper to be referred to as fz ) proposed the study of the equation where the real numbers are generated from some random distribution .two particularly simple models were studied : ( a ) the s are equal to with equal probability , and ( b ) with the angle uniformly distributed between and imposing the boundary condition on ( [ seq ] ) we can write the set of equations as the eigenvalue equation with the column eigenvector with components and the by non - hermitian random matrix while quantum mechanics is of course hermitian it is convenient to think of as a hamiltonian and ( [ seq ] ) as the non - hermitian schrdinger equation describing the propagation of a particle hopping on a 1-dimensional lattice .some applications of non - hermitian random hamiltonians include vortex line pinning in superconductors and growth models in population biology .a genuine localization transition can occur for random non - hermitian schrdinger hamiltonians in one dimension .as mentioned in fz , with the open chain boundary condition the more general equation can always be reduced to ( [ seq ] ) by an appropriate `` gauge '' transformation furthermore , applying the transformation to ( [ seq ] ) we see that if we change then the spectrum changes by thus , scaling the magnitude of the s merely stretches the spectrum , and flipping the sign of all the s corresponds to rotating the spectrum by it is also useful to formulate the problem in the transfer matrix formalism .write ( [ seq ] ) as where the transfer matrix is defined as the 2 by 2 matrix define then the boundary condition implies the solution of this polynomial equation in determines the spectrum . since is non - hermitian the eigenvalues invade the complex plane . for modelb , the spectrum has an obvious rotational symmetry and forms a disk ( see fig .[ fig1 ] , which displays the support of the density of states ) .an expansion of the density of eigenvalues around to very high orders in has been given by derrida et al .. this analytic expansion however can not predict singularities in the density of states .in contrast , for model a the spectrum enjoys only a rectangular symmetry .the first corresponds to obtained by complex conjugating the eigenvalue equation the second corresponds to obtained by the bipartite transformation remarkably , fz found that the spectrum has an enormously complicated fractal - like form . in fig[ fig2a ] we plot the support of the density of eigenvalues in the complex plane for a matrix , for a specific realization of the disorder . in fig .[ fig2b ] we plot the support of the density of states in the complex plane for a matrix , averaged over 100 realizations of the disorder . contrasting figs .[ fig2a ] and [ fig2b ] with fig . [ fig1 ] , one can see why it has been a challenge for mathematical physicists to understand the nature of the spectrum .in general , for random non - hermitian matrices , the density of eigenvalues can be obtained by where the green s function is defined by with the bracket denoting averaging and ( see , for example , ref . for a proof of these relations ) .equation ( [ rho ] ) follows from the identity where .expanding we see that counts the number of paths of a particle returning to the origin in steps .evidently , for model a each link has to be traversed 4 times . the spectrum of model a was studied by cicuta et al . and by gross and one of us by counting paths .in particular , cicuta et al. 
gave an explicit expression for the number of paths .recently , the more general problem of even - visiting random walks has been studied extensively ( see ref . ) . in addition, exact analytic results for the lyapunov exponent have been found by sire and krapivsky . in this paperwe propose a different approach based on the theory of words .we will focus on model a although some of our results apply to the general class of models described by ( [ schrody ] ) .one important issue is the dimensionality of the spectrum of model a. in general , the spectrum of non - hermitian random matrices when averaged over the randomness is 2-dimensional ( for example , the spectrum of model b ) .many of the authors who have looked at model a believe that its spectrum , as shown in figs .[ fig2a ] and [ fig2b ] , is quasi-0-dimensional : the spectrum seems to consist of many accumulation points .here we claim that the dimension of the spectrum actually lies between and dimensions , in the sense described below .consider the degree characteristic polynomial , where denotes the identity matrix .we easily obtain the recursion relation with and note that this second order recursion relation can be expressed in terms of transfer matrices as very similar to those defined for the wave function , with the transfer matrix given by following dyson and schmidt we consider the characteristic ratio which satisfies the recursion relation with initial condition .the hope is that , while the characteristic polynomials obviously changes dramatically as varies , the characteristic ratio might converge asymptotically . from the definition in eq .( [ y ] ) , it is clear that a point is in the spectrum of the matrix iff and .thus a point belongs to the spectrum if the corresponding set of variables is unbounded , that is , if the probability of escape of the variable to is finite .as is shown in , this condition is sufficient to determine the spectrum along the real axis ( ) , but is insufficient in the complex case .let be the probability distribution of ( note that is complex ) .then where the brackets denote the average over the disorder variables . in the thermodynamic limit , under fairly general conditions can be shown that the probability distribution has a limit , called the invariant distribution , which is determined by the self - consistent equation where we have used the fact that near a zero , , of is given by . for modela , we obtain the amusing equation this type of equation has been studied extensively for the real case in . it can be shown that it can be solved by the ansatz where the s depend on whereas the s do nt .note that the index is not necessarily an integer and can refer to a continuous set .in addition , it can be shown that the are the stable fixed points of the product of any sequence of transfer matrices ( see the next section ) . plugging ( [ ansatz ] ) into ( [ ds ] ) we have since the right hand side has twice as many delta function spikes as the left hand side , for the two sides to match we expect that , in general , the index would have to run over an infinite set .for a given complex number , we demand that the two sets of complex numbers and be the same .this very stringent condition should then determine the to see how this works , focus on a specific ( since the label has not been specified this can represent any ) .it is equal to either or for some but must in its turn be equal to either or for some . 
this process of identification must continue until we return to .indeed , if the process of return to occurs in a finite number of steps , then it will repeat indefinitely ( since the system is back at its starting point .it is this infinite repetition which gives a finite weight to the at .by contrast , if the number of steps needed to return to the initial point is infinite , then the weight associated with this point vanishes , and it will not be present in the spectrum . we thus conclude that the support of the distribution of is the closure of the set of all the stable fixed points of the product of any sequence of transfer matrices . we also conclude that the support of the density of states of the non - hermitian matrix , in the thermodynamic limit , is given by the zeroes of any stable fixed point : .what is important to notice is that the are independent of and depend only on the length of the word .thus , we conclude that the set of complex numbers is determined by the solution of an infinite number of fixed point equations .it is useful here to introduce the theory of words .a word of length is defined as the sequence where the letters . in other words, we have a binary alphabet .let us also define the repetition of a given word a specific number of times as a simple sentence .we can then string together simple sentences to form paragraphs .for a given word of length , consider a function to be constructed iteratively .for notational simplicity we will suppress the dependence of on and indicating only its dependence on the length of the word the iteration begins with and continues with we define the set of complex numbers are then determined as follows .consider the set of all possible words .for each word , determine the solution of the fixed point equation by considering small deviations from the solution , we see that the solution is a stable fixed point only if the set of all possible words generates the set of complex numbers in other words , is determined by a continued fraction equation , since we see that has the form with the polynomials and determined by the recursion relations and with the initial condition and we notice that and satisfy the same recursion relation as that satisfied by with the correspondence note also that ( [ alpha ] ) and ( [ beta ] ) can be packaged as the matrix equation where the transfer matrix defined in the previous section appears .this is closely related to the transfer matrix formalism discussed earlier .indeed , defining we have the initial condition .hence a given word of length can also be characterized by a matrix where for convenience we have written , and for a given word the fixed point value is determined by the quadratic equation which is the fixed point equation of the homographic mapping associated with the matrix .the geometric interpretation is clear : the matrix acts on vectors , and we ask for the set of such that the ratio of the first component to the second component is left invariant by the transformation . in other words ,we look for the projective space left invariant by the transformation : the fixed point value defines the direction of the invariant ray .hence is given by with polynomials of degree in , where denotes the length of the word .explicitly , and the stability condition ( [ stable ] ) determines which root of ( [ b ] ) is to be chosen .we will see shortly that determines the spectrum . 
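as a side illustration of the machinery just described, the snippet below computes the stable fixed point of the homographic map associated with the transfer matrix of a word. the one-step transfer matrix used here corresponds to the reading psi_{n+1} = z*psi_n - s_n*psi_{n-1} of the hopping equation, which is an assumption since the explicit 2x2 form is elided in the text above; the fixed-point and stability conditions themselves follow directly from the mobius-map picture. the determination of the spectrum from these fixed points is taken up next.

```python
import numpy as np

def transfer_matrix(z, s):
    """one-step transfer matrix of psi_{n+1} = z*psi_n - s*psi_{n-1}
       (assumed sign convention), with s in {+1, -1}."""
    return np.array([[z, -s], [1.0, 0.0]], dtype=complex)

def word_matrix(z, word):
    """ordered product of transfer matrices along a word of signs."""
    M = np.eye(2, dtype=complex)
    for s in word:
        M = transfer_matrix(z, s) @ M
    return M

def stable_fixed_point(z, word):
    """stable fixed point of the mobius map y -> (a*y + b)/(c*y + d) built
       from the word's transfer matrix: the invariant ray with |f'(y)| < 1."""
    (a, b), (c, d) = word_matrix(z, word)
    roots = np.roots([c, d - a, -b])          # c*y^2 + (d - a)*y - b = 0
    for y in roots:
        deriv = (a * d - b * c) / (c * y + d) ** 2
        if abs(deriv) < 1.0:
            return y
    return None   # neutral case: no strictly stable ray (z on a spectral curve)

if __name__ == "__main__":
    word = [+1, +1, -1, -1]                   # a 4-letter word
    z = 0.5 + 0.3j
    y = stable_fixed_point(z, word)
    print("stable fixed point y(z) =", y)
    (a, b), (c, d) = word_matrix(z, word)
    print("invariance check f(y) - y =", (a * y + b) / (c * y + d) - y)
```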
anticipating this , we see that if we form a compound word by stringing the word together twice ( for example , the japanese word `` nurunuru '' ) then we expect the contribution to the spectrum to be the same . but given the preceding discussion , this is obvious , since if a ray is left invariant by it is manifestly left invariant by we have determined , how do we extract the density of eigenvalues ?the eigenvalues of the matrix are given by . from ( [ y ] )we have , and thus using the identity ( [ cauchy ] ) we can differentiate the right hand side of ( [ bob ] ) to obtain the density of eigenvalues in the complex plane plugging in our solution we finally deduce that since the do not depend on , the spectrum is determined by the zeroes of the fixed point solutions we see from ( [ b ] ) that the density of eigenvalues is given as a sum over of terms like -\frac{r^{\prime } ( z)}{% r(z)}\right\}.\ ] ] thus the spectrum consists of isolated poles given by the zeroes of and , and of the cuts of , and is made of isolated points plus curved line segments connecting the zeroes of contrary to what some authors have believed , the spectrum is not , but , with .each word gives rise to a line segment , and words which differ slightly from each other gives rise to line segments near each other .indeed , given a word , it is possible to construct a word with a spectrum as close to that of as desired .for that purpose , we may construct as where is any `` corrupting '' word , and the two lengths and are sufficiently long .indeed , in terms of transfer matrices and invariant rays , we see that acting on any initial ray brings it close to the stable invariant ray of .then the direction of this ray is corrupted by , but it is brought back arbitrarily close to the invariant ray of by applying the transfer matrix , provided that is large enough . presumably ( although this remains to be proven rigorously ) , the spectrum associated with the corrupted word can be made as close as we want to that of .we have thus this property that for any word , there is a word generating a spectrum as close as we want to that of . in figs . [ fig3a ] and [ fig3b ] we plot the eigenstates of a word and the spectrum of the word .we see that the two spectra are very close . as is clear from this discussion , the spectrum is indeed `` incredibly complicated . ''we content ourselves by focusing on the cuts of since for a word of length is a polynomial of degree with roots , it gives rise to curved line segments .the curves are given by the condition ( the sign of depends on the choice for the square root branch cut . )given a word of length , the corresponding spectrum must be invariant under a cyclic permutation of the letters namely under as an example , for the word and , which has roots at , and .it is now clear what the words correspond to `` physically '' : a matrix with s given by an endless repetition of has a spectrum given by a straight line connecting and two algebraic curves connecting to and to , plus poles at notice that the two poles are buried under a cut . in fig .[ fig4 ] we show the spectrum associated with the word together with the spectrum of a random matrix .we now give a complete study of all words .the polynomial is easily found to be with , and the condition ( [ imq ] ) for the curves in the spectrum reduces to there are only three non - trivial words , namely and their contribution to the spectrum of together with the spectrum of a random matrix is shown in fig .[ fig5a ] . 
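the comparison shown in those figures is easy to reproduce numerically. the sketch below diagonalizes finite periodic matrices whose random sub-diagonal is replaced by the endless repetition of a short word, together with a fully random realization; overlaying the eigenvalues in the complex plane gives the qualitative picture of figs. 4 and 5. the precise placement of the 1's and the random signs on the two off-diagonals is my reading of the hopping matrix defined earlier, so it should be treated as an assumption.

```python
import numpy as np

def hopping_matrix(signs):
    """periodic non-hermitian hopping matrix: ones on one off-diagonal and
       the given signs on the other (assumed reading of the model-A matrix)."""
    n = len(signs)
    H = np.zeros((n, n))
    for k in range(n):
        H[k, (k + 1) % n] = 1.0            # forward hopping amplitude 1
        H[(k + 1) % n, k] = signs[k]       # backward hopping amplitude +-1
    return H

rng = np.random.default_rng(0)
n = 600

# spectrum of a fully random realization (signs i.i.d. +-1)
random_signs = rng.choice([-1, 1], size=n)
ev_random = np.linalg.eigvals(hopping_matrix(random_signs))

# spectra of matrices built by endlessly repeating a short word
for word in ([+1, -1], [+1, +1, -1], [+1, +1, -1, -1]):
    signs = np.tile(word, n // len(word))
    ev_word = np.linalg.eigvals(hopping_matrix(signs))
    print(f"word {word}: {len(ev_word)} eigenvalues, "
          f"max |Im| = {np.abs(ev_word.imag).max():.3f}")

print(f"random signs: max |Im| = {np.abs(ev_random.imag).max():.3f}, "
      f"max |Re| = {np.abs(ev_random.real).max():.3f}")
# plotting ev_word over ev_random in the complex plane shows each word
# contributing curved line segments inside the support of the random spectrum.
```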
in fig .[ fig5b ] we show the contribution of all one , two , three , and four letter words to the density of states .thus , an by matrix with s given by repeating the word has a spectrum determined by the stable fixed point value corresponding to furthermore , consider an by matrix with s given by first repeating the word ( of length times and then by repeating the word ( of length times .as we would expect , in the limit in which , and all tend to infinity , the spectrum of is given by superposing the spectra of , where is constructed with s given by first repeating the word times .this clearly generalizes . in figs .[ fig6a][fig6c ] we show the spectrum of the word , the spectrum of , and the spectrum of the word .we see the superposition principle at work . from this discussionit becomes clear why the spectrum of the matrix in fz is so complicated .the sequence is a book written in the binary alphabet that , in the mathematical limit , contains all possible words , sentences , and paragraphs .in fact , contains everything ever written or that will be written in the history of the universe , including this particular paper .this familiar mind boggling fact accounts for the complicated looking spectrum first observed in fz .it also explains why numerical studies of the spectrum suggest that it is . even for as large as the sequence contains an infinitesimally small subset of the set of all possible words , sentences , and paragraphs .in the ensemble of all books there are particularly simple books such that consists of a word of length repeated again and again . in this case , we can determine the spectrum explicitly by two different methods .let be the transfer matrix corresponding to in other words , where the matrix product is ordered .after repeating the word times , we have diagonalizing we see immediately that is a linear function of and we remind the reader that all quantities in ( [ drl ] ) are functions of the spectrum of is determined by the zeroes of as we note that in this limit the solution of does not depend on knowing the detailed form of and indeed , ( [ zero ] ) implies or since in the limit tends towards a ( ) complex number of modulus unity . thus , we conclude that namely , that the eigenvalues of lie on the unit circle .this constraint suffices to determine the eigenvalues of plugging ( [ unit ] ) , that is into the eigenvalue equation we obtain if and if , which we can combine into the single equation after a trivial phase shift . as ranges from to this traces out the spectrum in the complex plane .as an example , consider the word , in which case ( [ result ] ) reduces to this traces out the algebraic curve shown in fig . [ fig7 ] , which is to be compared to fig . 
[ fig6b ] .of course , since is now translation invariant with period , we can apply bloch s theorem to determine the spectrum of imposing we reduce the eigenvalue problem of to the eigenvalue problem of the by matrix one can verify that with a suitable relation between and the eigenvalue equation becomes identical to ( [ result ] ) .we have seen that the structure of this simple tridiagonal non - hermitian random matrix possesses an amazing richness .this complexity can be understood if one realizes that the spectrum of the random matrix is the sum of the spectra of all tridiagonal matrices with a periodic subdiagonal obtained by repeating an infinite number of times any finite word of length , weighted by a factor .the number of lines does not have the cardinal of the continuum .the number of lines is equal to the number of words of any length that can be made with a 2-letter alphabet ; this is a countable number .there are many open questions concerning the fine structure of the spectrum , such as whether the spectrum contains holes in the complex plane ( in its domain of definition ) .we also have not touched upon the question of the nature of the eigenstates .are they localized or delocalized ?numerical data seems to suggest a localization transition .we hope to address these and other questions in future work .one of us ( ho ) would like to thank m. bauer , d. bernard and j.m .luck for helpful discussions .this work was partially supported by the nsf under grant phy99 - 07949 to the itp .as remarked in the text , the coefficients of an even polynomial of degree must be constructed out of the cyclic invariants made of there is presumably a well - developed mathematical theory of cyclic invariants , but what we need we can easily deduce here .for any we have the two obvious cyclic invariants and the number of cyclic invariants grows rapidly with apparently different cyclic invariants can be constructed out of other cyclic invariants , for example , it is easy to work out for low values of as follows : with , where we have written in a form which shows that its roots can be found explicitly , with and with where and with the quantities are manifestly cyclic invariants .
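as a closing illustration of the bloch construction discussed in the main text above, the sketch below reduces the matrix built from an endlessly repeated word to a small bloch matrix carrying a phase on the wrap-around hoppings, and sweeps that phase to trace the exact spectral curves of the periodic problem. the off-diagonal structure is the same assumed reading of the hopping matrix used earlier, and the construction as written requires words of length at least three so that the two wrap-around entries stay distinct.

```python
import numpy as np

def bloch_matrix(word, kappa):
    """m x m bloch reduction of the periodic hopping matrix built from `word`
       (assumed sign convention); kappa is the bloch phase."""
    m = len(word)
    assert m >= 3, "construction assumes period >= 3"
    H = np.zeros((m, m), dtype=complex)
    for j in range(m):
        H[j, (j + 1) % m] = 1.0
        H[(j + 1) % m, j] = word[j]
    # attach the bloch phase to the two wrap-around hoppings
    H[m - 1, 0] *= np.exp(1j * kappa)
    H[0, m - 1] *= np.exp(-1j * kappa)
    return H

def periodic_word_spectrum(word, n_kappa=400):
    """infinite-chain spectrum of the endlessly repeated word: the union of
       bloch eigenvalues over the phase kappa in [0, 2*pi)."""
    eigs = []
    for kappa in np.linspace(0.0, 2.0 * np.pi, n_kappa, endpoint=False):
        eigs.extend(np.linalg.eigvals(bloch_matrix(word, kappa)))
    return np.array(eigs)

if __name__ == "__main__":
    word = [+1, +1, -1, -1]
    curve = periodic_word_spectrum(word)
    print(f"{len(curve)} bloch eigenvalues collected for word {word}")
    print("real part in [%.3f, %.3f], imaginary part in [%.3f, %.3f]"
          % (curve.real.min(), curve.real.max(),
             curve.imag.min(), curve.imag.max()))
```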
a non-hermitian random matrix model proposed a few years ago has a remarkably intricate spectrum. various attempts have been made to understand the spectrum, but even its dimension is not known. using the dyson-schmidt equation, we show that the spectrum consists of a non-denumerable set of lines in the complex plane. each line is the support of the spectrum of a periodic hamiltonian, obtained by the infinite repetition of any finite sequence of the disorder variables. our approach is based on the "theory of words." we make a complete study of all 4-letter words. the spectrum is complicated because our matrix contains everything that will ever be written in the history of the universe, including this particular paper.
in this paper , we consider network coding for multiple unicast sessions over directed acyclic graphs . in general , non - linear network coding should be considered in order to achieve the whole rate region of network coding . yet , there exist networks , for which routing is sufficient to achieve the whole rate region .we refer to these networks as _ routing - optimal _ networks .we attempt to answer the following questions : 1 ) what are the distinct topological features of these networks ?2 ) why do these features make a network routing - optimal ? the answers to these questions will not only explain which kind of networks can or can not benefit from network coding , but will also deepen our understanding on how network topologies affect the rate region of network coding .a major challenge is that there is currently no effective method to calculate the rate region of network coding .some researchers proposed to use information inequalities to approximate the rate region .however , except for very simple networks , it is very difficult to use this approach since there is potentially an exponential number of inequalities that need to be considered . provides a formula to calculate the rate region by finding all possible entropy functions , which are vectors of an exponential number of dimensions , thus very difficult to solve even for simple networks . in this paper, we employ a graph theoretical approach in conjunction with information inequalities to identify topological features of routing - optimal networks .our high - level idea is as follows .consider a network code .for each unicast session , we choose a cut - set between source and sink , and a set of paths from source to sink such that each path in passes through an edge in .since the information transmitted from the source is totally contained in the information transmitted along the edges in , we can think of distributing the source information along the edges in ( details will be explained later ) .moreover , we consider a routing scheme in which the traffic transmitted along each path is exactly the source information distributed over the edge in that is traversed by .such a routing scheme achieves the same rate vector as the network code .however , since the edges might be shared among multiple unicast sessions , such a routing scheme might not satisfy the edge capacity constraints .this suggests that the cut - sets and path - sets we choose for the unicast sessions should have special features. these are essentially the features we are looking for to describe routing - optimal networks .we make the following contributions : * we identify a class of networks , called _ information - distributive _ networks , which are defined by three topological features .the first two features capture how the edges in the cut - sets are connected to the sources and the sinks , and the third feature captures how the paths in the path - sets overlap with each other . 
due to these features , given a network code , there is always a routing scheme such that it achieves the same rate vector as the network code , and the traffic transmitted through the network is exactly the source information distributed over the cut - sets between the sources and the sinks .* we prove that if a network is information - distributive , it is routing - optimal .we also show that the converse is not true .this indicates that the three features might be too restrictive in describing routing - optimal networks .* we present examples of information - distributive networks taken from the index coding problem and single unicast with hard deadline constraint .we expect that our work will provide helpful insights towards characterizing all possible routing - optimal networks .the network is represented by an acyclic directed multi - graph , where and are the set of nodes and the set of edges in the network respectively .edges are denoted by , or simply by , where and .each edge represents an error - free and delay - free channel with capacity rate of one .let and denote the set of incoming edges and the set of outgoing edges at node .there are unicast sessions in the network .the unicast session is denoted by a tuple , where and are the source and the sink of respectively .the message sent from to is assumed to be a uniformly distributed random variable with finite alphabet , where is the source information rate at .all s are mutually independent .given , denote .we assume for all .let denote the minimum capacity of all cut - sets between two nodes and .given two nodes , let denote the set of directed paths from to .the _ routing domain _ of , denoted by , is the sub - graph induced by the edges of the paths in .a _ routing scheme _ is a transmission scheme where each node only replicates and forwards the received messages onto its outgoing edges .define the following linear constraints : & _ p _s_id_i f_i(p ) r_i 1i k [ eqroutingcond1 ] + & ^k_i=1 _ p _ s_id_i , ep f_i(p ) 1 e e [ eqroutingcond2 ] where represents the amount of traffic routed through path for .a rate vector is achievable by routing scheme if there exist s such that ( [ eqroutingcond1 ] ) and ( [ eqroutingcond2 ] ) are satisfied .the rate region of routing scheme , denoted by , is the set of all rate vectors achievable by routing scheme .a network coding scheme is defined as follows : [ defnc ] an _ network code _ with block length is defined by : 1 . for each and , a local encoding function : ; 2 . for each and , a local encoding function : ; 3 . for each , a decoding function : ; 4 . for each ,the decoding error for is , where is the value of as a function of .given , let , where is the value of as a function of , denote the random variable transmitted along in a network code. for a subset , denote .[ defachievable ] a rate vector is _ achievable _ by network coding if for any , there exists for sufficiently large , an network code such that the following conditions are satisfied : & _ e 1 + ee [ eqncachieve1 ] + & r_i r_i - 1i k [ eqncachieve2 ] + & _ i 1i k [ eqncachieve3 ] the _ capacity region _ achieved by network coding , denoted by , is the set of all rate vectors achievable by network coding . 
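as a concrete ( hypothetical ) illustration of the routing rate region defined by ( [ eqroutingcond1 ] )-( [ eqroutingcond2 ] ) , the sketch below checks achievability of a candidate rate vector as a linear feasibility problem . the two - session network , its paths and the rate vector are invented for illustration only and are not taken from the paper ; scipy's linprog is used purely as an lp feasibility oracle .

```python
# Feasibility check for the routing rate region (the two linear constraints in the text):
# find f_i(p) >= 0 with  sum_p f_i(p) >= r_i  and  sum_{paths through e} f(p) <= 1.
# Hypothetical toy instance: 2 unicast sessions, each path given as a tuple of edge labels.
import numpy as np
from scipy.optimize import linprog

paths = {
    1: [("s1a", "ad1"), ("s1b", "bd1")],   # two edge-disjoint paths for session 1
    2: [("s2c", "cd2")],                   # one path for session 2 (no shared edges here)
}
rate = {1: 1.5, 2: 0.8}                    # candidate rate vector (r_1, r_2)

var_index = [(i, p) for i in paths for p in paths[i]]          # decision variables f_i(p)
edges = sorted({e for i in paths for p in paths[i] for e in p})

A_ub, b_ub = [], []
# per-session demand:  -sum_p f_i(p) <= -r_i
for i in paths:
    A_ub.append([-1.0 if vi == i else 0.0 for (vi, _) in var_index])
    b_ub.append(-rate[i])
# per-edge capacity (unit capacity, as in the paper's model)
for e in edges:
    A_ub.append([1.0 if e in p else 0.0 for (_, p) in var_index])
    b_ub.append(1.0)

res = linprog(c=np.zeros(len(var_index)), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(var_index))
print("rate vector achievable by routing:", res.status == 0)
```

making the two sessions share an edge ( e.g. , giving session 2 the edge "bd1" ) turns the same check infeasible for this rate vector , which is exactly the edge - sharing obstacle discussed above .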
given a network code that satisfies ( [ eqncachieve1])-([eqncachieve3 ] ) , the following inequalities must hold : & h(u_e ) ( _ e ) 1 + ee [ eqncachievei1 ] + & h(y_i ) = ( 2^nr_i ) r_i r_i - 1i k [ eqncachievei2 ] + & i(y_i;u_(d_i ) ) ( 1-)(r_i- ) - 1i k [ eqncachievei3 ] where ( [ eqncachievei3 ] ) is due to fano s inequality : & i(y_i;u_(d_i ) ) ( h(y_i ) - _ i |_i| - 1 ) + = & ( 1 - _ i ) h(y_i ) - ( 1- ) ( r_i - ) - since routing scheme is a special case of network coding , . a network is said to be _ routing - optimal _, if , _i.e. , _ for such network , routing is sufficient to achieve the whole rate region of network coding .in this section , we present a class of routing - optimal networks , called _ information - distributive _ networks .we first use examples to illustrate the topological features of these networks , and show why they make the networks routing - optimal .then , we define these networks more rigorously .[ ex1 ] we start with the simplest case of single unicast . it is well known that for this case , a network is always routing - optimal . in this example , we re - investigate this case from a new perspective in order to highlight some of the important features that make it routing optimal .let , and is a cut - set between and .assume .therefore , for ( ) , there exists a network code such that ( [ eqncachieve1])-([eqncachieve3 ] ) are satisfied . in the followings ,all the random variables are defined in this network code .one important feature of this network is that each path from to must pass through at least an edge in .thus , is a function of .the following inequality holds : [ eqex1determine ] i(y_1;u_(d_1 ) ) i(y_1;u_c ) the following equation holds : [ eqex1distr ] i(y_1;u_c ) = ^m_j=1 i(y_1;u_e_j | u_\{e_1,,e_j-1 } ) intuitively , we can interpret ( [ eqex1distr ] ) as follows : is the amount of information about that can be obtained from , the amount of information about that can be obtained from , excluding those already obtained from , and so on .hence , ( [ eqex1distr ] ) can be seen as a `` distribution '' of the source information over the edges in . moreover , for each , we have : [ eqex1cap ] i(y_1 ; u_e_j | u_\{e_1,,e_j-1 } ) h(u_e_j ) another important feature is that due to menger s theorem , there exist edge - disjoint paths , , from to such that for . due to this feature, we can construct a routing scheme by simply letting each transmit the information distributed on : [ eqex1routing ] f^n , k(p ) = i(y_1;u_e_j|u_\{e_1,,e_j-1 } ) & p = p_j , 1j m + 0 & .clearly , due to ( [ eqncachievei1 ] ) and ( [ eqex1cap ] ) , the above routing scheme satisfies the following inequalities : [ eqex1routingcap ] & f^n , k(p_j ) h(u_e_j ) 1 + moreover , due to ( [ eqncachievei3])-([eqex1distr ] ) , we have : [ eqex1routingrate ] & _p_s_1d_1 f^n , k(p ) = ^m_j = 1 f^n , k(p_j ) = i(y_1;u_c ) + & i(y_1;u_(d_1 ) ) ( 1- ) ( r_i - ) - since have an upper bound ( see ( [ eqex1routingcap ] ) ) , there exists a sub - sequence such that each sequence approaches a finite limit .define the following routing scheme : f_1(p ) = _ l f^n_l , k_l(p ) & p = p_j ( 1j m ) ; + 0 & .due to ( [ eqex1routingcap ] ) and ( [ eqex1routingrate ] ) , the above routing scheme satisfies ( [ eqroutingcond1 ] ) and ( [ eqroutingcond2 ] ) .hence , , which implies .therefore , the network is routing - optimal . 
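as an aside to example [ ex1 ] , the two ingredients of the argument -- a minimum cut - set and the edge - disjoint paths guaranteed by menger's theorem -- can be extracted mechanically . the sketch below runs on a small hypothetical graph ( not one of the paper's figures ) and assumes networkx >= 2.x for edge_disjoint_paths .

```python
# Menger's theorem check on a hypothetical single-unicast network:
# the min-cut size equals the number of edge-disjoint s-d paths, which is what lets
# the routing scheme of Example 1 carry the information distributed on the cut-set C.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("s", "a"), ("s", "b"), ("a", "c"), ("b", "c"),
                  ("a", "d1"), ("c", "d1"), ("b", "d1")])

min_cut = nx.minimum_edge_cut(G, "s", "d1")              # a smallest cut-set C
disjoint = list(nx.edge_disjoint_paths(G, "s", "d1"))    # edge-disjoint paths p_1,...,p_m

print("cut-set C:", min_cut)
print("|C| =", len(min_cut), " edge-disjoint paths:", len(disjoint))
assert len(min_cut) == len(disjoint)                     # Menger's theorem
```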
as shown above , two features are essential in making a network with single - unicast routing - optimal .the first feature is the existence of a cut - set such that each path from the source to the sink must pass through an edge in the cut - set . due to this feature ,the source information contained in can be completely obtained from the messages transmitted through the cut - set ( see ( [ eqex1determine ] ) ) .the second feature is the existence of edge - disjoint paths , each of which passes through exactly one edge in .due to this feature , a routing scheme can be constructed such that the traffic transmitted along the paths is exactly the information distributed on the edges in ( see ( [ eqex1routing ] ) ) .these two features together guarantee that the routing scheme achieves the same rate as network coding ( see ( [ eqex1routingcap ] ) , ( [ eqex1routingrate ] ) ) .however , extending these features to multiple unicast sessions is not straightforward .one difference from single unicast is that may not be a function of , where is a cut - set between and , and thus ( [ eqex1determine ] ) might not hold .another difference is that the information from multiple unicast sessions might be distributed on an edge , and thus ( [ eqex1cap ] ) might not hold .moreover , the paths for multiple unicast sesssions might overlap with each other , and thus ( [ eqex1routingcap ] ) might not hold .these differences suggest that the cut - sets and the paths , over which a routing scheme is to be constructed , should have additional features in order for the resulting routing scheme to achieve the same rate vector as network coding .we use an example to illustrate some of these features .[ ex2 ] consider the network shown in fig .[ figex0twounicast]a .consider an arbitrary rate vector .therefore , for ( ) , there exists a network code that satisfies ( [ eqncachieve1])-([eqncachieve3 ] ) . in the sequel ,all the random variables are defined in this network code . for , we choose a cut - set between and , and a set of paths that pass through respectively ; for , we choose a cut - set between and , and a set of paths that pass through respectively .we first investigate .one important feature is that each path from to passes through at least an edge in .thus , is also a cut - set between and , and is a function of .hence , we have : [ eqex2determine1 ] i(y_1;u_(d_1 ) ) i(y_1;u_c_1 ) moreover , is a cut - set between and , and is a function of .hence is a function of , which implies : [ eqex2determine2 ] i(y_2;u_(d_2 ) | y_1 ) i(y_2;u_c_2 | y_1 ) we distribute the source information over as follows : [ eqex2distr ] & i(y_1;u_c_1 ) = i(y_1;u_e_1 ) + i(y_1;u_e_2 | u_e_1 ) + & + i(y_1;u_e_3|u_\{e_1,e_2 } ) + &i(y_2;u_c_2 | y_1 ) = i(y_2;u_e_2 | y_1 ) + i(y_2;u_e_3 | y_1 , u_e_2 ) another feature about is that edge is connected to only one source , and thus is a function of . 
as shown below , this feature guarantees that the information distributed on an edge is completely contained in .first , for , it can be easily seen that : [ eqex2cap1 ] i(y_1;u_e_1 ) h(u_e_1 ) for , we have : [ eqex2cap2 ] & i(y_1 ; u_e_2 | u_e_1 ) + i(y_2 ; u_e_2 | y_1 ) + & i(y_1 ; u_e_2 | u_e_1 ) + i(y_2 ; u_e_2 | y_1 , u_e_1 ) + = & i(y_1,y_2 ; u_e_2 | u_e_1 ) h(u_e_2 ) where is due to the fact that is a function of , and thus , .similarly , for , we have : [ eqex2cap3 ] & i(y_1 ; u_e_3 | u_\{e_1,e_2 } ) + i(y_2 ; u_e_3 | y_1 , u_e_2 ) + & i(y_1 ; u_e_3 | u_\{e_1,e_2 } ) + i(y_2 ; u_e_3 | y_1 , u_\{e_1,e_2 } ) + = & i(y_1,y_2 ; u_e_3 | u_\{e_1,e_2 } ) h(u_e_3 ) where is again due to the fact that is a function of .next , we investigate .one important feature is that if overlaps with , .for example , overlaps with , and .this feature ensures that the information distributed over can be further distributed over the paths in . to see this , we construct the following routing scheme : & f^n , k_1(p ) = i(y_1;u_e_j| u_\{e_1,,e_j-1 } ) & p = p_1j,1j 3 + 0 & .+ & f^n , k_2(p ) = i(y_2;u_e_2 | y_1 ) & p = p_21 ; + i(y_2;u_e_3 | y_1 , u_e_2 ) & p = p_22 ; + 0 & . due to( [ eqex2cap1])-([eqex2cap3 ] ) , we can derive that for each , [ eqex2routingcap ] ^2_i=1 _ p_s_id_i , ep f^n , k_i(p ) h(u_e ) 1 + for , we have : & ^2_i=1 _ p_s_id_i , e_4p f^n , k_i(p ) + = & f^n , k_1(p_12 ) + f^n , k_2(p_21 ) h(u_e_2 ) 1 + likewise , we can prove that ( [ eqex2routingcap ] ) holds for all the other edges of the paths in .due to ( [ eqex2determine1])-([eqex2distr ] ) , the following inequalities hold for [ eqex2routingrate ] _p_s_id_i f^n , k_i(p ) ( 1 - ) ( r_i- ) + by ( [ eqex2routingcap ] ) , there exists a sub - sequence such that for all and , the sub - sequence approaches a finite limit . define a routing scheme : f_i(p ) = _ l f^n_l , k_l_i(p ) & p _i , i=1,2 ; + 0 & due to ( [ eqex2routingcap ] ) and ( [ eqex2routingrate ] ) , satisfies ( [ eqroutingcond1 ] ) and ( [ eqroutingcond2 ] ) .hence , , and .the network is routing - optimal . in this subsection , we present the definition of information - distributive networks . similarly to single unicast , for each unicast session ( ) , we choose a cut - set between and such that , and a set of paths from to .the collection of these cut - sets , denoted by , is called a _ cut - set sequence _ , and the collection of these path - sets , denoted by ,is called a _ path - set sequence_. for instance , in example [ ex2 ] , we choose a cut - set sequence , where is a cut - set between and , and is a cut - set between and , and a path - set sequence , where is a path - set from to , and a path - set from to .moreover , we arrange the edges in each cut - set in in some ordering .for instance , in example [ ex2 ] , we arrange the edges in in the ordering , and the edges in in the ordering .each such ordering is called a permutation of the edges in the corresponding cut - set .the collection of these permutations , denoted , is called a _ permutation sequence_. for , let denote the subset of edges before in . for , define , and the largest index of the source to which is connected .the first feature is described below .next , we formalize the three features we have shown in example [ ex2 ] . the first feature is described below .[ defcumulative ] given a cut - set sequence , if for all , each path from to must pass through an edge in , we say that is _cumulative_. 
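as an aside , definition [ defcumulative ] admits a direct mechanical check : for each i , deleting the union of the first i cut - sets must disconnect the i - th source from its sink , following the pattern of example [ ex2 ] . in the sketch below the network , the sessions and the cut - sets are hypothetical and chosen only to exercise the check .

```python
# Check whether a cut-set sequence (C_1, ..., C_K) is "cumulative":
# every s_i -> d_i path must pass through an edge of C_1 U ... U C_i,
# i.e. deleting that union of edges disconnects s_i from d_i.
import networkx as nx

def is_cumulative(G, sessions, cutsets):
    """sessions: list of (s_i, d_i); cutsets: list of edge lists C_i (same order)."""
    union = set()
    for (s, d), C in zip(sessions, cutsets):
        union |= set(C)
        H = G.copy()
        H.remove_edges_from(union)
        if nx.has_path(H, s, d):          # some s_i -> d_i path avoids C_1 U ... U C_i
            return False
    return True

# hypothetical two-session example
G = nx.DiGraph([("s1", "v"), ("s2", "v"), ("v", "w"), ("w", "d1"), ("w", "d2"),
                ("s1", "d1")])
sessions = [("s1", "d1"), ("s2", "d2")]
cutsets  = [[("s1", "d1"), ("v", "w")], [("v", "w")]]
print(is_cumulative(G, sessions, cutsets))   # True for this toy choice
```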
this feature guarantees that the source information contained in the incoming messages at each sink can be completely obtained from .[ lemmadetermine ] consider a network code as defined in definition [ defnc ] .if is a cumulative cut - set sequence , then for each , is a function of , and the following inequality holds : [ eqdetermine ] i(y_i ; u_(d_i ) | y_1:i-1 ) i(y_i ; u_c_i | y_1:i-1 ) see appendix [ appinfodistr ] .given a cumulative cut - set sequence and a permutation sequence for , we can distribute the source information over the edges in as follows : [ eqinfodistr ] i(y_i ; u_c_i | y_1:i-1 ) = _ ec_i i(y_i ; u_e | y_1:i-1 , u_t_i(e ) ) the second feature is presented below . without loss of generality , let , where .[ defdistributive ] given a cut - set sequence , we say that it is _ distributive _ if there exists a permutation sequence for such that for each , the following conditions are satisfied : for all , & ( e ) n_k et_n_j+1(e ) - t_n_j(e ) [ eqdistr1 ] + & ( e ) n_j+1 - 1 et_n_j(e ) - t_n_j+1(e ) [ eqdistr2 ] as shown in example [ ex2 ] , let , and . for , , , and thus , ( [ eqdistr1 ] ) is trivially satisfied ; , , and ( [ eqdistr2 ] ) is satisfied .similarly , we can verify other edges .hence , is distributive .the above two features ensure that the information from multiple unicast sessions that is distributed on an edge can be completely obtained from .[ lemmainfodistr ] consider a network code as defined in definition [ defnc ] .given a cumulative cut - set sequence , if is distributive , for each , the following inequality holds : [ eqtotaldistr ] _1ik , ec_i i(y_i ; u_e | y_1:i-1 , u_t_i(e ) ) h(u_e ) see appendix [ appinfodistr ] .the third feature is presented below .[ defextendable ] given a path - set sequence for , we say that is _ extendable _ , if for all , and such that overlaps with , .as shown in example [ ex2 ] , let . clearly , we have , , and , .thus , is extendable .[ definfodistributive ] a network with multiple unicast sessions is said to be _ information - distributive _ , if there exist a cumulative and distributive cut - set sequence , and an extendable path - set sequence for in the network .as shown in the next theorem , the three features together guarantee that the network is routing - optimal .[ thinfodistr ] if a network is information - distributive , it is routing - optimal .see appendix [ appinfodistr ] .consider the network shown in fig .[ figex0threeunicast ] .define the following cut - sets : & c_1=\{(s_,v_1 ) , ( v_2,v_3 ) , ( v_4,v_5 ) } + & c_2=\{(v_2,v_3 ) , ( v_4,v_5 ) } + & c_3=\{(v_6,v_7 ) , ( s_3,d_3 ) } define .define the following paths : & p_11=\{(s_1,v_1 ) , ( v_1,d_1 ) } + & p_12=\{(s_1,v_2 ) , ( v_2,v_3 ) , ( v_3,d_1 ) } + & p_13=\{(s_1,v_4 ) , ( v_4,v_5 ) , ( v_5,d_1 ) } + & p_21=\{(s_2,v_2 ) , ( v_2,v_3 ) , ( v_3,d_2 ) } + & p_31=\{(s_3,v_6 ) , ( v_6,v_7 ) , ( v_7,d_3 ) } + & p_33=\{(s_3,d_3 ) } define ,, .it can be verified that is cumulative and distributive , and is extendable .the network is information - distributive .we consider a multiple - unicast version of index coding problem . in this problem , there are terminals , a broadcast station , and source messages , all available at .all s are mutually independent random variables uniformly distributed over alphabet .each terminal requires , and has acquired a subset of source messages such that . 
uses an encoding function to encode the source messages , and broadcasts the encoded message to the terminals through an error - free broadcast channel .each uses a decoding function to decode by using the received message and the messages in .the encoding function and the decoding functions s are collectively called an index code , and is the length of this index code .the minimum length of an index code is denoted by .this index coding problem can be cast to a multiple - unicast network coding problem over a network , where , .the unicast sessions are .it can be verified that there exists an index code of length , if and only if is achievable by network coding in .let , .define and , where . since each contains only one edge, is distributive .meanwhile , since all s overlap at , is extendable . the following theorem states that if the optimal solution to the index coding problem is to let the broadcast station transmit raw packet , _i.e. , _ no coding is needed , then the corresponding multiple - unicast network is information - distributive , and the converse is also true .[ thindexcode ] if and only if is cumulative , _ is information - distributive .see appendix [ appproofex ] . ][ exindexcode ] in fig .[ figindexcode ] , we show an example of , which corresponds to an index coding problem defined by : , , , and . clearly , is cumulative , and thus . in this example , we consider the network coding problem for a single - unicast session over a network , where each edge is associated with a delay , and each node has a memory to hold received data .given a directed path , let denote its delay . for , let denote the minimum delay of directed paths from to .the data transmission in the network proceeds in time slots .the messages transmitted from is represented by a sequence )^k_{t=0} ] is a uniformly distributed random variable , and represents the message transmitted from at time slot .all ] must be received by within time slots . otherwise , it is regarded as useless , and is discarded .this problem was first proposed by .recently , it has been shown that network coding can improve throughput by utilizing over - delayed information .this problem can be cast to an equivalent network coding problem for multiple unicast sessions .we construct a time - extended graph as follows : the node set is : 0 \le t \le k+\tau\} ] to ; for , and , we add edges from ] , where is the amount of memory available at ; for each , we add edges from to ] to , where is a sufficiently large integer .thus , the original single unicast session is cast to unicast sessions over .let ] .it can be seen that each ] .given a subset of edges , define =\{(u[k+t],v[l+t ] ) : ( u[k],v[l])\in u\} ] be a cut - set between and such that for , and a set of edge disjoint paths from to such that \in p_j ] .we consider the cut - set sequence )^k_{t=0} ] .[ lemmadelaycumulative ] is cumulative .see appendix [ appproofex ] . given , a _recurrent _ sequence of is a sequence consisting of all the edges in that are time - shifted versions of the same edge . ] such that for each recurrent sequence )^k_{j=1} ] , the following conditions are satisfied : 1 . for each , if \in c[0] ] , and \notin c[0] ] lies before ] , then . is said to be _ extendable _ if for all and , e[l]\in \tilde{e} ] and \in p_j ] is distributive , and is extendable , is information - distributive , and thus is routing - optimal .see appendix [ appproofex ] .[ exncdelay ] in fig .[ figex3net ] , we show an example of single unicast with delay constraint . 
in fig .[ figex3routingdomain ] , we show the routing domain ] , and , where are marked as black dashed lines in fig .[ figex3routingdomain ] .it can be verified that ] . clearly , ),(s[i],s[i+1]),\cdots,(s[j-1],s[j])\ } \cup p_1 ] is a cut - set between ] , must pass through an edge \in c[i] ] .this means that is cumulative .let =(e_i[t_i+t])^k_{i=1} ] for .we will prove that if ] .let)^k_{i=1} ] , in which all the edges are time - shifted versions of . without loss of generality ,let . next ,consider \in c[k] ] denote the subset of cut - sets which contain ] and ] that lies immediately before and after ] is the last cut - set in ) ] be an edge that lies before ] , but does nt appear before ] .this means that \notin c[0] ] lies before ] , but does nt appear before ] .this implies that \notin c[0]$ ] .thus , the following equation holds : x. yan , r. w. yeung , and z. zhang , `` the capacity region for multi - source multi - sink network coding , '' in _ the proceedings of ieee international symposium on information theory _ ,nice , france , june 2007 , pp .116120 .m. kodialam and t. lakshman , `` on allocating capacity in networks with path length constrained routing , '' in _ the proceedings of allerton conference on communication , control , and computing _ , monticello , il , u.s.a . , sept .2002 . c. wang and m. chen , `` sending perishable information : coding improves delay - constrained throughput even for single unicast , '' in _ the proceedings of ieee international symposium on information theory _ , hawaii , u.s.a . , june 2014 .
in this paper , we consider the problem of multiple unicast sessions over a directed acyclic graph . it is well known that linear network coding is insufficient for achieving the capacity region in the general case . however , there exist networks for which routing is sufficient to achieve the whole rate region , and we refer to them as _ routing - optimal networks_. we identify a class of routing - optimal networks , which we refer to as _ information - distributive networks _ , defined by three topological features . due to these features , for each rate vector achieved by network coding , there is always a routing scheme that achieves the same rate vector , and the traffic transmitted through the network is exactly the information transmitted over the cut - sets between the sources and the sinks in the corresponding network coding scheme . we present examples of information - distributive networks , including examples from ( 1 ) index coding and ( 2 ) a single unicast session with a hard deadline constraint .
recently it has been found that cumulant moments of the multiplicity distributions in both annihilations and hadronic collisions show prominent oscillatory behavior when plotted as a function of their order . in ref. behavior is attributed to the qcd - type of branching processes apparently taking place in those reactions . however , in refs. we have shown that the same behavior of the moments emerges essentially from the modified negative binomial distribution ( mnbd ) ( which actually describes the data of _ negatively charged particles _ much better than the negative binomial distribution ( nbd ) ) .this distribution can be derived from the pure birth ( pb ) process with an initial condition given by the binomial distribution .+ in this paper , first of all , we analyze various experimental data of _ charged particles _ - including - collisions by the mnbd and the nbd , to elucidate why the mnbd describes the data better than the latter in annihilations .second , we derive the kno scaling function of the mnbd both by the straightforward method ( i.e. , proceeding to the limit of large multiplicities and large average multiplicities while keeping the scaling variable finite and fixed ) and by the poisson transform . using this kno scaling functionwe analyze the observed multiplicity distributions in annihilations , - collisions and - collisions . finally the concluding remarks are given .the generalized mnbd is discussed in appendix .it is known that the mnbd is obtained from the following equation governing the pb stochastic process , with an initial condition here is a birth rate of particles .the parameter describes the evolution of the branching processes .the generating function ( gf ) of the distribution at is given by ^n,\end{aligned}\ ] ] the gf for the mnbd at is given as refs. , ^n,\label{gene - mnbd}\end{aligned}\ ] ] where and ( an integer ) corresponds to the number of possible excited hadrons at the initial stage .the term is evaluated by .the mnbd is given by the gf , eq.(4 ) , as ^n , \nonumber \\p(n ) & = & \frac{1}{n!}\frac{\partial^n \pi(u)}{\partial u^n}=\frac{1}{n!}\left(\frac{r_{1}}{r_{2}}\right)^n \left(\frac{r_2}{1+r_2}\right)^n \sum_{j=1}^{n } { } _ { n}c_{j } \frac{\gamma ( n+j)}{\gamma ( j ) } \left ( \frac{r_2-r_1}{r_1 } \right)^j \frac{1}{(1+r_2)^j},\nonumber \\ \label{pn - mnbd}\end{aligned}\ ] ] equation ( [ pn - mnbd ] ) is applied to various experimental data- . for the sake of comparison with results obtained by eq.([pn - mnbd ] ) , we also use the negative binomial distribution ( nbd ) given by where .the gf of the nbd is given by ^{-k}. \end{aligned}\ ] ] the gf , eq.(4 ) , of the mnbd reduces to that of the nbd , if .both the mnbd and the nbd are applied to analyses of the experimental data in annihilations- , - collisions- and - collisions . since there are maximum values of multiplicity observed , , we introduce a possible bound for and truncate the multiplicity distribution at and renormalize it as follows , in analyses in ref. , authors used the following treatments ; 1 ) in the first case , the data is used . and are free . only a constraint , , is used .2 ) in the second case , , and are free . is used . on the other hand we use and as inputs .only is free . 
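as a numerical aside , the nbd branch of the comparison and the truncation / renormalization described above are easy to code directly . the sketch below uses the standard nbd parametrization in terms of the mean multiplicity and the parameter k , with purely illustrative parameter values ; the mnbd sum of eq.([pn - mnbd ] ) is omitted here .

```python
# Standard negative binomial distribution P(n) with parameters <n> and k,
# plus the truncation/renormalization used in the text: P(n) is cut at n_max
# and renormalized so that the truncated probabilities sum to one.
import numpy as np
from scipy.special import gammaln

def nbd(n, nbar, k):
    n = np.asarray(n, dtype=float)
    logp = (gammaln(n + k) - gammaln(k) - gammaln(n + 1)
            + n * np.log(nbar / k) - (n + k) * np.log1p(nbar / k))
    return np.exp(logp)

def truncated(pmf, n_max):
    p = pmf(np.arange(n_max + 1))
    return p / p.sum()                      # renormalized distribution on 0..n_max

# illustrative values only (not fitted to any data set discussed in the paper)
nbar, k, n_max = 20.0, 10.0, 60
p_trunc = truncated(lambda n: nbd(n, nbar, k), n_max)
print(p_trunc.sum(), (np.arange(n_max + 1) * p_trunc).sum())   # 1.0 and approximately <n>
```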
our results by means of eqs.([pn - mnbd ] ) and( [ pn - nbd ] ) are given in table i and some typical results of fitting are also given in figs.1(a ) and ( b ) .it is found that in annihilations , the minimum values of s obtained by fitting eq.([pn - mnbd ] ) to the data are much smaller than those of the nbd .this fact corresponds to our previous work in which we have found that the moments obtained from the mnbd are much better than those of the nbd in annihilation[2,3 ] : however in - collisions they are almost equivalent .because the data for - collisions by h1 collaborations have recently reported , we calculate moments by the mnbd and the nbd .two results of comparisons are shown in figs.2(a ) and ( b ) for energies 115 - 150 gev and 150 - 185 gev , respectively .it is found that the results of the mnbd and the nbd are almost equivalent as observed in - collisions .in order to know stochastic property of the mnbd , we consider the kno scaling function of eq.([pn - mnbd ] ) . traditionally the kno scaling function is derived from the multiplicity distribution multiplied by the corresponding mean multiplicity by going to the large multiplicity and large mean multiplicity limit while keeping their ratio , fixed . in our case , starting from eq.([pn - mnbd ] )we arrive at the following function the parameters and in eq.([eq:5 ] ) are given by which are slightly different from , in eq.([pn - mnbd ] ) , because .it should be noticed that the normalization of eq.([eq:5 ] ) differs from the unity , where the second term corresponds to the term in eq.([pn - mnbd ] ) .+ the kno scaling function is related to the multiplicity distribution function by the poisson transform + ( 400,30)(0,0 ) ( 280,10) .( 120,10) ( 160,14)(1,0)110 ( 270,10)(-1,0)110 ( 180,18)inverse poisson trans .( 190,2)poisson trans . in this approachwe obtain that where is the generating function of .these equations hold also at ( the stationary function ) .+ using the generating function eq.([gene - mnbd ] ) , the kno scaling function is given by the following inverse laplace transform then we arrive at the kno scaling function for the mnbd, where is the associated laguerre s polynomial . in eq.([mnbd - kno ] ) the first term corresponds to the constant term in eq.([eq:10 ] ) ( , or ) . in the numerical calculations the first term is very small because where .therefore eqs.(7 ) and ( 13 ) are almost equivalent in numerical analyses .the generalized mnbd is discussed in appendix .the moment of the mnbd is given by we also analyze the data by the gamma function which is the kno scaling function obtained from eq.([pn - nbd ] ) ; we investigate the applicability of the mnbd as presented in its kno form ( i.e. , using eq.([mnbd - kno ] ) ) to the description of the observed multiplicity distributions in annihilations- , - collisions and - collisions . in actual analysisthe values of the and are determined by using the experimental data of and as menstioned before .table ii shows obtained parameters in eq.([mnbd - kno ] ) and minimum values of s of fitting to the experimental data .see figs.3(a ) and ( b ) .we show here some typical results of fittings .the results using eq.([kno - nbd ] ) are also shown there . as is seen from table ii , the minimum values of s for the mnbd fitting are a little smaller than those for the nbd fitting in low energies annihilations and - collisions ( below 50 gev ) . for - collisionsthe minimum values of s are also smaller than those of the nbd up to energy =220 gev. 
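before turning to the higher - energy comparison , a short numerical aside on the poisson transform used above : transforming the gamma - type kno function of eq.([kno - nbd ] ) must reproduce the nbd itself , which provides a convenient consistency check of the transform pair . the sketch below uses illustrative values of the mean multiplicity and k and straightforward quadrature .

```python
# Numerical check of the Poisson transform: the gamma KNO function of the NBD,
# psi(z) = k^k z^(k-1) exp(-k z) / Gamma(k), pushed through
# P(n) = int_0^inf exp(-s) s^n / n! * psi(s/<n>) / <n> ds, must give back the NBD.
import math
from scipy.integrate import quad
from scipy.special import gammaln

nbar, k = 15.0, 8.0                         # illustrative parameters only

def psi(z):                                 # gamma-type KNO scaling function
    return math.exp(k * math.log(k) + (k - 1) * math.log(z) - k * z - gammaln(k))

def p_from_kno(n):                          # Poisson transform of psi
    def integrand(s):
        if s <= 0.0:
            return 0.0
        return math.exp(-s + n * math.log(s) - gammaln(n + 1)) * psi(s / nbar) / nbar
    return quad(integrand, 0.0, math.inf)[0]

def p_nbd(n):                               # NBD with the same <n> and k
    return math.exp(gammaln(n + k) - gammaln(k) - gammaln(n + 1)
                    + n * math.log(nbar / k) - (n + k) * math.log(1.0 + nbar / k))

for n in (0, 5, 15, 40):
    print(n, p_from_kno(n), p_nbd(n))       # the two columns should agree
```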
on the other hand , in high - energy ( lep and sps energies for the annihilations and - collisions , respectively ) , values of the minimum s by the mnbd fitting are almost equivalent to those of the nbd fitting .through analyses of the data by discrete distributions , we find that the mnbd describes the data in annihilation much better than the nbd .see table i. we find that except for the data by hrs and tasso collabs . on the other hand , in for charged particlesis obtained , provided that their treatments for parameters are used . in other words, we have to pay our attention to the determination of parameters contained in the formulas .second the moments of - collisions by h1 collab .are analyzed by two distributions .the results are almost the same as - collisions .the s in table i are slightly better than those in refs. , when our treatment for parameters is used .third , the kno scaling function of the mnbd is obtained .as far as our knowledge for the kno scaling functions is correct , we have not known the kno scaling function expressed by the laguerre polynomials .it is applied to the analyses of the observed multiplicity distribution in annihilations , - collisions , and - collisions .the data are also analyzed by the gamma distribution . as is seen from table ii, the mnbd describes the experimental data better ( for annihilations ) than or as well as ( for - and - collisions ) the nbd .what we have found in the present analyses suggests us the following : in annihilation , the stochastic pure birth process with the binomial distribution at =0 is useful , becasuse the finite number ( corresponding to ) of the excited hadrons ( or the pair creation of quarks ) is probably expected . in other words , the binomial distribution asthe initial condition is more realistic than other condition ( ) in the stochastic approach .to know stochastic property of the mnbd , we have considered the generalized mnbd . in its concrete application , we have known that the discrete distribution of eq.(a5 ) can not explain the oscillating behaviors of the cumulant moment observed in annihilations and in hadronic collisions than the mnbd , eq.([pn - mnbd ] ) .for example , s values for discrete analysis of data at =900 gev are as follows : ( )=(1 , 2.22 , 65.2 ) , ( 2 , 1.33 , 63.0 ) and ( 3 , 0.06 , 60.8 ) .these results suggest us that it is difficult to determine the best combination of parameters , because of three parameters .it seems to be necessary more carefulness and skillfulness than the use of the mnbd .thus the generalized mnbd is given in appendix .however , we are expecting that eq.(a5 ) will become useful in analyses of data of some reactions at higher energies , since it has both stochastic characteristics of the mnbd and the nbd . *acknowledgments : * one of the authors ( m. b. ) is partially supported by the grant - in aid for scientific research from the ministry of education , science , sports and culture ( no .06640383 ) and ( no .09440103 ) .t.o . would like to thank those who supported him at the department of physics of shinshu university . n. s. thanks for the financial support by matsusho gakuen junior college .we are grateful to g. 
wilk for his reading the manuscript .* appendix : * + in order to know the stochastic structure of the mnbd in detail , we discuss the following point : the solution obtained from the branching equation of the pure birth process with the immigration under the initial condition of the binomial distribution is one of the extensions of both the mnbd and the nbd . in this case the stochastic equation ( [ pb - eq ] ) should be changed as follows : where is an immigration rate .its generating function is given as ^{n}[1-\tilde{r}_2(u-1)]^{-k - n}. \eqno{(a2)}\ ] ] where } ~~\right \ } \eqno{(a3)}\ ] ] and .the mnbd is obtained by neglecting the power in .the generating function of the nbd is given by neglecting the power in eq.(a2 ) .the physical meaning of the immigration term may be interpreted as a possible contribution from constituent quarks and gluons . using eq.([gene - mnbd ] ) , we have directly the kno scaling function for eq.(a2 ) + here the term in the parenthesis in the squared root of eq.(a3 ) is neglected because .it also noticed that all factors s in eq.(a5 ) are canceled out .this function becomes the kno scaling function of the mnbd eq.([mnbd - kno ] ) when , and reduces to the gamma distribution eq.([kno - nbd ] ) if .cc i. m. dremin and v. a. nechitailo , jetp lett .* 58 * ( 1993 ) 881 ; i. m. dremin and r. hwa , phys . rev . * d49 * ( 1994 ) 5805 ; i. m. dremin , phys . lett .* b341 * ( 1994 ) 95 .n. suzuki , m. biyajima and n. nakajima , phys . rev .* d53 * ( 1996 ) 3582 and * d54 * ( 1996 ) 3653 .n. nakajima , m. biyajima and n. suzuki , phys . rev .* d54 * ( 1996 ) 4333 . p. v. chliapnikov and o. g. tchikilev , phys . lett .* b242 * ( 1990 ) 275 .p. v. chliapnikov , o. g. tchikilev , and v. a. uvarov , phys . lett .* b352 * ( 1995 ) 461 .o. g. tchikilev , phys .* b382 * ( 1996 ) 296 , _ ibid _ * 388*(1996),848 .n. suzuki , m. biyajima and g. wilk , phys .b268 * ( 1991 ) 447 .m. biyajima , n. suzuki , g. wilk and z. wodarczyk .b386 * ( 1996 ) 279 .hrs collab ., m. derrick et al ., phys . rev .* d34 * ( 1986 ) 3304 .tasso collab ., w. braunschweig et al . , z. phys .* c45 * ( 1989 ) 193 .amy collab ., h. w. zheng et al . ,phys . rev .* d42 * ( 1990 ) 737 .aleph collab ., d. decamp et al .* b273 * ( 1991 ) 181 .aleph collab . , d. buskulic et al ., z. phys . *c69 * ( 1995 ) 15 .delphi collab . , p. abreu et al ., z. phys . *c52 * ( 1991 ) 271 .opal collab .d. acton et al . , z. phys . *c53 * ( 1992 ) 539 .opal collab . , p. d. acton et al ., cern - ppe/96 - 47 .a. firestone et al . , phys . rev .* d*10(1974 ) 2080 .e743 collab .r. ammer et al ., phys .lett.*b*178(1986 ) 124 .isr collab . ,a. breakstone et al . , phys. rev . * d30 * ( 1984 ) 528 .ua5 collab ., g. j. alner et al ., phys . rep .* 154 * ( 1987 ) 247 ; ua5 collab . , r. e. ansorge et al ., z. phys . *c43 * ( 1989 ) 357 .h1 collab ., s. aid et al ., desy preprint 96 - 160 . m. biyajima , prog . theor. phys . * 69*(1983 ) 966 ; _ ibid _ * 70*(1983 ) 1468a .a. giovannini , s. lupia and r. ugoccini , phys . lett . *b374 * , ( 1996 ) 231 .f. becattini , a. giovannini and s. lupia , z. phys . * c72 * ( 1996 ) , 491 . s. saleh , ` photoelectron statistics ' , ( springer - verlag , berlin , 1998 ) .r. szwed , g. wrochna , and a.k .wrblewski , mod .a6 * , ( 1991 ) , 245 .* table i * + the minimum values of s for fitting to the experimental data using the mnbd and the nbd , respectively . the data of and are used .we also show giving the minimum values of s in eq.(6 ) . 
in analysis of the data by ua5 collab ( =900 gev ), we find a minimum at =3 giving at small integer n. we discard this and adopt giving for all . in analysis of the data by delphi collab . and are calculated from the data .+ the symbole denotes the minimum value of s between 2 columns . s in ref. are obtained by means of the statistical errors only .+ + * table ii * + the same as table i but for fitting of the kno scaling function obtained from the mnbd and the nbd , respectively .the data of and are used . in analysis of the data by opal ( =133 gev ) and ua5 ( = 900 gev ) collabs , we find minimum s at and giving in small , respectively .we discard these and adopt and giving for all for the data by opal and ua5 collabs , respectively .+ the symbol denotes the minimum value of s between 2 columns . *1 ] * results obtained by discrete distributions .+ ( a ) the result of fitting by eqs.(5 ) and ( 7 ) to the data observed by tasso collab . + ( =34.8 gev . ) + ( b ) the same as ( a ) but for the data observed by opal collab . ( =91.2 gev . ) + * [ fig.2(a ) and ( b ) ] * analyses of moment in - collisions by h1 collaboration for energies ( a ) =115 - 150 gev and ( b ) =150 - 185 gev . + * [ fig .3(a ) and ( b ) ] * results obtained by the kno scaling functions .+ the same as figs.1(a ) and ( b ) but for fittings eqs.([mnbd - kno ] ) and ( [ kno - nbd ] ) .
we analyze various data of multiplicity distributions by means of the modified negative binomial distribution ( mnbd ) and its kno scaling function , since this mnbd explains the oscillating behavior of the cumulant moment observed in annihilations , - collisions and - collisions . in the present analyses , we find that the mnbd ( discrete distributions ) describes the data of charged particles in annihilations much better than the negative binomial distribution ( nbd ) . to investigate the stochastic property of the mnbd , we derive the kno scaling function from the discrete distribution by using a straightforward method and the poisson transform . it is a new kno function expressed in terms of the laguerre polynomials . in analyses of the data by using the kno scaling function , we find that the mnbd describes the data better than the gamma function . thus , it can be said that the mnbd is as useful a formula as the nbd .
in recent years , financial markets have been studied using statistical physics approaches and some stylized facts have been observed including ( i ) the probability density function ( pdf ) of the logarithmic stock price changes ( log - returns ) has a power - law tail , ( ii ) the absolute value of log - returns are long - term power - law correlated .statistical properties of price fluctuations are important to understand market dynamics , and are related to practical applications . in particular , the volatility of stocks attracted much attention because it is a key input of option pricing models such as the black - scholes .yamasaki _ et al . _ investigated the return intervals between volatility above a certain threshold in the us stock and foreign exchange markets .they analyzed _ daily _ data and found scaling and memory effects in return intervals . studied the return intervals in _ intraday _ data of the us market , and found similar scaling and memory effects . analyzed the memory in volatility return intervals . in this manuscript, we further test the generality of the above findings in the japanese stock market data where we include both _daily _ and _ intraday _ data sets .we find that also in this case the pdf of return intervals mainly depends on the scaled parameter ; the ratio between the return intervals and their mean .memory effects also exist in the return intervals sequences .in addition , we study scaling and memory effects considering different type of market dynamic periods .the japanese market in recent decades ( 1977 - 2004 ) can be divied into two periods , the inflationary ( before 1989 ) and the deflationary ( after 1989 ) .kaizoji showed that some statistical properties of the returns are different in the two periods .the absolute return distribution in the inflationary period behave as a _ power - law _ distribution , while the return distribution in the deflationary period obeys an _exponential law_. here , we find that scaling and memory effects of the return intervals show similar features in both periods .in this section , we analyze the statistical properties of the japanese stock market using _ daily _ and _ intraday _ return intervals .we investigate the _ daily _ data of three representative companies , nippon steel , sony and toyota motor listed on the tokyo stock exchange ( tse ) for the 28-year period from 1977 to 2004 , a total of 7288 trading days . 
also , we study the _ intraday _ data of 1817 companies listed on the tse from january 1997 to december 1997 .the sampling time is 1 minute and the data size is about 9 million .the logarithmic return is written as where is the stock price at time , and the normalized volatility is defined as : where means time average .we pick every _ event _ of volatility above a certain thresold .the series of the time intervals between those events , depending on the threshold , , are generated .we investigate the pdf to better understand its behavior and how it depends on the threshold ( the left panels of fig .[ fig : pdf_daily ] ) .the scaled pdf , , as a function of the scaled return intervals is shown in the right panels of fig .[ fig : pdf_daily ] .previous study showed the distributions of are different with different threshold , and we find the same result .however , when plotting as a function of , we obtain an approximate collapse onto a single curve .the collapse means that the distributions can be well approximated by the scaling relation the scaling function of eq .( [ eq : p ] ) does not depend directly on the threshold but only through .therefore , if is known for one value of , the distribution for other values can be predicted using eq .( [ eq : p ] ) . figs .[ fig : pdf_daily]g and [ fig : pdf_daily]h show that the same features , distribution and scaling of return intervals exist for _ intraday _ data after removing the intraday trends .the size of the intraday data set is basically larger than that of daily data , and consists of 1817 companies .therefore , we are able to extend our study to larger values of and get better statistics ( less scattering ) compared to those in fig . [ fig : pdf_daily ] ( a)-(f ) .( color online ) distribution and scaling of return intervals using for ( a ) and ( b ) nippon steel , ( c ) and ( d ) sony , ( e ) and ( f ) toyota motor , and ( g ) and ( h ) mixture of 1817 japanese companies .daily data is used for ( a)-(f ) , and intraday for ( g ) and ( h ) .the sampling time for intraday data is 1 min .symbols represent different threshold varying from 1 to 2 ( for ( a)-(f ) ) and 1 to 5 ( for ( g ) and ( h ) ) , respectively.,scaledwidth=100.0% ] previous similar studies on the us stock and foreign exchange market suggested that there might be a universal scaling function for the return time intervals of different financial markets .we observe the same result also for both daily and intraday data of the japanese market , and it raises the possibility that the scaling function is universal .we also test whether the sequence of the return intervals is fully characterized by the distribution .if the sequence of are uncorrelated , the return intervals are independent of each other and chosen from the probability distribution .however if they are correlated , the memory also affects the order in the time sequence . 
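as a computational aside , the construction of the return - interval series and of the scaled pdf ( the product of the mean interval and the pdf , plotted against the interval divided by its mean ) takes only a few lines . the sketch below uses white - noise returns as a stand - in for the tse data , so it reproduces only the memoryless poisson limit rather than the empirical collapse , and the volatility normalization is schematic .

```python
# Return intervals between events with normalized volatility above a threshold q,
# and the scaled PDF  <tau> * P_q(tau)  as a function of  tau / <tau>.
import numpy as np

rng = np.random.default_rng(0)
ret = rng.standard_normal(200_000)               # stand-in for log-returns r(t)
vol = np.abs(ret)
vol = (vol - vol.mean()) / vol.std()             # schematic volatility normalization

def return_intervals(v, q):
    events = np.flatnonzero(v > q)               # times of volatility above the threshold
    return np.diff(events)

for q in (1.0, 1.5, 2.0):
    tau = return_intervals(vol, q)
    mean = tau.mean()
    hist, edges = np.histogram(tau, bins=40, density=True)
    x = 0.5 * (edges[1:] + edges[:-1]) / mean    # scaled interval tau/<tau>
    y = hist * mean                              # scaled PDF <tau> P_q(tau)
    print(f"q={q}: <tau>={mean:.1f}, first scaled bins:", np.round(y[:3], 3))
```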
represents the conditional pdf which is the probability of finding a return interval following a return interval .if memory does not exist , we expect that the conditional pdf will be independent of and identical to .we study for a range of values .the full data set of is divided into eight subsets with return intervals in increasing order .we show for being in the lowest subset ( full symbols ) and , in the largest subset ( open symbols ) in fig .[ fig : memory_daily ] .the results show that for in the lowest subset , the probability of finding below is enhanced compared to , while the opposite occurs for in the largest subset .the pdfs , , for all thresholds collapse onto a single scaling function for each .this suggests that does not characterize the sequence of and memory exists in the sequence .( color online ) scaled conditional distribution as a funtion of using daily data for ( a ) nippon steel , ( b ) sony , ( c ) toyota motor , and ( d ) mixture of 1817 companies .symbols represent different threshold .,scaledwidth=100.0% ] the memory effects are also observed in the mean conditional return interval which is the first moment of shown in fig .[ fig : mean ] , where we plot as a function of .it is seen clearly that large ( or small ) tend to follow large ( or small ) .note that the shuffled data ( open symbols ) exhibit a flat shape , which means is independent on .the above results show that the return intervals strongly depend on the previous return interval .( color online ) scaled mean conditional return interval as a function of for ( a ) nippon steel , ( b ) sony , ( c ) toyota motor , and ( d ) mixture of 1817 companies .open symbols correspond to shuffled data .the lines serve only as a guide to the eyes.,scaledwidth=100.0% ] we also analyze clusters of short and long return intervals in order to investigate clustering phenomena , which represent further and longer term correlations compared to and . the sequence of return intervals is devided into two bins by the median of the entire database .the two bins consist of intervals which are above " and below " the median respectively .a cluster is formed by consecutive return intervals that are above " or below " the median .[ fig : cum_histo ] shows the cumulative distribution of clusters of size for three japanese companies and mixture of 1817 companies . 
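the memory diagnostics just described -- the conditional statistics of an interval given the size of the preceding interval , and the sizes of clusters of consecutive intervals above or below the median -- reduce to simple binning and run - length operations . the sketch below runs on a memoryless synthetic interval series as a stand - in for the data ; for the tse series the conditional mean grows with the preceding interval and the cluster - size distribution develops long tails .

```python
# Memory diagnostics for a return-interval sequence tau_1, tau_2, ...:
# (a) conditional mean <tau | tau0> with tau0 binned into octiles,
# (b) sizes of clusters of consecutive intervals above / below the median.
import numpy as np

def conditional_mean(tau, n_bins=8):
    prev, nxt = tau[:-1], tau[1:]
    order = np.argsort(prev)
    bins = np.array_split(order, n_bins)           # octiles of the preceding interval
    return np.array([nxt[b].mean() for b in bins]) / tau.mean()

def cluster_sizes(tau):
    above = tau > np.median(tau)
    sizes, run = [], 1
    for prev_flag, flag in zip(above[:-1], above[1:]):
        if flag == prev_flag:
            run += 1
        else:
            sizes.append(run)
            run = 1
    sizes.append(run)
    return np.array(sizes)

rng = np.random.default_rng(1)
tau = rng.geometric(0.05, size=50_000).astype(float)    # memoryless stand-in series

print("conditional means:", np.round(conditional_mean(tau), 2))
print("shuffled control :", np.round(conditional_mean(rng.permutation(tau)), 2))
print("largest cluster  :", cluster_sizes(tau).max())
# for the memoryless stand-in the conditional means are flat and clusters short;
# for real data the conditional mean increases with the preceding interval and
# the cluster-size distribution acquires the long tails of fig. [fig:cum_histo].
```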
both above " and below " clusters have long tails compared to the surrogate volatility shuffled case .( color online ) cumulative distribution of size for return intervals clusters for ( a ) nippon steel , ( b ) sony , ( c ) toyota motor , and ( d ) mixture of 1817 companies .the distributions consist of consecutive return intervals that are all above ( closed symbols ) or below ( open symbols ) the median of all the interval records .the straight lines show the shuffled volatility case ( q=1 , above the median ) where memory is removed.,scaledwidth=100.0% ]in this section , we investigate the nikkei 225 index data to answer one question : even the return distributions have different features , do the return time intervals show similar features ?the nikkei 225 index reached the highest position on the last trading day of 1989 , but declined from the first trading day of 1990 .it has dropped about 63 percent from 1990 to august of 1992 .this is a famous japanese market bubble and crash .therefore , the japanese market between 1977 and 2004 can be divided into two parts : the period of inflation , before december 1989 , and the period of deflation , after january of 1990 ( fig .[ fig : nikkei ] ) .kaizoji showed that the return statistics of those two periods are clearly different .the return distribution in the inflationary period is approximated by an asymptotic power law , while the return distribution in the deflationary period seem to obey an exponential law .the time series of nikkei 225 ( a ) index and ( b ) log return from january 1984 to december 2004.,scaledwidth=100.0% ] fig .[ fig : inde](a ) represents the scaled pdf , , as a function of the scaled return intervals in the inflationary period ( full symbols ) and the deflationary period ( open symbols ) .no significant differences is seen between these two periods . also , conditional mean return intervals of two periods show that depends on in a similar way in the two periods ( fig .[ fig : inde](b ) ) .it has been suggested that the pdf and the scaled pdf are universal functions for different financial markets .here we observe that even though the return distributions of the periods are different , the return intervals show similar features .( color online ) ( a ) distribution and scaling of return intervals and ( b ) scaled mean conditional return interval as a function of .full symbols correspond to the inflationary period and open ones the deflationary period .symbols represent different threshold .the line is only a guide to the eyes.,scaledwidth=100.0% ]we investigated scaling and memory effects in volatility return intervals for the japanese stock market using daily and intraday data sets .for both data sets , we found that the distribution of return intervals are well approximated by a single scaling function that depends only on the ratio , and the scaling function is different from the poisson distribution expected for uncorrelated records . 
also , our results for the conditional distribution and mean return interval support the memory between subsequent return intervals , such that a large ( or small ) return interval is more likely to be followed by a large ( or small ) interval .the clustering shown in fig .[ fig : cum_histo ] shows that the memory exists even between nonsubsequent return intervals .our results also support the possibility that the scaling and memory properties are similar functions for different financial markets .in addition , we tested scaling and memory effects in the inflationary and deflationary periods of the japanese market .while the return distributions show different features , the scaling and memory properties of the return intervals are similar .it should be noted that similar scaling properties and memory in the return intervals have been found earlier also in climate and earthquakes .f. wang , k. yamasaki , s. havlin , and h. e. stanley , phys .e * 73 * , 026117 ( 2006 ) ; f. wang , p. weber , k. yamasaki , s. havlin , and h. e. stanley , eur .j. b * 55 * , 123 ( 2007 ) ; f. wang , k. yamasaki , s. havlin , and h. e. stanley , arxiv:0707.4638 ( 2007 ) .
we investigate scaling and memory effects in return intervals between price volatilities above a certain threshold for the japanese stock market , using daily and intraday data sets . we find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval and its mean . by investigating the conditional distribution and the mean conditional return interval , we also find memory effects , such that a large ( or small ) return interval tends to follow a large ( or small ) interval . the results are similar to those of previous studies of other markets and indicate that similar statistical features appear in different financial markets . we also compare our results between the periods before and after the big crash at the end of 1989 . we find that the scaling and memory effects of the return intervals show similar features in the two periods , although the statistical properties of the returns are different .
in their recent paper , ili and stankovi have suggested ( in remark 5.3 ) that there may be problem for the class of hybrid entropies introduced in and elaborated in our recent paper . in particular, they argued that the entropies in question do not satisfy the fourth axiom in generalized shannon khinchin axioms ( j - a axioms ) introduced in .the fourth axiom in question is basically the -additive entropic chain rule .they do so by generalizing the single - parameter j - a axiomatics by inserting yet another independent parameter into axioms .the new parameter appears in the definition of the conditional entropy where it specifies the order of the weighting escort distribution .ili and stankovi have succeeded in solving their new axiomatic system and found that ensuing solutions split into 4 distinct classes : i ) , ii ) , iii ) and iv ) .interestingly enough , when [ i.e. special case in iv ) ] then the axioms turn out to be identical with the j - a axioms but their solution does not coincide with our hybrid entropy but rather with the usual tsallis entropy .this latter finding is particularly intriguing because tsallis entropy is not usually affiliated with the kolmogorov nagumo means ( which explicitly enter the j - a axiomatics and ili stankovi calculations ) . in this comment ,we are not concerned with this , the most interesting part of ili stankovi fine paper .rather , we seek to clarify their remarks about the rle of our hybrid entropy in their axiomatic system . instead of following the route outlined in ili stankovi paper , we simply consider the hybrid entropy as given in and try to see where the points of incompatibility with the j - a axiomatics arise .we do so within the framework of escort distributions , which offer a particularly illuminating tool for this task .the upshot of this analysis is that our hybrid entropies satisfy the tsallis - type -additivity condition for any two independent events .as long as the events ( or systems ) considered are dependent , the -additive entropic chain rule is not satisfied .we trace down the root cause of this behavior to a peculiar behavior of the de finetti kolmogorov relation for escort distributions .we recall first the so - called j - a axioms for hybrid entropy , namely 1 ._ continuity _ : is a continuous function of all arguments 2 . _maximality _ : for given is maximal for 3 ._ expansibility _ : 4 . _j - a additivity _ : , where and is some positive - definite invertible function on .the j - a axioms 1 - 4 were introduced in as a unifying one - parameter framework for both tsallis and rnyi entropy . as such, the axioms appear quite instructive because they allow to address the currently popular concept of -nonextensive entropies from an entirely new angle of view .of course , this is true provided some non - trivial entropy functional satisfying these axioms exists . in ref . it was argued that the unique solution of the j - a axioms has the form . here is the escort distribution of -th order . before we point out the potential problem with this form ,let us discuss first the respective j - a axioms in the connection with entropy functional ( [ hybrid ] ) .this is a worthy pursuit because a ) it allows to asses the extend of damage caused and b ) it suggests potential rectifications to be taken .in addition , maxent distributions derived from have a very rich practical applicability ( see , e.g . 
, ) and it would be nice to retains some of the desired properties of even in a limited sense .let us now take a closer look at in the context of the j - a axiomatic .+ _ 1 ) continuity : _ apparently , the entropy is continuous in all arguments for arbitrary .+ _ 2 ) maximality : _ maximality axiom has been extensively discussed in ref . and it has been shown that hybrid entropy obeys the maximality axiom for .+ _ 3 ) expansibility : _ there is no doubt that is expansible function .this is clear from the sum structure and the fact that if and . + _4a ) additivity rule independent events : _first , let us discuss two independent events and with respective distributions and and associated _ escort distributions _ ^{1/q}}{\sum_i [ p(q)_i]^{1/q}}\ , , \nonumber \\ & & q(q)_{k } \ = \ \frac{q_k^q}{\sum_i q_i^q } \ , \leftrightarrow \ , q_k \ = \ \frac{[q(q)_k]^{1/q}}{\sum_i [ q(q)_i]^{1/q}}\ , , \end{aligned}\ ] ] we also introduce a function as ^{1/(1-q ) } \ \ ; \rightarrow \ ; \ f_q^{-1}(x ) \ = \ \ln_q \exp ( x)\ = \ \frac{1}{(1-q)}\left[e^{(1-q)x } -1\right].\end{aligned}\ ] ] the function coincides with the kolmogorov nagumo function from 4th axiom as found in .the relevance of and stems from their close connection to -calculus , and from the fact that they are precisely identical with -exponential and -logarithmic functions used in tsallis statistics . in particular ,if we define the -addition as then it is easy to check that fulfills the relation for , the -addition reduces to a standard addition operation and boils down to an identity function . with the help of the relation ,the 4th axiom can be equivalently expressed as expression is also known as the aczl darczy ( additive ) entropy . by employing the above -mappingwe will simplify number of cumbersome mathematical steps and stay comparatively close to the reasonings used in the ili stankovi article . under the -mapping the respective aczl darczy entropies read & = & \frac{1}{q } { \mathcal{s}}(p(q ) ) - \frac{(1-q)}{q}{\mathcal{i}}_{1/q}(p(q))\ , , \label{3}\\[2 mm ] f_q({\mathcal{d}}_q(b ) ) & = & - \frac{\sum_j q_j^q \ln q_j}{\sum_j q_j^q } \ = \ \frac{1}{q } { \mathcal{s}}(q(q ) ) - \frac{(1-q)}{q}{\mathcal{i}}_{1/q}(q(q))\ , .\label{4}\end{aligned}\ ] ] here and denote shannon s and rnyi s entropy ( of the order ) , respectively . applying j - a additivity [ in the form ( [ additivity ] ) ] alongside with the entropy form ( [ hybrid ] ) and the fact that for _ independent _ events [ which implies that from `` 4 '' is simply we can write & = & \ ! \frac{1}{q } [ { \mathcal{s}}(q(q ) ) + { \mathcal{s}}(p(q ) ) ] - \frac{(1-q)}{q}[{\mathcal{i}}_{1/q}(q(q ) ) + { \mathcal{i}}_{1/q}(p(q))]\nonumber \\[2 mm ] & = & \ ! \frac{1}{q } { \mathcal{s}}(q(q)p(q ) ) - \frac{(1-q)}{q}{\mathcal{i}}_{1/q}(q(q)p(q ) ) . \label{eq:1}\end{aligned}\ ] ] on the last line we have used the additivity of both shannon and rnyi entropy . by realizing that we can write the last line of ( [ eq:1 ] ) as [ cf . also ( [ 3 ] ) and ( [ 4 ] ) ] as this is indeed nothing but .so we see that the hybrid entropy does satisfy the j - a additivity axiom ( basically -additivity ) for _ independent _ events .note also that the validity of the j - a additivity rule for the hybrid entropy is in this case a direct consequence of the additivity of both shannon and rnyi entropy .4b ) additivity rule dependent events : _ according to 4th axiom , we can calculate in two ways : we denote the conditional probability so that . 
with this we can rewrite ( [ eq : 3 ] ) as & = & \ \frac{1}{q } \left[{\mathcal{s}}(r(q ) ) - { \mathcal{s}}(p(q))\right ] - \frac{(1-q)}{q}\left[{\mathcal{i}}_{1/q}(r(q ) ) - { \mathcal{i}}_{1/q}(p(q))\right ] .\label{13}\end{aligned}\ ] ] here we have employed the notation or equivalently ^{1/q}/\sum_{kl}[r(q)_{kl}]^{1/q} ] is defined via the usual de finetti kolmogorov relation {i|j} ] can not represent a joint probability distribution even though it derives from the genuine joint distribution .this is clear because the marginal distribution can not be obtained from by summing over index . in this connectionwe should recall obvious but underappreciated fact about the de finetti kolmogorov relation , namely that {k|l} ] .this follows from the simple chain of reasonings : here denotes the correct joint distribution . the equality holds only when , for all indices `` '' .by subtracting two with different s on the right - hand side ( say and ) we get the latter can be satisfied only when is constant for all s , i.e. , when the two events involved are independent .this in turns says that represents a genuine joint distribution only in the case of independent events .second , should we used the defining relation ( [ eq : 2 ] ) for the conditional hybrid entropy together with the fact that we would obtain in terms of escort distributions the latter can be recast into the form - \frac{(1-q)}{q}\left[{\mathcal{i}}_{1/q}(r(q ) ) - { \mathcal{i}}_{1/q}(p(q))\right ] .\label{12aa}\end{aligned}\ ] ] where .note , in particular , that ( [ 12aa ] ) differs from ( [ 13 ] ) only in the term .the equivalence holds if and only if . in order to better understandwhen this is satisfied we write and use the relation here we may observe that even when is zero for some pair the fraction is not singular . with the help of the min - max theorem for means and eq .( [ 20ab ] ) we can write \ln r(q)_{kl } \ \leq \\tilde{{\mathcal{s}}}(r(q ) ) - { { \mathcal{s}}}(r(q ) ) \ \leq \ - \sum_{kl } r(q)_{kl } \left [ \frac{\sum_{n } ( \max_l r_{n|l}^q - r_{n|k}^q ) } { \sum_{n } r_{n|k}^q}\right ] \ln r(q)_{kl } .\end{aligned}\ ] ] while the left inequality is always less than or equal to zero , the right inequality is always greater than or equal to zero .clearly when both inequalities are saturated at the same time . according to the min - max theorem , this happens if and only if all the s are equal ( i.e. , independent ) for each fixed . in other words , if and only if .so again , we can track down our problem to the fact that is not a true joint distribution unless events are independent . in casewe wish to retain the form of the conditional entropy obtained from ( [ 12a ] ) as defined in the j - a axioms , it is not difficult to find corrections to the -additive entropic chain rule .in fact , from ( [ 13 ] ) and ( [ 12aa ] ) we can see that we should employ the following substitution in the j - a additivity rule }\left(\mathcal{d}_q(b|a ) + \frac{1}{1-q } \right ) - \frac{1}{1-q } \ = \ \mathcal{d}_q(b|a)\ + \ { \mathcal{o}}(\tilde{{\mathcal{s}}}(r(q ) ) - { { \mathcal{s}}}(r(q ) ) ) \ , , \label{22aaab}\end{aligned}\ ] ] with representing the error symbol .clearly the j - a additivity rule with the substitution ( [ 22aaab ] ) included reduces to the standard -additivity form in the case when the two events are independent . 
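Both of the statements above can be checked numerically in a few lines. The sketch below assumes the explicit Jizba-Arimitsu form of the hybrid entropy, D_q(P) = ( exp[ -(1-q) * sum_k P(q)_k ln p_k ] - 1 ) / (1-q), with P(q) the escort distribution of order q, and the Tsallis q-addition x (+)_q y = x + y + (1-q) x y; these expressions are taken from the cited literature rather than read off the partly garbled equations above, and the distributions used are arbitrary. The first check verifies the q-additivity of the hybrid entropy for a product (independent) joint distribution; the second shows that, for a dependent joint, marginalizing the escort of the joint does not reproduce the escort of the marginal, which is the root of the failure of the de Finetti-Kolmogorov relation discussed above.

```python
import numpy as np

def escort(p, q):
    """Escort distribution of order q: P(q)_k = p_k**q / sum_i p_i**q."""
    w = p ** q
    return w / w.sum()

def hybrid_entropy(p, q):
    """Assumed Jizba-Arimitsu hybrid entropy (natural logarithm, k_B = 1)."""
    x = -np.sum(escort(p, q) * np.log(p))
    return (np.exp((1.0 - q) * x) - 1.0) / (1.0 - q)

def q_add(x, y, q):
    """Tsallis q-addition: x + y + (1-q) x y."""
    return x + y + (1.0 - q) * x * y

rng = np.random.default_rng(0)
q = 0.7
p = rng.random(4); p /= p.sum()
s = rng.random(3); s /= s.sum()

# 1) q-additivity holds for independent events (product joint distribution)
joint_indep = np.outer(p, s)
print(hybrid_entropy(joint_indep.ravel(), q),
      q_add(hybrid_entropy(p, q), hybrid_entropy(s, q), q))   # the two agree

# 2) for a dependent joint, marginalising the escort of the joint does not
#    give the escort of the marginal: escorting and marginalising do not commute
joint_dep = rng.random((4, 3)); joint_dep /= joint_dep.sum()
print(escort(joint_dep, q).sum(axis=0))       # marginal of the escort joint
print(escort(joint_dep.sum(axis=0), q))       # escort of the true marginal
```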
On the other hand, should we wish to retain the q-additive entropic chain rule together with the form of the hybrid entropy defined in ([hybrid]), we should change the definition of the conditional entropy according to the prescription ([13]). In this comment we have demonstrated that the hybrid entropies introduced in satisfy the Tsallis-type q-additivity condition for any two independent events. As long as the events (or systems) considered are dependent, the q-additive entropic chain rule is not satisfied. We could trace the root cause of this behavior to a peculiar trait of the de Finetti-Kolmogorov relation for escort distributions employed in the proof in ref. : in particular, the escort version of the relation is generally not implied by its non-escort counterpart. This is notably reflected in the fact that the distribution constructed from the escort marginal and escort conditionals is not a joint distribution, although it is built directly from a genuine joint distribution. We have shown here that it reduces to a joint distribution only for independent events, which in turn also delimits the region of validity of the q-additive entropic chain rule for the hybrid entropy. It should be noted, in passing, that in standard thermostatistical situations one deals only with entropies of independent subsystems. This is fully in accord with Landsberg's classification of thermodynamical systems with non-extensive entropies. In this respect, most of the results obtained for the hybrid entropy in the literature can be used safely. We acknowledge the support of the GAČR grant GA14-07983S.
M. Ilić and P. Stanković, Physica A 411 (2014) 138.
P. Jizba and T. Arimitsu, Physica A 340 (2004) 110.
P. Jizba and J. Korbel, Physica A 444 (2016) 808.
R. Hanel and S. Thurner, EPL 96 (2011) 50003; S.R. Valluri et al., J. Math. Phys. 50 (2009) 102103.
G. Hardy, J.E. Littlewood and G. Pólya, Inequalities (Cambridge University Press, Cambridge, 1952), Chapter 2.3.
See, e.g., P.T. Landsberg, Braz. J. Phys. 29 (1999) 46; P.T. Landsberg and D. Tranah, Collective Phenomena 3 (1980) 73; Collective Phenomena 3 (1980) 81.
J. Aczél and Z. Daróczy, Publ. Math. Debrecen 10 (1963) 171-190.
E.P. Borges, Physica A 340 (2004) 95.
T. Yamano, Physica A 305 (2002) 486.
Recently, in [Physica A 411 (2014) 138], Ilić and Stanković suggested that there may be a problem with the class of hybrid entropies introduced in [P. Jizba and T. Arimitsu, Physica A 340 (2004) 110]. In this comment we point out that the problem can be traced to the q-additive entropic chain rule and to a peculiar behavior of the de Finetti-Kolmogorov relation for escort distributions. Despite this, one can still safely use the proposed hybrid entropies in most statistical-thermodynamics considerations. Keywords: entropic chain rule; Tsallis entropy; escort distribution. PACS: 05.90.+m
despite convincing gravitational evidence for the existence of dark matter ( dm ) in our universe ( from galactic to cluster scales ) its nature remains a mystery .yet great progress has been made .in particular direct detection experiments have set progressively stronger limits on the properties of dark matter , gaining several orders of magnitude in less than a decade for masses in the gev to tev range .several direct detection experiments have reported dark matter - like events in their data ( e.g. cogent , cresst - ii and dama ) , with the most recent positive result coming from the cdms - si experiment . such hints are in tension with the limits published by the lux and xenon100 collaborations .however several authors have claimed that the systematic uncertainties inherent in their analysis may provide a way of reducing such tension .in addition if one moves beyond the most basic model of dm - quark scattering and considers e.g. inelastic scattering or isospin - violating dm , where the coupling to neutrons and protons is different , then such tension can also be greatly reduced .given the present situation , it is essential to exploit all the information contained in the data . in this articlewe propose a bayesian approach , based on the information hamiltonian , with a view to providing the community with a a novel and robust interpretation of these conflicting experimental signals .this is not the first bayesian analysis of direct detection data , however our method is distinct in that it extracts the maximum amount of information from the available data , by exploiting the differences between expected signal and background events . for the purpose of illustration, we will make use of data from the xenon100 calibration .this is an independent analysis of xenon100 data , and will enable us to check and also confront our new method with the collaboration s approach .this example is also highly relevant for the lux experiment , which works under a similar principle . as we will show for the case where there are signal - like points in the data our method is particularly powerful , since one can simultaneously set an exclusion limit and define a potential signal region using bayesian regions of credibilitythis is in contrast to current analytical approaches , which usually involve methods designed only to set limits , such as the method , or the profile likelihood analysis with the cl method .we do not claim that our method is technically superior for all cases , however our approach is particularly transparent and easily generalised to many different data - sets . in section [ sec : ih ]we first introduce our method and show how to apply it to direct detection experimental data in general ; this includes a discussion of when to set limits or claim discovery . in section [ sec : x100 ]we apply our method to data from the xenon100 experiment as a worked example and conclude in section [ sec : conc ] .our method is inspired from information theory , in the sense that it employs bayesian techniques ( see for a review ) with the aim to fully exploit the different expected distributions of signal and background events .ultimately this should either enhance the characteristics of a potential signal ( and therefore the evidence for a dark matter discovery ) , or place stringent bounds on dark matter models . before proceeding, we would like to clarify the distinction between this approach , and the profile likelihood method used by e.g. 
the lux and xenon100 collaborations to set upper limits on the dm - nucleon cross section ( and also by cdms to fit to their data ) .the major difference is that our approach is bayesian and the profile likelihood is frequentist , and hence for example both methods have different ways of dealing with nuisance parameters .however , in most cases , with the same likelihood function the bayesian and profile likelihood results should agree , and each can provide an important cross - check of the other . in section [ sec : x100 ]we discuss the xenon100 experiment as a worked example , and when referring to the profile likelihood in this context we mean specifically the likelihood used by the xenon100 collaboration to analyse their data .indeed , as an alternative , the lux collaboration also use a profile likelihood method , but not necessarily the same likelihood function as xenon100 . in the absence of any nuisance parameters , a profile likelihood analysis performed with our likelihood function should give similar limits to those derived in this work using a bayesian approach .even so , the two approaches are distinct and should be considered complimentary to each other . our general strategy is to treat any 2d data - set effectively as an image , which we pixelate and exploit using pattern recognition .said differently , we map the data contained in a 2d plot onto a 2d data - space .a point in this space is identified by its two coordinates and , the coordinates of the initial plot and in fact the discrimination parameters used to identify events ( e.g. scintillation intensity , ionisation , phonon signals ) .the next key step is to then grid the data - space by pixelating it into two - dimensional bins of equal size in given by and labelled with the index .if such 2d - bins are chosen to be small enough , the ability of the analysis to discriminate between signal and background will be maximised . within a pixel at position in the - planethere will be a certain number of _ experimental _ data - points , each of which are identified by their coordinates ( with running from 0 to , the total number of data - points in the whole space ) .for the same pixel , the _ theoretically expected _ number of points is given by .hence we can compare to given fluctuations in the latter , which we assume obey poisson statistics .the function is the expected distribution of events , which constitutes the theoretical expectation of both the background and possible signal in a pixel .. ] we can now analyse the data using the method described above .the main issue is to find for which theoretical parameters is closest to for all pixels , within poisson fluctuations .if there is no dm signal in the data , one expects that for the configuration where is closest to that the former is equal to the theoretically expected number of background events in each pixel . for this purpose , we will define a poisson likelihood to describe the theoretical number of background and signal - like events in each pixel . here represents the mean expectation value of the number of points expected in each pixel .such a likelihood is given by , in this expression , represents the data and the signal . to make the interpretation easier, we decompose into a dm component composed entirely of nuclear recoils ( nr ) and a background component ( dominantly electronic recoils ( er ) , but with a possible nr component ) , leading to . 
the predominance of the signal over the background essentially depends on the number of signal events with respect to that from the background , at a given location in the data - space . since both the number of events and the location are important , and since the location depends on the dm mass ( i.e. can be computed once for each mass ) , we have explicitly separated out these two contributions .our calculations are therefore significantly speeded up by using the decomposition : where the term represents the signal position ( or shape ) in the data - space and its magnitude ( or intensity ) . for the standard picture of a non - relativistic wimp, the interaction rate depends linearly on cross section , and hence .the number of events is governed by the interaction cross section between the dark matter and the nucleons of the detector .if the shape of the signal matches that of the data points ( above background ) , then an inspection of the number of events should reveal the value of the cross section , and therefore the strength of the dm interactions . on the other hand ,if the shape does not match the data - point distribution , one can set a limit on the dm interaction cross section . in practicethe finite experimental sensitivity means we can only exclude values of which would lead to too large a signal . hence it is convenient to start with a value that is already excluded from previous experimental searches , namely , and decrease it until one reaches the experimental sensitivity . for this reason we will work with the ratio where , so that provides us with a direct measurement of the intensity of the signal .an exclusion limit is then set by determining the smallest value that still leads to too many signal - like events , so that all are excluded , while keeping values of which the experiment is not sensitive to . the number of expected signal events in a pixel at is therefore given by .to proceed , we must now define a prior for .we have no theoretical prejudice on its value and therefore consider a flat prior i.e. assign to all possible cross section values $ ] the same a priori probability density function ., we would have to take .however in practice we can take to be very large but finite , such that we are confident that the probability of finding dm with this interaction strength is vanishingly small , given previous experimental knowledge . ]we can now combine the likelihood and prior into the joint data and signal probability .we will work with the information hamiltonian , where the indicates signal - independent terms , which do not contribute to the determination of the ratio .inserting our decomposition for ( cf eq.[lambdadecomposition ] ) and rearranging we obtain , + \dots\ ] ] the limit can now be taken where , can only contain either or data - points . hence in this limit tends to a delta - function and the hamiltonian becomes + \dots % h\ , \widehat{= } \int_{\omega } \mathrm{d}x \bigg [ f(x ) s - \mathrm{ln } \left ( 1 + \frac{f(x ) s}{b(x ) } \right ) \delta^n ( x - x^{\mathrm{data}}_i ) \bigg ] , \label{eqn : ham_int}\end{aligned}\ ] ] where the -function picks out the positions of the data - points .we define , the total number of reference signal ( nuclear - recoil from dark matter ) events in the data - space calculated at .with this likelihood we are ready to look for a dark matter signal in our data and we now outline this process explicitly ( see also ) . 
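As an illustration of how the pixelated Poisson likelihood is used in practice, the following sketch evaluates the information Hamiltonian in the small-pixel limit and converts it into a posterior over the signal intensity. Only the functional form H(s) ~ s*N_ref - sum_i ln(1 + s f(x_i)/b(x_i)), read off from the limit described above, is taken from the text; the signal and background densities at the (three) hypothetical data points, the reference normalization N_ref and the grid of s values are placeholders.

```python
import numpy as np

def info_hamiltonian(s, f_data, b_data, n_signal_ref):
    """Information Hamiltonian up to signal-independent terms, in the
    small-pixel limit: H(s) ~ s * N_ref - sum_i ln(1 + s f(x_i)/b(x_i)).
    The sum runs over the observed data points; N_ref is the total number of
    signal events expected at the reference cross-section, and s is the
    cross-section expressed in units of that reference value."""
    return s * n_signal_ref - np.sum(np.log1p(s * f_data / b_data))

# hypothetical inputs: three observed events and their local signal/background
# densities, plus a reference signal normalisation
f_data = np.array([0.80, 0.10, 0.02])
b_data = np.array([0.05, 0.30, 0.50])
n_signal_ref = 50.0

s_grid = np.logspace(-4, 0, 400)
H = np.array([info_hamiltonian(s, f_data, b_data, n_signal_ref) for s in s_grid])
posterior = np.exp(-(H - H.min()))            # flat prior: P(s|d) proportional to exp(-H)
posterior /= np.sum(posterior * np.gradient(s_grid))
```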
as with standard methods ,we seek to minimise the hamiltonian .there is a positive identification of a dm signal in the experimental data only when the hamiltonian possesses a minimum . in this casethe shape of the signal matches the distribution of the data points , in some region of data - space where is expected to be small .the strength of the dm - nucleon interaction is given by the intensity of the signal , , corresponding to , with representing the properties of the signal that fit the data best .to define the goodness of the fit in the standard approach , one would then consider all ( or equivalently ) values leading to where is fixed by the confidence level that one wants to have .here we shall proceed slightly differently ( but ultimately this is equivalent ) : we define the significance of the signal by integrating the posterior distribution over , retaining in particular values around .note that the last equality holds only for flat priors ( f.p . ) , and assuming that .however , in the following we will take out the normalisation of explicitly , such that : hence in our case a discovery will be established at a confidence level by using the definition , where the discovery region is bounded from below by and from above by .such a region is therefore a two - sided region of credibility , while an exclusion limit by contrast is said to be one - sided .one could also relate our two - sided bayesian region of credibility to a frquentist confidence interval with a certain number of ` sigmas ' , though this is only strictly possible for a gaussian likelihood and posterior and could be directly related to the distance from the best - fit point in units of the gaussian variance i.e. a number of ` sigmas ' . ] .however one may find that the hamiltonian possesses no minimum . in this casethere is no value of for which the data is compatible with the signal distribution , no matter how intense this distribution becomes .one can not completely rule out dark matter however , since we know that our experiment has finite sensitivity , but we can set a limit , hereafter referred to as , on the dm interactions .since the experiment is not sensitive to dm cross section values smaller than , all values below are equally good ( or equally bad ) .hence there is a region of the parameter space corresponding to where the posterior probability is practically constant , as the experiment can not discriminate between these values of the cross section ( for a given exposure ) .the allowed region below is thus characterised by while the excluded region above ( where one expects too much signal ) is identified by a sharp cut - off in the posterior probability . to determine the exclusion limit ( i.e. ), we thus seek to quantify this cut - off .we have some freedom in choosing its value : it will depend on the confidence with which we set out limit .for example to set an exclusion limit at a confidence of ( e.g. for confidence we take ) , we define analogously with our best - fit region , as by integrating the constant region of the posterior probability until the integration reaches the value that we set , we identify and the cut - off .note also that for ease of calculation we tend to use the hamiltonian in the form of , where sums over all data - points at positions and are data weights with . 
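Continuing from the posterior grid of the previous sketch, the one-sided exclusion limit and the two-sided region of credibility described above can be obtained by simple numerical integration. The two-sided interval is implemented here as a highest-posterior-density set, which is one simple way of realizing the region around the best-fit point; the precise construction used in the analysis may differ.

```python
def upper_limit(s_grid, posterior, credibility=0.95):
    """One-sided exclusion limit: the smallest s_lim for which the posterior
    mass below s_lim reaches the requested credibility."""
    ds = np.gradient(s_grid)
    cdf = np.cumsum(posterior * ds)
    cdf /= cdf[-1]
    return s_grid[np.searchsorted(cdf, credibility)]

def credible_region(s_grid, posterior, credibility=0.50):
    """Two-sided region of credibility: grow a highest-posterior-density set
    around the posterior maximum until it contains the requested mass."""
    ds = np.gradient(s_grid)
    p = posterior / np.sum(posterior * ds)
    order = np.argsort(p)[::-1]
    mask, mass = np.zeros_like(p, dtype=bool), 0.0
    for i in order:
        mask[i] = True
        mass += p[i] * ds[i]
        if mass >= credibility:
            break
    return s_grid[mask].min(), s_grid[mask].max()
```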
for setting a limit the first term in eqn .( [ eq : sigma_limit ] ) is data - independent and gives the absolute limit in the case where no signal - like events are observed in the data , while the second term accounts for potential signal - like events present in the data , and weakens the limit .the statistical treatment is largely similar for setting limits or claiming discovery , and our method provides a natural transition between the two , though the approach to how one thinks about regions of credibility is different in either case . indeedboth a signal region and an exclusion limit are equally valid regions of credibility , and so one may wish to highlight both if there is a hint of signal present in the data , but one wishes to remain conservative as to its interpretation .+ + the strongest limits ( for ) on the spin independent cross section for dark matter elastic scattering with nuclei have been set by xenon - based experiments i.e. xenon100 and lux .we focus on the xenon100 experiment as a worked example , which operates using both liquid and gaseous xenon with a fiducial mass of ( for the most recent data - set ) .the xenon100 detector identifies events by using two distinct signals : primary ( s1 ) and secondary ( s2 ) scintillation , the former of which is due to scintillation light originating from the liquid part of the detector , while the latter comes from ionised electrons , which drift to the gaseous part of the detector under an electric field .the lux detector operates on a similar principle , but with a larger fiducial mass .the lux collaboration also employ different cuts ( e.g. a cut at s1 = 2 pe , instead of 3 pe ) and potentially a different likelihood function for their own analysis .otherwise , the following discussion should be interesting for an understanding of the analysis of lux data , as well as xenon100 . in order to derive limits on the spin - independent cross section as a function of dark matter mass , the xenon100 collaboration employs a profile likelihood approach .such a method takes advantage of the distinct signatures in s1-s2 of electronic and nuclear recoils by splitting the data - space into a number of bands ( 23 in and 12 in ) .we can contrast this approach with our method , where the data - space is split into a grid of rectangular pixels , which are associated with a point in the data - space .hence , we expect our gridded approach to perform better than this method of bands used by the xenon100 collaboration , since we can exploit the difference between signal and background to the maximum amount , while they are limited by the rather coarse - grained resolution of their bands . 
this application should serve as a clear demonstration of the advantages to _ any _ direct detection experiment of using our method .we can identify s1 and s2 with our discrimination parameters and from section [ sec : method ] , though here we choose instead to take , , to match more closely the method used by the xenon100 collaboration themselves ( and also the lux collaboration ) .we will proceed first to discuss the determination of the signal and background distributions for the xenon100 experiment , before applying our method to data , both the more recent 225 live days data ( 225ld ) and the older 100 live days data - set ( 100ld ) .potential wimp events are characterised by their recoil spectra , parameterised as , where is the wimp - nucleus cross - section as a function of energy , is the wimp - nucleus reduced mass , is the local dark matter density and is the wimp mean velocity .the mean velocity is integrated over the distribution of wimp velocities in the galaxy boosted into the reference frame of the earth by .the lower limit of the integration is , which is the minimum wimp velocity required to induce a recoil of energy .we assume the standard halo model such that is given by a maxwell - boltzmann distribution cut off at an escape velocity of .we assume that wimps interact identically with protons and neutrons and . ] giving , where is the zero - momentum wimp - nucleon cross section , is the atomic mass of xenon , is the wimp - proton reduced mass and is the helm nuclear form factor . for 225ld ( 100ld )we use a value of days ( days ) for the exposure and kg ( kg ) for the target mass . at a given nuclear - recoil energy the expected primary ( ) and secondary ( ) scintillation signalsare obtained from the following formulae , where represents a poisson distribution with expectation value , , , is a gaussian - distributed value with mean pe per electron and width , is the relative scintillation efficiency and is the ionisation yield . for there is a degree of uncertainty on its functional form ; we use the model of in this work , however we have obtained similar results with the best - fit curve from . is obtained from a cubic spline fit to data from . to obtain the and signals observed in the detector, we must include the finite detector resolution and the cuts imposed by the xenon100 collaboration on the data . both and blurred with a gaussian of width for photoelectrons ( pe ) to take account of the finite photomultiplier ( pmt ) resolution . the effect of cutsis then implemented using the cut - acceptance curve as a function of s1 after applying the resolution effect .additionally an s2 threshold cut is applied before gaussian blurring , cutting away all points with . the expected signal distribution for a given wimp mass in the data - space now be calculated using of section [ sec : rec_spec ] , at a value of the reference cross - section ( or for ) .the energy range between and is separated into bins of size . for each binned energy calculate and a total of times , where , to obtain the full signal distribution as expected in xenon100 .the result is shown for two different masses in fig .[ fig : wimp_dists ]. 
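To make the S1/S2 generation chain explicit, the sketch below draws a single (S1, log10(S2/S1)) pair for a given nuclear-recoil energy, following the Poisson-plus-Gaussian smearing sequence described above. All numerical constants and the flat Leff/Qy parameterizations are placeholders rather than the published values, the per-electron smearing is simplified, and the cut acceptance and the S1/S2 threshold cuts would still have to be applied on top of this.

```python
import numpy as np
rng = np.random.default_rng(42)

# placeholder detector constants (assumptions, not the published values)
LIGHT_YIELD = 2.3         # pe/keV scale entering the S1 expectation
PE_PER_ELECTRON = 20.0    # mean S2 photoelectrons per ionisation electron
PE_PER_ELECTRON_SIG = 7.0
PMT_SIGMA = 0.5           # PMT resolution, in units of sqrt(pe)

def leff(e_nr):           # relative scintillation efficiency (flat placeholder)
    return 0.12

def qy(e_nr):             # ionisation yield in electrons/keV (flat placeholder)
    return 6.0

def simulate_event(e_nr):
    """Draw one (S1, log10(S2/S1)) pair for a nuclear recoil of energy e_nr [keV]."""
    n_ph = rng.poisson(e_nr * leff(e_nr) * LIGHT_YIELD)          # S1 quanta
    n_e = rng.poisson(e_nr * qy(e_nr))                           # ionisation electrons
    s2_raw = rng.normal(n_e * PE_PER_ELECTRON,
                        PE_PER_ELECTRON_SIG * np.sqrt(max(n_e, 1)))
    s1 = rng.normal(n_ph, PMT_SIGMA * np.sqrt(max(n_ph, 1)))     # PMT blurring
    s2 = rng.normal(s2_raw, PMT_SIGMA * np.sqrt(max(s2_raw, 1.0)))
    return s1, np.log10(max(s2, 1e-3) / max(s1, 1e-3))
```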
similar simulations of the signal distribution expected from xenon100 have been performed in , however our method goes further and directly links these to the analysis through the weight function , as shown in figure [ fig : wimp_dists ] .the expected distribution of electronic - recoil background events is determined from fits to calibration data would improve were we to use the calibration data ( especially for the anomalous component ) , collected by the xenon100 collaboration for their most recent analysis , however this is not currently publicly available . ] , as is done in .although the electronic recoil events appear mostly gaussian distributed , the xenon100 collaboration noticed the presence of an anomalous ( non - gaussian ) background component . this could be due to double - scatter gamma events , where only one of the gammas contributes to the s2 signal .both such components of the er background are included , indeed the anomalous component can be seen in figure [ fig : wimp_dists ] predominantly at low - s1 .the distribution is normalised by the total number of expected background events , whose rate takes the constant value of .we also model the nuclear - recoil background due to neutrons .the distribution is calculated as for the signal distribution , but replacing with the expected energy spectrum of neutron scatters in the detector .hence the total background distribution is .now that we know how to calculate the expected signal and background distributions and , we are ready to apply our method to the data from the xenon100 experiment .all relevant ingredients are displayed in fig .[ fig : wimp_dists ] ; the left panels show the regions where the expected signal and background are expected to be largest , while the right panels show plots of as used directly for our analysis .the discrimination between signal and background is maximised provided the two - dimensional bins for are small enough : data - points where is large are more likely to be due to signal than background , while the opposite is true for points located where is small .this is then fed directly into our analysis , hence figure [ fig : wimp_dists ] contains all of the main ingredients of our method . shown in figure [ fig : lims ] are the results of applying our method to the data . in order to understand the effect of data - points consistent with a signal interperetation , we have performed the analysis with both the full dataset ( with a lower cut on s1 at ) , and with a reduced dataset , where the two hint " data - points ( i.e. the starred points in figure [ fig : wimp_dists ] ) have been removed by cutting away the data - space below . to , as for the 100ld data - set , which would remove one of these points .] the former is displayed in the left panel of fig .[ fig : lims ] , while the results for the reduced dataset are shown on the central panel . 
results from the 100ld data are shown on the right .as discussed in section [ sec : sigs_lims ] we can define regions of credibility ( either exclusion limits or potential discovery regions ) by integrating under the normalised posterior .hence in the lower panels of figure [ fig : lims ] we show exclusion limits for various levels of confidence , between and , calculated by integrating the posterior from up to the limiting value of .one can equivalently consider the parameter space between these limits as a region of credibility .the limit for the full 225ld data - set can be compared with the result from , while the shaded band represents how the limit changes with different confidence .the upper panels show the dependence of the likelihood as a function of for various wimp masses .one can see directly that for the full 225ld dataset the likelihood function has a maximum ( corresponding to a minimum in the hamiltonian ) , indicating a preference for the data of a particular value of , which is strongest for lighter wimps .indeed this can also be observed in the exclusion curve as we change the significance value : particularly for lighter wimps the region of credibility between the and limits is denser as compared to heavier dm .this is due directly to the presence of a maximum in the posterior and likelihood .this is particularly interesting in the context of the potential hints of light dm in cdms and cogent ( and to some extent dama ) .however the significance of such a hint is weak . indeed the credible region for an wimplies between and , with a best - fit cross section at .of course the cross - section is still inconsistent with the best - fit region from cdms , unless one changes the systematic parameters to a rather extreme degree or considers less standard interactions .claims that these points are consistent with a dm signal are likely to be overly optimistic .the significance of the signal is comparable to a fluctuation confidence interval as ( roughly ) comparable to a region of credibility , then we actually find the significance to be a bit less than .indeed our choice of was motivated by the fact that it is close to the largest two - sided interval we could set around the maximum - likelihood value of cross section .the sigma - level is only approximate though , as our likelihood is non - gaussian ( see fig .[ fig : lims ] ) .] , and hence these data - points may just be events from the non - gaussian er background , which we already model .we can additionally compute the bayes factor for e.g. an wimp , by calculating the ratio of the joint signal and data probability integrated over all , to i.e. the no - dm scenario , where .hence the size of should tell us to what degree a positive signal of dm is preferred , relative to the scenario where no signal is present ( see for details ) .we calculate , which is just on the boundary of being a positive result . hence , again we can conclude there is only a weak hint of signal for a low - mass wimp .there are also systematic uncertainties from and , though they are unlikely to result in a significant enhancement of the signal significance . 
indeed , as can be seen from [ fig : wimp_dists ] if one attributes these points to a wimp signal , one must also explain why no data is seen where the signal from dm is expected to be even larger , at lower values of s1 for example .even so , the presence of consistency with signal , however weak , indicates some sort of new phenomenon may be present : either dm or an unknown ( or possibly misunderstood ) background .hence an interpretation of these points in terms of dark matter is possible but premature , however they are instructive as an example of the effect of signal - like points on our ability to set limits on light dm . by contrastwhen the two hint " data - points are removed from the analysis by the more stringent low - s1 cut ( see figure [ fig : wimp_dists ] for details ) , there is no maximum in the likelihood and posterior for any wimp mass , as one would expect since all points are in a region where the weight is small . indeed the density of the posterior is now less for all masses than for the full data - set , with the contrast particularly stark for lighter dm . the same is seen for 100ld , for which no hint of signal is present .in addition , the limits without the hint " points are stronger since the data are now almost completely consistent with a negative result . if the xenon100 collaboration were to observe additional signal - like points in their data , one would expect the density of the posterior to increase around the best - fit region . in any casethis demonstrates the ability of our method to accurately set limits or define potential discovery regions .all of the relevant information is contained within the posterior , which can be integrated over to define the degree of belief that a given region of parameter space is consistent with the data . before forming any firm conclusions on the efficacy of our method in searching for dark matter signals in direct detection data, we must compare our results to those previously found by the xenon100 collaboration .shown in figure [ fig : limit_comparison ] is our confidence limit ( identical to the one in figure [ fig : lims ] ) , compared with the limit derived by the xenon100 collaboration with the same 225 live days dataset , but their own profile likelihood analysis .uncertainties due to the relative scintillation efficiency are shown as a shaded region around our limit ( see e.g. for a review ) .in addition , in the lower panel of figure [ fig : limit_comparison ] we also show the results of applying our method to the 100 live days dataset , along with the limit from the xenon100 collaboration using their profile likelihood method , and a limit we have independently derived using the same method , but with identical inputs to our information theory analysis . and .the limit from our bayesian information theory method agrees with the xenon100 published limit for 225ld , but is several times stronger for 100ld . ]the exclusion limit derived with our information hamiltonian method agrees with that derived by the xenon100 collaboration for the 225 live days data - set for large masses . 
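For completeness, the Bayes factor quoted above can be computed directly from the same information Hamiltonian, by comparing the evidence of the signal hypothesis (marginalized over the intensity with the flat prior) to the no-signal point s = 0. The sketch below continues from the earlier s_grid and H arrays; the prior range is whatever upper bound was adopted for the flat prior.

```python
import numpy as np

def bayes_factor(s_grid, H, H_zero=0.0):
    """K = [ integral ds pi(s) exp(-H(s)) ] / exp(-H(0)), with a flat prior
    pi(s) = 1/s_max on [0, s_max]. For the Hamiltonian used above H(0) = 0."""
    ds = np.gradient(s_grid)
    prior = 1.0 / s_grid[-1]
    return np.sum(np.exp(-(H - H_zero)) * prior * ds)
```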
for lighter wimpsour limit is stronger , though this is likely due to uncertainty in the low - energy extrapolation of .indeed the xenon100 collaboration employ the most conservative approach and cut to zero below , where no data is available .our limit is derived using a constant extrapolation instead , though the uncertainty band shows the limit under different parameterisations of .hence one can consider our result as an independent cross - check of the limit published by the xenon100 collaboration .there are undoubtably other small differences between our inputs and those used by the xenon100 collaboration , however the agreement of both limits indicates that our method does indeed perform correctly when analysing direct detection data .note also that for the hint "- removed data - set , where the low - s1 cut is moved to s1 , the limit is stronger for heavy wimps due to the removal of the signal - like points by the cut .this is not so for lighter wimps , since much of the region where one expects to see signal is cut away in addition to the hint " points .we note however that when applying our method to the 100ld data that our information theory limit is stronger than that derived using the profile likelihood analysis , both performed directly by the xenon100 collaboration and from an independent analysis we have carried out .since the latter two limits are in agreement , it would be difficult to blame the inputs of the analysis on this discrepancy between the limits , hence it is likely that the coarse - graining ) .] of the profile likelihood analysis has resulted in the derivation of an over - conservative limit .to reiterate : we refer specifically to the profile likelihood analysis used by xenon100 here .the issue is not with the frequentist method itself , but rather with the choice of likelihood function used by the collaboration .hence , our limit is more accurate because we use a likelihood which exploits the whole data - space , and this should also be reflected in a profile likelihood analysis which followed the same principles .the reason for this discrepancy arising only for the 100ld dataset is not entirely clear , though it is likely that the increased background in this dataset relative to that from 225 live days ( due to the krypton leakage ) has effectively fooled the analysis into treating too many points as potential signal , thereby weakening the limit .hence we believe that this demonstrates the robustness of our method as compared to such a profile likelihood analysis , since it is less susceptible to leakage of background points into the signal region .in this work we have introduced a bayesian method of analysing data from dark matter direct detection experiments .our method takes as input the data itself and the expected signal and background distributions , defined over the whole data - space , which is divided into a grid of two - dimensional pixels .this enables us to take full advantage of the distinct expected distributions signal and background events , and hence to set limits ( or discovery regions ) without resorting to conservative approximations . 
using data from the xenon100 experiment as a worked example we demonstrated how one would apply our method to direct detection data .this has direct relevance also to lux experiment , and any future runs of xenon100 .we have shown that there is merit in looking beyond the confidence limit , as hints of signal may be affecting the structure of the likelihood and posterior in a non - trivial manner .indeed an analysis of the xenon100 data from 225 live days indicates a weak preference in the data for a light dm particle . at 50 confidencethe best fit cross section is in between and for an wimp ; the error bars being relatively large , it is very premature to argue that this is evidence for dark matter .similar regions can be obtained for any dark matter particle with a mass below gev , with a possible evidence for a dark matter signal in the data vanishing for masses above about 20 gev .if indeed these points are due to a detection of dark matter , more data from the xenon100 experiment should increase the confidence level and shrink the error bars on the cross section .alternatively , these events may be found to be due to an additional background process or the anomalous component of the er background , in which case the signal significance would vanish with more data .considering the recent null result from the lux experiment , the latter would seem to be a more plausible explanation .we also demonstrated that our new method can produce a complementary analysis to the one currently used by the xenon100 collaboration , where the data are placed into bands .indeed our limit and theirs agree for the most recent 225 live days data - set , however ours is several times stronger for the data from 100 live days .the reason for this disagreement for the older data - set is not clear .however it is possible that since the background was higher due to krypton contamination , there was a greater proportion of background events leaking into the region where signal was expected ( i.e. the more signal - like bands of the analysis used by the xenon100 collaboration ) , which may have fooled their analysis into setting too weak a limit .additionally our method could be even more robust , especially if one exploits the full detector volume ( with and now depending on physical positions in the detector ) .our analysis can be seen as an independent analysis of the xenon100 data , and more importantly could be employed by any present or forthcoming experimental collaboration for such a purpose . in particular , our method can be easily applied to the lux experiment , since it operates on a similar principle to xenon100 . in this caseone should hope to find agreement with our bayesian results and the frequentist method used by the lux collaboration , which should provide an important cross - check of the lux results .future experiments such as xenon1 t , lz and supercdms could also benefit from a bayesian cross - check. the use of our formalism should be very convenient to set limits and potential regions of discovery simultaneously , allowing scenarios where the presence of a signal is ambiguous to be studied without bias .additionally , our method can be used to go beyond the conservative approach , and to set the strongest limit possible by exploiting the different distributions of signal and background events . 
With a consistent analytical method used across all dark matter direct detection experiments, the current constraints on the WIMP cross-section should become both stronger and clearer. JHD and CB are supported by the STFC.
The experimental situation of dark matter direct detection has reached an exciting crossroads, with potential hints of a discovery of dark matter (DM) from the CDMS, CoGeNT, CRESST-II and DAMA experiments in tension with null results from xenon-based experiments such as XENON100 and LUX. Given the present controversial experimental status, it is important that the analytical method used to search for DM in direct detection experiments is both robust and flexible enough to deal with data for which the distinction between signal and background points is difficult, and hence where the choice between setting a limit or defining a discovery region is debatable. In this article we propose a novel (Bayesian) analytical method, which can be applied to all direct detection experiments and which extracts the maximum amount of information from the data. We apply our method to the XENON100 experiment data as a worked example, and show that, firstly, our exclusion limit at a given confidence level is in agreement with their own for the 225 live days data, but is several times stronger for the 100 live days data. Secondly, we find that, due to the two points at low values of S1 and S2 in the 225 days data-set, our analysis points to either weak consistency with low-mass dark matter or the possible presence of an unknown background. Given the null result from LUX, the latter scenario seems the more plausible.
modern state - of - the - art object detection systems usually adopt a two - step pipeline : extract a set of class - independent object proposals at first and then classify these object proposals with a pre - trained classifier .existing object proposal algorithms usually search for possible object regions over dense locations and scales separately .however , the critical correlation cues among different proposals ( _ e.g. _ , relative spatial layouts or semantic correlations ) are often ignored .this in fact deviates from the human perception process as claimed in , humans do not search for objects within each local image patch separately , but start with perceiving the whole scene and successively explore a small number of regions of interest via sequential attention patterns .inspired by this observation , extracting one object proposal should incorporate the global dependencies of proposals by considering the cues from the previous predicted proposals and future possible proposals jointly . in this paper , in order to fully exploit global interdependency among objects , we propose a novel tree - structured reinforcement learning ( tree - rl ) approach that learns to localize multiple objects sequentially based on both the current observation and historical search paths . starting from the entire image , the tree - rl approach sequentially acts on the current search window either to refine the object location prediction or discover new objects by following a learned policy .in particular , the localization agent is trained by deep rl to learn the policy that maximizes a long - term reward for localizing all the objects , providing better global reasoning . for better training the agent ,we propose a novel reward stimulation that well balances the exploration of uncovered new objects and refinement of the current one for quantifying the localization accuracy improvements .the tree - rl adopts a tree - structured search scheme that enables the agent to more accurately find objects with large variation in scales .the tree search scheme consists of two branches of pre - defined actions for each state , one for locally translating the current window and the other one for scaling the window to a smaller one .starting from the whole image , the agent recursively selects the best action from each of the two branches according to the current observation ( see fig .[ fig : framework ] ) .the proposed tree search scheme enables the agent to learn multiple near - optimal policies in searching multiple objects . by providing a set of diverse near - optimal policies ,tree - rl can better cover objects in a wide range of scales and locations .extensive experiments on pascal voc 2007 and 2012 demonstrate that the proposed model can achieve a similar recall rate as the state - of - the - art object proposal algorithm rpn yet using a significantly smaller number of candidate windows. moreover , the proposed approach also provides more accurate localizations than rpn .combined with the fast r - cnn detector , the proposed approach also achieves higher detection map than rpn .our work is related to the works which utilize different object localization strategies instead of sliding window search in object detection .existing works trying to reduce the number of windows to be evaluated in the post - classification can be roughly categorized into two types , _i.e. _ , object proposal algorithms and active object search with visual attention . 
early object proposal algorithms typically rely on low - level image cues , _e.g. _ , edge , gradient and saliency .for example , selective search hierarchically merges the most similar segments to form proposals based on several low - level cues including color and texture ; edge boxes scores a set of densely distributed windows based on edge strengths fully inside the window and outputs the high scored ones as proposals .recently , rpn utilizes a fully convolutional network ( fcn ) to densely generate the proposals in each local patch based on several pre - defined `` anchors '' in the patch , and achieves state - of - the - art performance in object recall rate .nevertheless , object proposal algorithms assume that the proposals are independent and usually perform window - based classification on a set of reduced windows individually , which may still be wasteful for images containing only a few objects . another type of works attempts to reduce the number of windows with an active object detection strategy .et al . _ proposed a branch - and - bound approach to find the highest scored windows while only evaluating a few locations .et al . _ proposed a context driven active object searching method , which involves a nearest - neighbor search over all the training images .gonzeles - garcia _ et al . _ proposed an active search scheme to sequentially evaluate selective search object proposals based on spatial context information .visual attention models are also related to our work .these models are often leveraged to facilitate the decision by gathering information from previous steps in the sequential decision making vision tasks .et al . _ proposed an attention model embedded in recurrent neural networks ( rnn ) to generate captions for images by focusing on different regions in the sequential word prediction process .et al . _ and ba _ et al . _ also relied on rnn to gradually refine the focus regions to better recognize characters .perhaps and are the closest works to ours . learned an optimal policy to localize a single object through deep q - learning . to handle multiple objects cases, it runs the whole process starting from the whole image multiple times and uses an inhibition - of - return mechanism to manually mark the objects already found . 
proposed a top - down search strategy to recursively divide a window into sub - windows .then similar to rpn , all the visited windows serve as `` anchors '' to regress the locations of object bounding boxes .compared to them , our model can localize multiple objects in a single run starting from the whole image .the agent learns to balance the exploration of uncovered new objects and the refinement of covered ones with deep q - learning .moreover , our top - down tree search does not produce `` anchors '' to regress the object locations , but provides multiple near - optimal search paths and thus requires less computation .the tree - rl is based on a markov decision process ( mdp ) which is well suitable for modeling the discrete time sequential decision making process .the localization agent sequentially transforms image windows within the whole image by performing one of pre - defined actions .the agent aims to maximize the total discounted reward which reflects the localization accuracy of all the objects during the whole running episode .the design of the reward function enables the agent to consider the trade - off between further refinement of the covered objects and searching for uncovered new objects .the actions , state and reward of our proposed mdp model are detailed as follows .[ [ actions ] ] actions : + + + + + + + + the available actions of the agent consist of two groups , one for scaling the current window to a sub - window , and the other one for translating the current window locally .specifically , the scaling group contains five actions , each corresponding to a certain sub - window with the size 0.55 times as the current window ( see fig . [fig : actions ] ) . the local translation group is composed of eight actions , with each one changing the current window in one of the following ways : horizontal moving to left / right , vertical moving to up / down , becoming shorter / longer horizontally and becoming shorter / longer vertically , as shown in fig . [fig : actions ] , which are similar to .each local translation action moves the window by 0.25 times of the current window size .the next state is then deterministically obtained after taking the last action .the scaling actions are designed to facilitate the search of objects in various scales , which cooperate well with the later discussed tree search scheme in localizing objects in a wide range of scales .the translation actions aim to perform successive changes of visual focus , playing an important role in both refining the current attended object and searching for uncovered new objects .[ [ states ] ] states : + + + + + + + at each step , the state of mdp is the concatenation of three components : the feature vector of the current window , the feature vector of the whole image and the history of taken actions .the features of both the current window and the whole image are extracted using a vgg-16 layer cnn model pre - trained on imagenet .we use the feature vector of layer `` fc6 '' in our problem . 
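For concreteness, the thirteen actions defined above can be encoded as a single window-transformation function, sketched below. The exact placement of the five 0.55x sub-windows (taken here to be the four corners plus the centre) is an assumption based on Fig. [fig:actions]; in practice the resulting window would also be clipped to the image boundary. The remaining details of the state (pre-computed feature maps and action history) are described in the next paragraph.

```python
def apply_action(window, action):
    """Apply one of the 13 Tree-RL actions to a window (x1, y1, x2, y2).
    Actions 0-4: scale to a 0.55x sub-window (four corners + centre, assumed layout).
    Actions 5-12: local translations / deformations by 0.25 of the window size."""
    x1, y1, x2, y2 = window
    w, h = x2 - x1, y2 - y1
    dw, dh = 0.25 * w, 0.25 * h
    sw, sh = 0.55 * w, 0.55 * h
    if action == 0:  return (x1, y1, x1 + sw, y1 + sh)               # top-left
    if action == 1:  return (x2 - sw, y1, x2, y1 + sh)               # top-right
    if action == 2:  return (x1, y2 - sh, x1 + sw, y2)               # bottom-left
    if action == 3:  return (x2 - sw, y2 - sh, x2, y2)               # bottom-right
    if action == 4:                                                  # centre
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        return (cx - sw / 2, cy - sh / 2, cx + sw / 2, cy + sh / 2)
    if action == 5:  return (x1 - dw, y1, x2 - dw, y2)               # move left
    if action == 6:  return (x1 + dw, y1, x2 + dw, y2)               # move right
    if action == 7:  return (x1, y1 - dh, x2, y2 - dh)               # move up
    if action == 8:  return (x1, y1 + dh, x2, y2 + dh)               # move down
    if action == 9:  return (x1 + dw / 2, y1, x2 - dw / 2, y2)       # horizontally shorter
    if action == 10: return (x1 - dw / 2, y1, x2 + dw / 2, y2)       # horizontally longer
    if action == 11: return (x1, y1 + dh / 2, x2, y2 - dh / 2)       # vertically shorter
    if action == 12: return (x1, y1 - dh / 2, x2, y2 + dh / 2)       # vertically longer
```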
to accelerate the feature extraction ,all the feature vectors are computed on top of pre - computed feature maps of the layer `` conv5_3 '' after using roi pooling operation to obtain a fixed - length feature representation of the specific windows , which shares the spirit of fast r - cnn .it is worth mentioning that the global feature here not only provides context cues to facilitate the refinement of the currently attended object , but also allows the agent to be aware of the existence of other uncovered new objects and thus make a trade - off between further refining the attended object and exploring the uncovered ones .the history of the taken actions is a binary vector that tells which actions have been taken in the past .therefore , it implies the search paths that have already been gone through and the objects already attended by the agent .each action is represented by a 13-d binary vector where all values are zeros except for the one corresponding to the taken action .50 past actions are encoded in the state to save a full memory of the paths from the start .[ [ rewards ] ] rewards : + + + + + + + + the reward function reflects the localization accuracy improvements of all the objects by taking the action under the state . we adopt the simple yet indicative localization quality measurement , intersection - over - union ( iou ) between the current window and the ground - truth object bounding boxes .given the current window and a ground - truth object bounding box , iou between and is defined as .assuming that the agent moves from state to state after taking the action , each state has an associated window , and there are ground - truth objects , then the reward is defined as follows : this reward function returns or .basically , if any ground - truth object bounding box has a higher iou with the next window than the current one , the reward of the action moving from the current window to the next one is , and otherwise .such binary rewards reflect more clearly which actions can drive the window towards the ground - truths and thus facilitate the agent s learning .this reward function encourages the agent to localize any objects freely , without any limitation or guidance on which object should be localized at that step .such a free localization strategy is especially important in a multi - object localization system for covering multiple objects by running only a single episode starting from the whole image . another key reward stimulation is given to those actions which cover any ground - truth objects with an iou greater than 0.5 for the first time . for ease of explanation ,we define as the hit flag of the ground - truth object at the step which indicates whether the maximal iou between and all the previously attended windows is greater than 0.5 , and assign to if is greater than 0.5 and otherwise . then supposing the action is taken at the step under state , the reward function integrating the first - time hit reward can be written as follows : the high reward given to the actions which hit the objects with an for the first time avoids the agent being trapped in the endless refinement of a single object and promotes the search for uncovered new objects .the tree - rl relies on a tree structured search strategy to better handle objects in a wide range of scales . 
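Before turning to the tree search, the reward just defined can be written out explicitly as below. The +1/-1 values of the improvement reward and the magnitude of the first-time hit bonus are placeholders, since the exact numbers are not legible here; hit_flags plays the role of the per-object hit indicator described above.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def reward(prev_win, next_win, gt_boxes, hit_flags, hit_bonus=5.0):
    """Binary improvement reward plus a first-time hit bonus (values are
    placeholders). hit_flags[j] records whether ground-truth box j has already
    been covered with IoU > 0.5 earlier in the episode."""
    improved = any(iou(next_win, g) > iou(prev_win, g) for g in gt_boxes)
    r = 1.0 if improved else -1.0
    for j, g in enumerate(gt_boxes):
        if not hit_flags[j] and iou(next_win, g) > 0.5:
            hit_flags[j] = True        # first-time hit of this object
            r += hit_bonus
    return r
```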
for each window , the actions withthe highest predicted value in both the scaling action group and the local translation action group are selected respectively .the two best actions are both taken to obtain two next windows : one is a sub - window of the current one and the other is a nearby window to the current one after local translation .such bifurcation is performed recursively by each window starting from the whole image in a top - down fashion , as illustrated in fig .[ fig : tree ] . with tree search ,the agent is enforced to take both scaling action and local translation action simultaneously at each state , and thus travels along multiple near - optimal search paths instead of a single optimal path .this is crucial for improving the localization accuracy for objects in different scales . because only the scaling actions significantly change the scale of the attended window while the local translation actions almost keep the scale the same as the previous one .however there is no guarantee that the scaling actions are often taken as the agent may tend to go for large objects which are easier to be covered with an iou larger than 0.5 , compared to scaling the window to find small objects .the optimal policy of maximizing the sum of the discounted rewards of running an episode starting from the whole image is learned with reinforcement learning .however , due to the high - dimensional continuous image input data and the model - free environment , we resort to the q - learning algorithm combined with the function approximator technique to learn the optimal value for each state - action pair which generalizes well to unseen inputs .specifically , we use the deep q - network proposed by to estimate the value for each state - action pair using a deep neural network .the detailed architecture of our q - network is illustrated in fig .[ fig : archi ]. please note that similar to , we also use the pre - trained cnn as the regional feature extractor instead of training the whole hierarchy of cnn , considering the good generalization of the cnn trained on imagenet . during training , the agent runs sequential episodes which are paths from the root of the tree to its leafs . more specifically , starting from the whole image , the agent takes one action from the whole action set at each step to obtain the next state .the agent s behavior during training is -greedy .specifically , the agent selects a random action from the whole action set with probability , and selects a random action from the two best actions in the two action groups ( _ i.e. 
_ scaling group and local translation group ) with probability , which differs from the usual exploitation behavior that the single best action with the highest estimated value is taken .such exploitation is more consistent with the proposed tree search scheme that requires the agent to take the best actions from both action groups .we also incorporate a replay memory following to store the experiences of the past episodes , which allows one transition to be used in multiple model updates and breaks the short - time strong correlations between training samples .each time q - learning update is applied , a mini batch randomly sampled from the replay memory is used as the training samples .the update for the network weights at the iteration given transition samples is as follows : where represents the actions that can be taken at state , is the learning rate and is the discount factor .we train a deep q - network on voc 2007 + 2012 trainval set for 25 epochs .the total number of training images is around 16,000 .each epoch is ended after performing an episode in each training image . during -greedy training, is annealed linearly from 1 to 0.1 over the first 10 epochs .then is fixed to 0.1 in the last 15 epochs .the discount factor is set to 0.9 .we run each episode with maximal 50 steps during training . during testing , using the tree search , one can set the number of levels of the search tree to obtain the desired number of proposals .the replay memory size is set to 800,000 , which contains about 1 epoch of transitions .the mini batch size in training is set to 64 .the implementations are based on the publicly available torch7 platform on a single nvidia geforce titan x gpu with 12 gb memory ..recall rates ( in % ) of tree - rl with different numbers of search steps and under different iou thresholds on voc 07 testing set .31 and 63 steps are obtained by setting the number of levels in tree - rl to 5 and 6 , respectively . [ cols="^,^,^,^,^",options="header " , ] & 85.9 & 79.3 & 77.1 & 62.1 & 53.4 & 77.8 & 77.4 & 90.1 & 52.3 & 79.2 & 56.2 & 88.9 & 84.5 & 80.8 & 81.1 & 51.7 & 77.3 & 66.9 & 82.6 & 68.5 & 73.7 + [ tab:12 ] [ [ visualizations ] ] visualizations : + + + + + + + + + + + + + + + we show the visualization examples of the proposals generated by tree - rl in fig .[ fig : visual ] . as can be seen, within only 15 proposals ( the sum of level 1 to level 4 ) , tree - rl is able to localize the majority of objects with large or middle sizes .this validates the effectiveness of tree - rl again in its ability to find multiple objects with a small number of windows .in this paper , we proposed a novel tree - structured reinforcement learning ( tree - rl ) approach to sequentially search for objects with the consideration of global interdependency between objects .it follows a top - down tree search scheme to allow the agent to travel along multiple near - optimal paths to discovery multiple objects .the experiments on pascal voc 2007 and 2012 validate the effectiveness of the proposed tree - rl .briefly , tree - rl is able to achieve a comparable recall to rpn with fewer proposals and has higher localization accuracy . combined with fast r - cnn detector , tree - rl achieves comparable detection map to the state - of - the - art detection system faster r - cnn ( resnet-101 ) .the work of jiashi feng was partially supported by national university of singapore startup grant r-263 - 000-c08 - 133 and ministry of education of singapore acrf tier one grant r-263 - 000-c21 - 112 .
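for illustration , the q-learning update with experience replay described in this section can be sketched as follows . this is not the authors' code : it is written in pytorch rather than the torch7 platform they used , it bootstraps the target from the same network ( a simplification , with no separate target network ) , and the transition format and all names are illustrative .

```python
import random
import torch
import torch.nn.functional as F

def dqn_update(q_net, optimizer, replay, batch_size=64, gamma=0.9):
    """one q-learning step on a minibatch sampled uniformly from the replay
    memory; each transition is (state, action_index, reward, next_state, done)."""
    batch = random.sample(replay, batch_size)
    states = torch.stack([t[0] for t in batch])
    actions = torch.tensor([t[1] for t in batch], dtype=torch.long)
    rewards = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    next_states = torch.stack([t[3] for t in batch])
    done = torch.tensor([t[4] for t in batch], dtype=torch.float32)

    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                       # bootstrapped target r + gamma * max_a' q(s', a')
        q_next = q_net(next_states).max(dim=1).values
    target = rewards + gamma * (1.0 - done) * q_next
    loss = F.smooth_l1_loss(q_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```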
existing object proposal algorithms usually search for possible object regions over multiple locations and scales _ separately _ , which ignores the interdependency among different objects and deviates from the human perception procedure . to incorporate global interdependency between objects into object localization , we propose an effective tree-structured reinforcement learning ( tree-rl ) approach that sequentially searches for objects by fully exploiting both the current observation and the historical search paths . the tree-rl approach learns multiple search policies by maximizing the long-term reward that reflects the localization accuracy over all the objects . starting from the entire image as a proposal , tree-rl allows the agent to sequentially discover multiple objects via a tree-structured traversing scheme . by allowing multiple near-optimal policies , tree-rl offers more diversity in search paths and is able to find multiple objects with a single feed-forward pass . therefore , tree-rl can better cover objects of various scales , which is quite appealing in the context of object proposal . experiments on pascal voc 2007 and 2012 validate the effectiveness of tree-rl , which achieves recalls comparable to current object proposal algorithms with far fewer candidate windows .
with the tremendous growth of mobile subscribers and mobile connected devices , mobile broadband traffic has exhibited unprecedented growth . according to the forecast by cisco , there will be 11.5 billion mobile - connected devices by 2019 , which suggests that globally mobile data traffic is expected to grow to 24.3 exabytes ( eb ) per month by 2019 nearly a tenfold increase over that in 2014 .the ever increasing mobile data traffic propels us to seek new techniques to handle the challenge . deploying complementary small - cell networks like femtocells and picocells in the place where the need for traffic is`` hot '' emerges as a promising solution in the 5 g wireless networks . in urban areas ,radio propagation is more complicated than rural areas due to high - rise buildings , trees , etc . through reflection , diffraction , and even blockage ,buildings not only attenuate the received signal power but also weaken the undesired signal power , i.e. , the interference . consequently it often occurs that the strongest signal does not come from the geographically nearest bs which renders location - based cell association scheme ineffective . in ,choi analyzed blockage effects of a millimeter - wave cellular system .different from traditional cellular signals , a mobile user ( mu ) can only communicate with bss with line - of - sight ( los ) connection in millimeter wave systems .therefore , outage probability obtained assuming millimeter wave communication will be higher than that expected in real cellular communications . analyzed the performance impact of large - scale blockage effects , which not only capture 2d shape of buildings , but also the height of buildings , in a microcell network . in their worka mu can only connect to its nearest los bs , which of course does not reflect the reality when a mu is in a central business district ( cbd ) with high - rise buildings or is located in an office building . in this paper, we develop a model to analyze the coverage performance of heterogeneous networks incorporating both los and nlos connections , which is considered typical for urban environment .different from related work in which a bs is connected to the geographically nearest bs , which does not reflect the reality in urban environment , in our work , we consider that a mu is associated with a bs that delivers the strongest received signal - to - interference - plus - noise ratio ( sinr ) where both los and nlos connections are considered in the analysis. the main contributions of this paper are summarized as follows . 1 .the coverage performance in a heterogeneous network is analyzed considering both nlos and los transmissions and that a mu is associated with the bs that delivers the strongest sinr .2 . an analytical expression for the coverage probability is derived .this is distinct from previous work where the theoretical analysis is conducted assuming that a mu is associated with its nearest los bs for analytical tractability .3 . through results ,we find in urban areas , that deploying more bss in different tiers is better than merely deploying all bss in the same tier in terms of coverage probability .the remainder of this paper is organized as follows .section [ sec2 ] describes the system model . 
while in section[ sec3 ] , the per tier coverage probability and the coverage probability of the entire network are derived .results and performance analysis are given in section [ sec4 ] .finally , section [ sec5 ] concludes this paper .we consider a -tier heterogeneous cellular network which consists of macrocells , picocells , femtocells , etc and focus on the analysis in downlink coverage performance .the bss of each tier are assumed to be spatially distributed on an infinite plane following independent homogeneous poisson point processes ( ppps ) denoted by , with intensities , .mus are located according to a homogeneous ppp denoted by with intensity .bss of the same tier transmit using the same power and share the same bandwidth .bss belonging to different tiers use different power and orthogonal bandwidth for transmission .therefore there is no inter - tier interference .furthermore , within a cell , mus use different frequency bandwidth for downlink and uplink transmission and therefore there is no intra - cell interference for downlink transmission analysis in our paper .however , bss of the same tier may interfere each other and generate the inter - cell interference which is the main focus of this paper .the assumption that both bss and mus are homogeneously distributed over space makes our analysis tractable with a minor loss of accuracy . taking both nlos and los transmission into consideration, the nearest bs of a tier may not be the best candidate bs in that tier to associate with .more specifically , the cell association decision can be divided into the following two major steps . 1 . if a mu requests to connect to a bs , it will firstly choose nearest bss from each tier to form the set of candidate bss .the candidate bs set is denoted by \right\ } ] , meaning that among the nearest bss in the -th tier , the maximum sir comes from the -th nearest bs in that tier .accordingly , the typical mu will connect to bs if we restrict that only bss in the -th tier are available . conditioned on event , power received from the -th nearest bs in the -th tier and distances } } \right\ } ] is equivalent to event }{\max}p_{i}^{\left(k\right)}=m\right\} ] , the distribution of received power , , is derived as follows where follows from the law of total probability , and are the pdf of received signal power conditioned on nlos and los transmissions , respectively . the pdf of received signal power conditioning on nlos transmission and the distance , i.e. , is given by \right|\cdot f_{\xi_{n}^{\left(k\right)}}\left[p_{j}^{\left(k\right)^{*}}\left(z\right)\right]\nonumber \\ & = \frac{1}{z\sigma_{sn}^{\left(k\right)}\sqrt{2\pi}}\exp\left[-\left(\ln z-\mu_{sn}^{\left(k\right)}\right)^{2}\left/2\left(\sigma_{sn}^{\left(k\right)}\right)^{2}\right.\right],\label{eq : p_nlos , r}\end{aligned}\ ] ] where is obtained by applying the change - of - variables rule on the density function of a normal distribution , denotes the inverse function of , thus , and .obviously , is log - normal distributed conditioned on nlos transmission and the distance , i.e. , .similarly , the pdf of received signal power conditioning on los transmission and the distance , i.e. , is given by ,\label{eq : f_p_los , r}\end{aligned}\ ] ] where and .plugging ( [ eq : p_nlos , r ] ) and ( [ eq : f_p_los , r ] ) into ( [ eq : f_p_1 ] ) , the distribution of received power , , i.e. 
, , can be obtained : ,\label{eq : f_p_2}\end{aligned}\ ] ] where is the error function .the pdf of the received power given the distance is obtained by taking the derivative of with respect to : \nonumber \\ & + \frac{\left(1-e^{-\kappa r_{j}^{\left(k\right)}}\right)}{x\sigma_{sn}^{\left(k\right)}\sqrt{2\pi}}\exp\left[\frac{-\left(\ln x-\mu_{sn}^{\left(k\right)}\right)^{2}}{2\left(\sigma_{sn}^{\left(k\right)}\right)^{2}}\right].\end{aligned}\ ] ] conditioned on ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t ] , the pdf of is then given by \nonumber \\ & + \frac{\left(1-e^{-\kappa r_{j}^{\left(k\right)}}\right)}{x\sigma_{sn}^{\left(k\right)}\sqrt{2\pi}}\exp\left[\frac{-\left(\ln x-\mu_{sn}^{\left(k\right)}\right)^{2}}{2\left(\sigma_{sn}^{\left(k\right)}\right)^{2}}\right]\bigg\},0<x\leq t.\end{aligned}\ ] ] where is the complementary error function .thus the lt of conditioned on ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t ] is derived by using its definition if conditioned on } } \right\ } ] , and , ] , follows from the probability generating functional ( pgfl ) of the ppp . at last ,the pdf of is obtained by taking an inverse lt of , i.e. , through derivations above , we get ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t , p_{m}^{\left(k\right)}=t,\left\ { r_{i}^{\left(k\right)}=r_{i}^{\left(k\right)}\right\ } \right)\nonumber \\ & = \int_{0}^{\frac{t}{\gamma}}f_{i_{m}^{\left(k\right)}}\left(x\right)\textrm{d}x\end{aligned}\ ] ] the per tier coverage probability can be derived by de - conditioning with respect to ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t ] . noticing the conditional independence of ( conditioned on their respective distances ) , the probability of ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t ] is given by ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t\right|p_{m}^{\left(k\right)}=t,\left\ { r_{i}^{\left(k\right)}=r_{i}^{\left(k\right)}\right\ } \right)\nonumber \\ & = { \prod_{j=1,j\neq m}^{n}}\pr\left(\left.p_{j}^{\left(k\right)}\leq t\right|r_{j}^{\left(k\right)}=r_{j}^{\left(k\right)}\right)\nonumber \\ & = { \prod_{j=1,j\neq m}^{n}}f_{p_{j}^{\left(k\right)}}\left(t\left|r_{j}^{\left(k\right)}=r_{j}^{\left(k\right)}\right.\right).\end{aligned}\ ] ] de - conditioning with respect to ,j\neq m}{\bigcap}p_{j}^{\left(k\right)}\leq t ] is the -th nearest distance to the typical mu in this subsections for notation simplification . ] . combing equations ( [ eq : fi_r ] ) and ( [ eq : fr ] ) ,the unconditional probability can be obtained by de - conditioning with respect to as follows the per tier coverage probability is the summation of all possibilities as follows and the coverage probability of the whole tiers is given by \nonumber \\ & = 1-{\prod_{k=1}^{k}}\left[1-{\sum_{m=1}^{n } } f_{i_{m}^{\left(k\right)}}\left(\frac{t}{\gamma}\right)\right].\end{aligned}\ ] ]this section presents results of previous sections , followed by discussions . suggest that usually and if we fix antenna types and heights .let , , and in a 2-tier network . , , and are set to 2.7 , 30.8 , 32.9 and 41.4 , respectively . 
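the analytical construction above can be cross-checked with a simple monte-carlo simulation . the sketch below is not the authors' code : it simulates a single tier only , assumes an exponential los probability exp(-kappa r) consistent with the weighting used in the derivation , and the shadowing standard deviation , path-loss exponents and all other parameter values are purely illustrative .

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_mc(lam=1e-5, alpha_los=2.7, alpha_nlos=3.3, kappa=0.008,
                sigma_db=4.0, sir_db=0.0, n_cand=5, radius=2000.0, trials=2000):
    """monte-carlo estimate of the single-tier coverage probability: bss form a
    ppp of intensity lam (m^-2) on a disc around the typical mu, each bs is los
    with probability exp(-kappa * r), received powers combine a distance power
    law with log-normal shadowing, the mu attaches to whichever of its n_cand
    nearest bss gives the largest sir, and every other bs of the tier interferes."""
    thr = 10.0 ** (sir_db / 10.0)
    covered = 0
    for _ in range(trials):
        n_bs = rng.poisson(lam * np.pi * radius ** 2)
        if n_bs == 0:
            continue
        r = radius * np.sqrt(rng.random(n_bs))              # distances to the mu at the origin
        los = rng.random(n_bs) < np.exp(-kappa * r)         # blockage model
        alpha = np.where(los, alpha_los, alpha_nlos)
        shadow = 10.0 ** (rng.normal(0.0, sigma_db, n_bs) / 10.0)
        p_rx = shadow * r ** (-alpha)                       # unit tx power
        cand = np.argsort(r)[:n_cand]                       # candidate set: n_cand nearest bss
        sir = p_rx[cand] / (p_rx.sum() - p_rx[cand])
        if sir.max() >= thr:
            covered += 1
    return covered / trials
```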
, , , ).,title="fig:",width=272,height=226 ] + in a 2-tier network ( , , , ).,title="fig:",width=272,height=226 ] + fig .[ fig2 ] shows the coverage probability with respect to the sir threshold which varies from -20 db to 40 db .it is found that the coverage probability decreases with the increase of the sir threshold , as the higher the sir threshold , the more difficult for the received sir at a mu to be higher than the sir threshold .when the sir threshold is fixed , 2-tier networks perform better than 1-tier networks . besides, a comparison with which does not consider nlos and los transmissions is illustrated in the same figure . in ,the coverage probability is a monotonously increasing along with the path loss exponent . in our model ,the coverage probability of both 2-tier and 1-tier networks have a similar trend with networks configured with nlos path loss exponent , i.e. , or , which indicates that buildings and trees have a non - negligible impact on network performance . the coverage probability vs. 2nd bs density is given by fig .[ fig3 ] . through curvesabove , the coverage probability decreases with a slower and slower rate as 2nd bs density increases . while in ,the coverage probability is only a function of sir threshold and path loss exponent if we ignore terminal noise . comparing fig .[ fig2 ] with fig .[ fig3 ] , we find that in urban areas _ dense bs deployment _ do not always provide a better network performance . as fig .[ fig2 ] shows , coverage probability in 2-tier networks with dense bss is larger than that in 1-tier networks when the sir threshold is fixed .while in fig .[ fig3 ] , dense bss deployment weakens networks performance , which indicates that deploying more bss in different tiers is better than deploying all bss in the same tier in terms of coverage probability in urban areas .this is an effect caused by the co - existence of nlos and los transmission in our model .in this paper , we propose a heterogeneous network model considering los and nlos transmission to study the coverage performance . the coverage probability is derived and analyzed with the assumption that both visible and invisible bss are available for a mu , as long as the sir threshold is satisfied .we also compare our work with and obtain some interesting observations . as for our future work ,channel model shall be generalized and impacts of the number of candidate bss per tier should also be investigated .the authors would like to acknowledgement from the international science and technology cooperation program of china ( grant no .2015dfg12580 and 2014dfa11640 ) , the national natural science foundation of china ( nsfc ) ( grant no . 61471180 and 61210002 ) and the fundamental research funds for the central universities ( hust grant no .2015xjgh011 and 2015ms038 ) .guoqiang mao s research is supported by australian research council ( arc ) discovery projects dp110100538 and dp120102030 and nsfc ( grant no .61428102 ) .cisco , cisco visual networking index : global mobile data traffic forecast update 2014 - 2019 , " white paper .available : http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/white_paper_c11-520862.pdf , feb .2015 .n. blaunstein and m. levin , parametric model of uhf / l - wave propagation in city with randomly distributed buildings , " _ ieee antennas and propagation society international symposium _ , vol .3 , pp . 16841687 , 1998 .
in this article , a network model incorporating both line-of-sight ( los ) and non-line-of-sight ( nlos ) transmissions is proposed to investigate the impact of blockages in urban areas on the coverage performance of heterogeneous networks . results show that the co-existence of nlos and los transmissions has a significant impact on network performance . in particular , we find that in urban areas , deploying bss across different tiers gives a better coverage probability than merely deploying all bss in the same tier . coverage probability ; heterogeneous networks ; nlos ; blockage
the classical theory of nucleated polymerisation describes the growth of filamentous structures formed through homogeneous nucleation .this framework was initially developed by oosawa and coworkers in the 1960s to describe the formation of biofilaments , including actin and tubulin .this theory has been generalised to include secondary nucleation processes by eaton and ferrone in the context of their pioneering work elucidating the polymerisation of sickle haemoglobin , and by wegner in order to include fragmentation processes into the growth model for actin filaments . for irreversible growth in the absence of pre - formed seed material and secondary nucleation pathways , in 1962oosawa presented solutions to the kinetic equations which were very successful in describing a variety of characteristics of the polymerisation of actin and tubulin .the other limiting case , namely where seed material is added at the beginning of the reaction and where no new growth nuclei are formed during the reaction , is also well known . in this paper, we present exact results which encompass all cases between these limiting scenarios , extending the results of oosawa for a system dominated by primary nucleation to the case where an arbitrary concentration of pre - formed seed material is present .we also discuss a range of general closed form results from the oosawa theory for the behaviour of a system of biofilaments growing through primary nucleation and elongation .we then compare the behaviour of systems dominated by primary nucleation to results derived recently for systems dominated by secondary nucleation .the theoretical description of the polymerisation of proteins such as actin and tubulin to yield functional biostructures was considered in the 1960s by oosawa . for a system that evolves through primary nucleation of new filaments , elongation of existing filaments , and depolymerisation from the filament ends , the change in concentration of filaments of size ,denoted , is given by the master equation : where , , are rate constants describing the elongation , depolymerisation and nucleation steps and is the concentration of free monomeric protein in solution .the factor of 2 in eq . originates from the assumption of growth from both ends . for the case of irreversible biofilament growth, the polymerisation rate dominates over the depolymerisation rate ; from eq ., the rate of change of the number of filaments , , and the free monomer concentration , , were shown by oosawa under these conditions to obey : combining eqs .and yields a differential equation for the free monomer concentration : . here ,we integrate these equations in the general case where the initial state of the system can consist of any proportion of monomeric and fibrillar material ; this calculation generalises the results presented by oosawa to include a finite concentration of seed material present at the start of the reaction . beginning with eqs . 
and , the substitution followed by multiplication through by yields : = \frac{d}{dt}e^{n_c z}\ ] ] integrating both sides results in : we obtain a separable equation for , which can be solved to yield : integration and exponentiation yields the expression for : ^{1/n_c}\ ] ] inserting the appropriate boundary conditions in terms of and fixes the values of the constants and , resulting in the final exact result for the polymer mass concentration : ^{\beta } \label{eq : oosawaseededm}\ ] ] where the effective rate constant is given by and , , for .we note that this expression only depends on two combinations of the microscopic rate constants , and .the result reveals that controls the aggregation resulting from the newly formed aggregates , whereas defines growth from the pre - formed seed structures initially present in solution . in the special case of the aggregation reaction starting with purely soluble proteins , , ,these expressions reduce to and , and eq .yields the result presented by oosawa and the single relevant parameter in the rate equations is .interestingly , generalisations of eq . which include secondary pathways ,maintain the dependence on and but introduce an additional parameter analogous to for each active secondary pathway .an expression for the evolution of the polymer number concentration , may be derived using eq . .direct integration of eq . gives the result for : eqs . and give in closed form the time evolution of the biofilament number and mass concentration growing through primary nucleation and filament elongation . ; a lag - phase exists when the initial gradient is not the maximal gradient .the numbers accompanying each curve are ; eq . predicts that a lag - phase only exists when this ratio is less than unity .( a ) : polymerisation in the presence of an increasing quantity of seed material of a fixed average length ( 5000 monomers per seed ) added at the beginning of the reaction .the seed concentrations given as a fraction of the total concentration of monomer present are right to left ) : 0 , 0.01 , 0.04 , 0.1 , 0.2 , 0.5 .( b ) : nucleated polymerisation in the presence of a fixed quantity ( 1% of total monomer in the system ) of seed material of varying average length . the average number of monomer per seed are ( right to left ) : n / a ( unseeded ) , 5000 , 1000 , 500 , 200 , 50 . the other parameters for both panelsare : , , , s .[ fig : seeded],scaledwidth=100.0% ] insight into the early time behaviour of the polymer mass concentration can be obtained by expanding eq .for early times to yield : ^2 /2+ \mathcal{o}(t^3)\ ] ] this expression recovers the characteristic dependence of the oosawa theory and has an additional term linear in time relating to the growth of pre - formed aggregates . in many cases , eq .describes a sigmoidal function with a lag phase .the time of maximal growth rate , , can be found from the inflection point of the sigmoid from the condition : ( \mu \beta^{-\frac{1}{2}}\lambda_0)^{-1}\ ] ] such that a lag phase exists only for : using the composition reduces this to the simple condition : in other words , a point of inflection exists if the growth through elongation from the ends of pre - existing seeds , , is less effective that the effective growth through nucleation and elongation of new material , .this result imples that an increased nucleation rate promotes the existence of an inflection point , whereas an increased elongation rate or an increased level of seeding tends to disfavour its existence . 
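curves such as those in fig . [ fig : seeded ] can be cross-checked by integrating the moment equations directly . the sketch below assumes the standard oosawa moments dP/dt = k_n m^{n_c} and dM/dt = 2 k_+ m P with free monomer m = m_tot - M ( the factor 2 for growth from both filament ends ) ; the rate constants , seed mass and seed length are illustrative and are not the values used in the figure .

```python
import numpy as np

def oosawa_seeded(m_tot=5e-6, seed_mass=0.0, seed_number=0.0,
                  k_plus=5e4, k_n=2e-5, n_c=2, t_end=2e4, n_steps=20000):
    """forward-euler integration of dP/dt = k_n * m**n_c (primary nucleation)
    and dM/dt = 2 * k_plus * m * P (elongation from both ends), with
    m = m_tot - M. returns times and the polymer mass concentration M(t)."""
    dt = t_end / n_steps
    t = np.linspace(0.0, t_end, n_steps + 1)
    M = np.empty(n_steps + 1); P = np.empty(n_steps + 1)
    M[0], P[0] = seed_mass, seed_number
    for i in range(n_steps):
        m = max(m_tot - M[i], 0.0)
        P[i + 1] = P[i] + dt * k_n * m ** n_c
        M[i + 1] = M[i] + dt * 2.0 * k_plus * m * P[i]
    return t, M

# unseeded vs. seeded runs: pre-formed seeds shorten or remove the lag phase
t, M_unseeded = oosawa_seeded()
t, M_seeded = oosawa_seeded(seed_mass=0.05 * 5e-6,
                            seed_number=0.05 * 5e-6 / 5000.0)   # 5% seeds, 5000 monomers each
```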
in particular, we also note that in the absence of nucleation , an inflection point can not exist in the polymer mass concentration as a function of time .interestingly , the result eq. is analogous to the criterion applicable for fragmentation dominated growth where a lag phase only exists when the parameters controlling fragmentation - related secondary nucleation is larger than .the maximal growth rate , , is given by : which occurs at a polymer mass concentration given from eq .: the lag time , , is then given by : ( \mu \beta^{-\frac{1}{2 } } \lambda)^{-1 } \end{split } \label{eq : oo2}\ ] ] interestingly , from eq ., we note that a point of inflection can never exist for for simple nucleated polymerisation .by contrast , when secondary pathways are active , an inflection point can frequently be present .> p5.0cm|>p3.0cm|>p3.0cm|>p4.7 cm & * primary nucleation*&*fragmentation*&*monomer - dependent secondary nucleationkinetic parameters*& , & , , & , , **early time growth**&polynomial&exponential&exponential**scaling behaviour + ( lag time , max growth rate)**&yes + with &yes + with &yes + with * positive feedback*&no&yes&yes many systems that evolve through nucleated polymerisation display characteristic scaling behaviour .this behaviour can be seen to be a consequence of the fact that under many conditions , the rate equations are dominated by a single parameters that corresponds to the dominant form of nucleation : for classical nucleated polymerisation and for polymerisation in the presence of secondary pathways .these parameters have the general form where corresponds to the nucleation process and is related to the monomer dependence of this process : , where is the critical nucleus size for primary nucleation , for fragmentation driven growth and , the secondary nucleus size in cases where monomer - dependent secondary nucleation is dominant .the dominance of a single combination of the rate constants implies that many of the macroscopic system observables will be correlated since they are dependent on the same parameter .a striking examples of this behaviour is provided by the very general correlation between the lag - time and the maximal growth rate , which is manifested in the present case in eqs . and as and .interestingly the rate equations describe sigmoidal curves both in the presence and in the absence of secondary nucleation processes . for more complex primary nucleation pathways the polynomial form for the early time solutionis maintained , but higher - order exponents are obtained . in the absence of secondary processes , however , the lag - phase is less marked since the early time rise is a slower polynomial relationship rather than the exponential onset characteristic of secondary pathways .this observation implies that the difference between a high - order polynomial and an exponential may not be apparent in experimental data in the presence of noise , and therefore a global analysis of the system under different conditions is required in order to obtain robust mechanistic information .in this paper , we have provided results for the time course of nucleated polymerisation for systems that are initially in a mixed state and contain both monomeric and fibrillar material .these results generalise the classical oosawa theory that describes the formation of biofilaments to cases where an arbitrary amount of pre - formed seed material is present in the system . 
furthermore, these results represent a reference to which polymerisation driven by secondary pathways can be compared .we are grateful to the schiff foundation ( siac ) , and to the wellcome ( mv , cmd , tpjk ) and leverhulme trusts ( cmd ) for financial support .cohen , s.i.a . ; vendruscolo , m. ; dobson , c.m . ; knowles , t.p.j .nucleated polymerisation with secondary pathways ii .determination of self - consistent solutions to growth processes described by non - linear master equations . .knowles , t.p.j . ; waudby , c.a . ; devlin , g.l . ; cohen , s.i.a . ; aguzzi , a. ; vendruscolo , m. ; terentjev , e.m . ; welland , m.e . ;dobson , c.m .an analytical solution to the kinetics of breakable filament assembly ., _ 326 _ , 15331537 .
we revisit the classical problem of nucleated polymerisation and derive a range of exact results describing polymerisation in systems intermediate between the two well-known limiting cases : a reaction starting from purely soluble material , and a reaction where no new growth nuclei are formed .
let be a euclidean space , with inner product and induced norm , and let -\infty,+\infty\right]}} ] be convex , lower semicontinuous , and proper .suppose that , the set of minimizers of , is nonempty , and let .then the sequence generated by converges to a point in and it satisfies an ostensibly quite different type of optimization problem is , for two given closed convex nonempty subsets and of , to find a point in .let us present two fundamental algorithms for solving this convex feasibility problem .the first method was proposed by bregman .[ f : map ] let and set then converges to a point .moreover , the second method is the celebrated douglas rachford algorithm .the next result can be deduced by combining and .[ f : dra ] set , let , and set then is the set of fixed points of , and stands for the normal cone of the set at .] converges to some point in , and converges to .again , there are numerous refinements and adaptations of map and dra ; however , it is here not our goal to survey the most general results possible but rather to focus on the speed of convergence. we will make this precise in the next subsection .most rate - of - convergence results for ppa , map , and dra take the following form : _ if some additional condition is satisfied , then the convergence of the sequence is _ at least as good as _ some form of `` fast '' convergence ( linear , superlinear , quadratic etc . ) ._ this can be interpreted as a _ worst case analysis_. in the generality considered here , we are not aware of results that approach this problem from the other side , i.e. , that address the question : _ under which conditions is the convergence _ no better than _ some form of `` slow '' convergence ? _ this concerns the _ best case analysis_. _ ideally , one would like an _ exact asymptotic rate of convergence _ in the sense of below . _ while we do not completely answer these questions , we do set out to tackle them by providing a _ case study _when is the euclidean plane , the set is the real axis , and the set is the epigraph of a proper lower semicontinuous convex function .we will see that in this case map and dra have connections to the ppa applied to .we focus in particular on the case not covered by conditions guaranteeing linear convergence of map or dra . ; see ( * ? ? ?* theorem 3.21 ) for map and or ( * ? ? ? * theorem 8.5(i ) ) for dra .] we originally expected the behaviour of map and dra in cases of `` bad geometry '' to be similar .it came to us as surprise that this appears not to be the case .in fact , the examples we provide below suggest that dra performs significantly better than map . concretely , suppose that is the epigraph of the function , where .since , we have that and since , the `` angle '' between and at the intersection is .as expected map converges sublinearly ( even logarithmically ) to .however , dra converges faster in all cases : superlinearly ( when ) , linearly ( when ) or logarithmically ( when ) .this example is deduced by general results we obtain on _ exact _ rates of convergence for ppa , map and dra .the paper is organized as follows . in section [s : aux ] , we provide various auxiliary results on the convergence of real sequences .these will make the subsequent analysis of ppa , map , and dra more structured .section [ s : ppa ] focuses on the ppa . 
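the two algorithms are easy to compare numerically on the epigraph example just mentioned . in the sketch below , A is the real axis , B is the epigraph of f(u) = |u|^q , the projection onto B is computed by a brute-force grid search ( adequate for illustration only ) , and the douglas-rachford operator is taken in the standard form T = Id - P_A + P_B(2 P_A - Id) with the shadow sequence P_A(z_n) ; variable names , the starting point and the grid bounds are illustrative , not taken from the paper .

```python
import numpy as np

def proj_A(z):
    """projection onto the real axis A = R x {0}."""
    return np.array([z[0], 0.0])

def proj_epi(z, f, lo=-2.0, hi=2.0, n=20001):
    """projection onto epi f for a convex f finite on R: if z already lies in the
    set it is kept, otherwise the nearest point lies on the graph of f and is
    found here by a crude grid search over u."""
    x, y = z
    if y >= f(x):
        return np.array([x, y])
    u = np.linspace(lo, hi, n)
    d2 = (u - x) ** 2 + (f(u) - y) ** 2
    i = np.argmin(d2)
    return np.array([u[i], f(u[i])])

q = 2.0
f = lambda u: np.abs(u) ** q

z_map = np.array([1.0, 0.0])        # MAP iterate
z_dra = np.array([1.0, 0.0])        # DRA governing sequence
for _ in range(50):
    z_map = proj_epi(proj_A(z_map), f)               # x_{n+1} = P_B P_A x_n
    r = 2.0 * proj_A(z_dra) - z_dra                  # reflect through A
    z_dra = z_dra - proj_A(z_dra) + proj_epi(r, f)   # T = Id - P_A + P_B(2 P_A - Id)
print(np.linalg.norm(z_map), np.linalg.norm(proj_A(z_dra)))   # distances to the solution (0, 0)
```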
after reviewing results on finite , superlinear , and linear convergence , we exhibit a case where the asymptotic rate is only logarithmic .we then turn to map in section [ s : map ] and provide results on the asymptotic convergence .we also draw the connection between map and ppa and point out that a result of gler is sharp . in section[ s : dra ] , we deal with dra , draw again a connection to ppa and present asymptotic convergence .the notation we employ is fairly standard and follows , e.g. , and .in this section we collect various results that facilitate the subsequent analysis of ppa , map and dra .we begin with the following useful result which appears to be part of the folklore . ] .[ f : genstolz ] let and be sequences in such that is unbounded and either strictly monotone increasing or strictly monotone decreasing .then where the limits may lie in ] .then there exists such that .[ ex : jonetal ] let and be sequences in , let , and suppose that let . then there exists such that the following hold : 1 .[ ex : jonetal1 ] if , then .2 . if , then , where ,1\right[}} ] .consequently , the convergence of to is at best sublinear if and at best linear if .combine example [ ex : x^q ] with corollary [ c:1-sd2 ] .this section focuses on the proximal point algorithm .we assume that equation f -\infty,+\infty\right] ] and .proposition [ p : proxfinconv ] guarantees finite convergence of the ppa . indeed , either a direct argument or ( * ?* example 14.5 ) yields consequently , if and only if . [ r : rockfin ] in ( * ? ? ?* theorem 3 ) , rockafellar provided a very general _ sufficient _ condition for finite convergence of the ppa ( which works actually for finding zeros of a maximally monotone operator defined on a hilbert space ) . in our present setting, his condition is by proposition [ p : proxfinconv ] , this is also a condition that is _ necessary _ for finite convergence .thus , we assume from now on that , or equivalently ( since is even and by ) , that equation f(0)=0. in which case finite convergence fails and thus we now have the following sufficient condition for linear convergence .the proof is a refinement of the ideas of rockafellar in .[ p : sharper ] suppose that ,{\ensuremath{+\infty}}\right].\ ] ] then the following hold : 1 .[ p : sharper1 ] if , then there exists ] such that is lipschitz continuous at with every modulus . let . then there exists such that since by ( * ? ? ?* theorem 2 ) ( or ) , there exists such that .let .noticing that , we have it follows by that , we have now for every , employing and increasing if necessary , we can and do assume that let .combining and , we obtain this gives and hence holds .now assume that so that .since is strictly increasing on , we note that the choice yields .assume that is differentiable on ,\delta\right[ ] , then in . a sufficient conditionfor to exist is to assume that the function is monotone on which in turn happens when is either nonnegative or nonpositive on by using the quotient rule .although we wo nt need it in the remainder of this paper , we point out that the proof of proposition [ p : sharper ] still works in a more general setting leading to the following result : [ c : proxlin ] let be a real hilbert space , and let -\infty,+\infty\right]}} ] such that if , then eventually a result which can also be deduced from ( * ? ? 
?* theorem 2 ) .we now discuss powers of the absolute value function .suppose that .then and hence .we see that the actual linear rate of convergence of the ppa is now consider proposition [ p : sharper ] and corollary [ c : proxlin ] . then clearly in and hence ] , title="fig:",width=268 ] .45 ] . according to example [ ex : mapball ] , the map sequence exhibits logarithmic convergence; in fact , we now turn to the dra sequence . by, we have for every consequently , since , we have and therefore , by , the dra sequence with rate .once again , the dra sequence converges much faster than the map sequence !let us conclude .the results in this paper suggest that , for the convex feasibility problem , dra outperforms map in cases of `` bad geometry '' ( such as the absence of constraint qualifications or a `` zero angle '' between the constraints at the intersection ) .since our proof techniques do not naturally generalize , it would be interesting to study these questions in higher - dimensional space and other classes of convex sets .hhb was partially supported by the natural sciences and engineering research council of canada and by the canada research chair program .he also acknowledges the hospitality and the support of universit de toulouse , france during the preparation of an early draft of this paper .mnd was partially supported by an nserc accelerator grant of hhb .hmp was partially supported by an internal grant of university of massachusetts lowell .bauschke , j.y .bello cruz , t.t.a .nghia , h.m .phan , and x. wang , the rate of linear convergence of the douglas rachford algorithm for subspaces is the cosine of the friedrichs angle , _ journal of approximation theory _ 185 ( 2014 ) , 63 - 79 .bauschke , p.l .combettes , and d.r .luke , phase retrieval , error reduction algorithm , and fienup variants : a view from convex optimization , _ journal of the optical society of america _ 19 ( 2002 ) , 13341345 .f. deutsch and h. hundal , arbitrarily slow convergence sequences of linear operators : a survey , in _ fixed - point algorithms for inverse problems in science and engineering _ , springer optimization and its applications 49 , 213242 , springer , 2011 .( this proof is taken from http://www.imomath.com/index.php?options=686 and included here for completeness as we were not able to locate a book or journal reference . ) the second inequality is obvious .we only prove the right inequality since the proof of the left inequality is similar . without loss of generality , we assume that and that .let \lambda,{\ensuremath{+\infty}}\right[ ] . clearly , is convex , and .the line described by goes through the same points and lies between these points above the graph of ( by convexity ) .hence altogether , we deduce that
many iterative methods for solving optimization or feasibility problems have been invented , and often convergence of the iterates to some solution is proven . under favourable conditions , one might have additional bounds on the distance of the iterate to the solution leading thus to _ worst case estimates _ , i.e. , how fast the algorithm must converge . exact convergence estimates are typically hard to come by . in this paper , we consider the complementary problem of finding _ best case estimates _ , i.e. , how slow the algorithm has to converge , and we also study _ exact asymptotic rates of convergence_. our investigation focuses on convex feasibility in the euclidean plane , where one set is the real axis while the other is the epigraph of a convex function . this case study allows us to obtain various convergence rate results . we focus on the popular method of alternating projections and the douglas rachford algorithm . these methods are connected to the proximal point algorithm which is also discussed . our findings suggest that the douglas rachford algorithm outperforms the method of alternating projections in the absence of constraint qualifications . various examples illustrate the theory . * 2010 mathematics subject classification : * primary 65k05 ; secondary 65k10 , 90c25 . * keywords : * alternating projections , convex feasibility problem , convex set , douglas rachford algorithm , projection , proximal mapping , proximal point algorithm , proximity operator .
recent years have seen the development of major infrastructure around the earth in order to increase considerably the performances of experiments in astroparticle physics and cosmology . unlike other fields in science wheremeasurements are made on a physical phenomenon created in laboratory , research in astroparticle physics has originality in detection techniques and infrastructure locations .experiments are operated over large desert areas as the cherenkov telescope array ( cta ) , the pierre auger observatory or very soon the lsst telescope , in oceans or ice with antares and icecube , respectively , or even in space with projects as the ams-02 experiment or soon the jem - euso telescope . in all these projects , and more than any other experience in subatomic physics , minimising statistical and systematic errors is a challenge because the physical phenomenon observed is not produced by man himself : scientists are just observers .thus , scientists build ever larger detectors to go further in the knowledge .however , owning a large detector is not a necessary and sufficient requirement to push the limits of our knowledge : the systematic error still lurks and demands from scientists an excellent understanding of their detector . the temptation to increase the duty cycle of the detector in order to reduce stillmore the statistical error should not obscure the need to control the associated increase in systematic error .therefore there is a point where these two errors become inseparable and where the optimisation of detector performance can be compared to the concept of _ yin _ and _yang_. in all these projects , the environment is turned into a detector medium or a target .the atmosphere is probably the environment component the most common in astroparticle physics , usually used as a giant calorimeter in cosmic ray experiments or as an irreducible detection volume in the case of ground - based astrophysics surveys . to minimise as much as possible the systematic errors associated to the atmosphere evolution in time, its properties have to be continuously monitored .it is to this end that extensive atmospheric monitoring programs have been developed by different collaborations in astroparticle physics .section [ sec : astropart_exps ] will list briefly the different experiments where the atmosphere is a part of the detector . in all cases , at some point , photons propagate into the atmosphere and they are affected by the medium before being detected . 
section [ sec : atmo_effects ] will describe the different physics phenomena affecting photon propagation in the atmosphere in order to remove their effect in measurements .then , in section [ sec : atmo_facilities ] , the main instruments used to monitor the atmospheric properties or the atmosphere components will be presented .astroparticle physics experiments , equipped with such infrastructures and located in unusual places , provide an opportunity to develop interdisciplinary activities , especially in atmospheric science and geophysics : this will be the purpose of section [ sec : interdisciplinary ] .astroparticle physics is a research field at the intersection of particle physics , astrophysics and cosmology .the term astro refers to the messengers from the universe and arriving on earth .this area has the particularity to express the relationship increasingly close between the infinitely large ( such as astrophysical objects in the universe ) and the infinitely small ( such as the study of the structure of matter ) .origins of the field bring us back one century ago with the discovery in 1912 by victor hess ( nobel prize in physics in 1936 , ) of cosmic rays , opening at that time a new window for particle physics .nowadays , astroparticle physics tries to answer three main issues : the role of high - energy phenomena in the universe , the composition of the universe only 5% of the universeare known , the rest being composed of 26% of dark matter and 69% of dark energy whose origin and nature remain to be determined , and the fundamental interactions at the ultimate energies .the main messenger used to probe the universe is the photon , with a wavelength spreading from about nm ( gamma - rays , x - rays ) to a few kilometres ( long - wavelength radio waves ) .other messengers from universe can also be detected as cosmic rays or neutrinos : in some cases , photons produced by the interaction of these primaries with the atmosphere are recorded to evaluate indirectly the messenger properties . whether it is direct or indirect messenger detection , photon propagation in the atmosphere is of principal interest in astroparticle physics .this section presents briefly the actual and future experiments using the atmosphere as part of their detector .the flux of ultra - high energy ( uhe , ) cosmic rays and very - high energy ( vhe , ) gamma rays is very low on earth . to enlarge the detection area of these messengers from the universe ,telescopes are directly installed on the ground and the earth s atmosphere acts as the calorimeter of the detector .when cosmic rays or gamma rays enter the atmosphere , they induce extensive air showers composed of secondary particles . among these particles ,photons are emitted : their properties provide a direct way to probe the characteristics of the primaries . in the following ,we describe the main experiments and their techniques employed to detect ultra - high energy cosmic rays or very - high energy gamma rays .the cosmic ray energy spectrum observed on earth extends from below to beyond , more than eleven orders of magnitude .this energy spectrum drops rapidly with energy .for the so - called ultra - high energy cosmic rays corresponding to the right - hand limit of the spectrum , fundamental properties such as their origin , their chemical composition and their acceleration mechanisms are still a mystery ( see for more details ) . 
at energies greater than ,their flux is lower than one particle per century and per square kilometre .this makes these events only detectable indirectly through extensive air showers .charged particles composing the air shower excite atmospheric nitrogen molecules , and these molecules then emit fluorescence light isotropically in the nm range .detection of ultra - high energy cosmic rays using nitrogen fluorescence emission is a well established technique , used in the past in the fly s eye and hires experiments , currently at the pierre auger observatory and telescope array , and in the future by the jem - euso telescope .the energy and the geometry of extensive air showers can be calculated from information on the amount and the arrival time of recorded light signals at the fluorescence detectors ( fd ) . after more than thirty years of development having led to a better understanding of this technique , the current hybrid observatories set their energy scale using fluorescence measurements .also , the air - fluorescence technique allows the determination of the depth of maximum of the extensive air shower in a direct way , providing an estimation of the uhecr composition . during the development of an extensive air shower , the production rate of fluorescence photons depends on the temperature , pressure and humidity in the air . for the greatest energies , fluorescence light from an air showercan be recorded at distances up to about km , traversing a large amount of atmosphere before reaching the detector : the atmospheric effects on the light propagation must hence be considered carefully ( see fig .[ fig : astropart_atmo](left ) ) .to fulfil this task , extensive atmospheric monitoring programs are developed by these collaborations . in uhecr experiments ,atmospheric data are not only used to reject periods with bad weather conditions but are directly applied in the reconstruction of extensive air showers at the observatory .evaluating carefully atmospheric effects lead to a higher duty cycle for these experiments and improve the systematic errors in air shower reconstruction , two essential parameters in this research field where events are rare and require very large detectors .very - high energy gamma rays are useful messengers to probe the populations of high - energy particles in distant regions or in our own galaxy .they are currently the easiest way to directly measure the most energetic phenomena in the universe such as supernova shock waves , pulsar nebulae , or active galaxy nuclei .knowledge in this energy range was considerably increased during the last twenty five years , mainly achieved with ground - based cherenkov telescopes .once a vhe gamma ray enters the atmosphere , secondary electrons , positrons and photons are generated .the propagation of electrons and positrons through the atmosphere at speeds greater than the speed of light produces a cherenkov radiation beamed with respect to the shower axis .then , this light is collected by imaging air cherenkov telescopes ( iact ) or air shower detectors the latter will not be discussed in the rest of this paper since the atmosphere is not a predominant effect on measurements .the main interests of this type of detector are a good rejection of the overwhelming cosmic ray background , the low energy threshold , and the angular and energy resolutions .this technique is currently used by the h.e.s.s . 
, magic and veritas collaborations and , in a near future , will be employed in the cta project to cover the energy range from a few tens of gev to a few hundreds of tev .cherenkov light is emitted through a cone where the cosine of its opening angle is equal to , where is the index of refraction and with being the velocity of the emitting particle .also , the cherenkov yield is directly linked to the refractive index of air .thus , during the development of an electromagnetic air shower , a good knowledge of the atmospheric vertical profiles is needed .figure [ fig : astropart_atmo](right ) shows the average cherenkov light density at ground for typical gamma - ray showers , for different profiles of atmosphere .once the cherenkov emission is well evaluated , atmospheric extinction is another source of concern affecting directly the energy threshold and biases any flux measurement of astrophysical sources since misreconstructed energies shift the entire spectrum to lower energies . whereas up to now data taking periods with bad atmospheric conditions are simply rejected in imaging air cherenkov experiments , there are some studies evaluating the feasibility to correct data recorded also during non - optimal atmospheric conditions .attempts for such a result applied to the next generation of imaging air cherenkov experiments cta are presented in : after having recorded a softer energy spectrum of an astrophysical source , measurements of atmospheric attenuation _ in situ _ would permit to obtain a correction factor to apply to the spectrum in order to come back to the original one . whereas this practice is now common in experiments detecting ultra - high energy cosmic rays ( see sect .[ sec : uhecr ] ) , reconstruction of air showers even in unclear atmospheric conditions is not yet widespread in ground - based gamma astronomy .the twentieth century has been the emergence of the standard model of cosmology to describe the evolution of the universe since the big bang . through multiple experimental probes such as the large scale distribution of galaxies , the study of supernovae or the cosmic microwave background , this model also called ( cold dark matter ) is imposed over the years .great advances in our understanding of the universe came from large scale sky surveys in many wavebands , such as sdss , snls , 2mass or first . despite its many successes in explaining measurements until today , only 5% of the composition of the universe are known , the rest consisting of 26% of dark matter acting only through the gravitational force and 69% of dark energy whose origin remains to be determined and is described by the cosmological constant .the next generation of instruments in the field aims to better understand the nature and the origin of dark matter and dark energy , using mainly the following cosmic probes : the weak lensing cosmic shear of galaxies , the baryon acoustic oscillations in the power spectrum of galaxy distribution , and the relationship between redshifts and distances for type ia supernovae .the simultaneous study of these probes on the same data set can check the consistency of different cosmological models describing the universe .the euclid satellite from space , i.e. without atmosphere and the ground - based large - area surveys such as the dark energy survey , pan - starrs and the large synoptic survey telescope hold the first places to fill this role . 
a ground - based telescope with a broad - band detector from nm to nm records the integral of the source specific flux density at the top of the atmosphere , weighted by the response function depending on the effects of the atmosphere and the instrumental optics .the science goals are achieved thanks to multiple images of the sky recorded over the course of many years , in less than ideal measurement conditions . in this case, it is challenging to obtain a calibration of broad - band photometry stable in time and uniform over the sky to precisions of 1% .this goal can be reached only if an extensive work is undertaken to monitor continuously the optical transmittance from the top of the atmosphere to the input pupil of the telescope and the instrumental system response from the input pupil to the detector .although the atmosphere is globally opaque to electromagnetic radiation , two wavelength ranges permit the detection of photons : from about nm to a few tens of micrometre ( ultraviolet / uv , visible and near - infrared / ir , where the nm is designed as optical in the rest of this paper ) , and from a few centimetres to about m ( centimetric to decametric radio waves ) . whereas the radio wavelength domain presents a very clean atmospheric transmission , this is not exactly the case in uv , visible or near - ir where some distortions are present in the spectrum of atmospheric transmission ( see fig . [fig : transmission_spectrum ] ( left ) ) .if the atmosphere is used as a giant calorimeter , it has been mentioned in sect .[ sec : airshowers ] that the molecular component affects also the yield of fluorescence and cherenkov lights .cherenkov and fluorescence light production at a given wavelength depends on the atmospheric variables pressure , temperature and vapour pressure . whereas the cherenkov light yield can be directly calculated from the refractive index of the atmosphere , the weather dependence of the fluorescence production is much more complicated to determine . among the effects being difficult to measure experimentally, we can cite the collisional quenching of fluorescence emission . in this phenomenon ,the radiative transitions of excited nitrogen molecules are suppressed by molecular collisions .also , water vapour contributes to collisional quenching leading to an additional dependence on the atmosphere humidity for the fluorescence yield .however , for all phenomena affecting the light production , a simple knowledge of the vertical profiles of temperature , pressure or vapour pressure is needed to well estimate the corresponding yields .attenuation of light from the production point to the detector can be expressed as a transmission coefficient , or optical transmittance , giving the fraction of incident light at a specified wavelength along the path length .if is the optical depth , the cross section and the density of the component along the line of sight , then is estimated using the beer - lambert law \\ & = ( 1 + h.o . 
) \exp\,\left[-\prod_i \int_0^x \sigma_i(\lambda)\,n_i(x ) { \rm d}x \right ] , \end{split } \label{eq : beer}\ ] ] where , , , represent the different contributions to the light attenuation in the atmosphere .they can be grouped into four categories : the molecular absorption , the molecular scattering , the aerosol scattering and the cloud extinction .the optical depth expresses the quantity of light removed from a beam by _ absorption _ the radiant energy is transformed into other wavelengths or other forms of energy or _ scattering _ the energy received is re - radiated at the same wavelength usually with different intensities in different directions during its path through a medium .light is not only scattered out of the field of view of the detector and can be also scattered into it : represents this higher - order correction taking into account for single and multiple scattering into the field of view of the detector ( see sect . [sec : mult_scattering ] ) .the rest of this section consists in describing these four categories stressing different dependences on the atmosphere thickness and on the incident wavelength , and leading finally to the wavelength - dependent global optical transmission spectrum of atmosphere .the air is a medium with a mass composed of dinitrogen and dioxygen ( then , traces of argon ar , neon ne , helium he , dihydrogen and xenon xe ) .they correspond to the permanent gases composing the atmosphere .the resulting molecular mass for an air molecule is / mol for standard temperature and pressure at sea level . to take into account humidity effects, air must be added a factor corresponding to the water vapour . the final molecular mass with respect to the altitude above sea level ( asl ) is the sum of the two components , weighted by their volume fractions where the molecular masses for dry air and water vapour are / mol and / mol , respectively .the atmosphere is not only composed of these permanent gases and water vapour h , other variable gases are also present in a very small quantity in the atmosphere . among them ,the main gases are carbon oxides co and co , methane ch , ozone o , nitrogen oxides no and no , or sulfur dioxides so .we have to consider also the volatile organic compounds ( vocs ) which are a class of organic compounds , most of them being hydrocarbons. presently , theoretical models are not generally available to provide absorption cross sections for each species for different temperature and pressure values .the cross sections of the chemical species are measured in laboratory at fixed temperature and pressure , then empirical models are used to interpolate them to intermediate values of temperature and pressure . nowadays , the largest molecular spectroscopic database is the hitran ( high resolution transmission ) database serving as input for radiative transfer calculation codes .after a brief listing of the main gases present in the atmosphere , an estimation of their contribution to the optical transmission spectrum of atmosphere is done in the following .light absorption in the optical wavelength range is dominated by three atmospheric gases : molecular oxygen , water vapour h and ozone o .absorption by * molecular oxygen o * presents three narrow absorption bands in the transmission spectrum : the fraunhofer a band at nm , the fraunhofer b band at nm and the band at nm . 
At shorter wavelengths, the atmosphere is strongly opaque due to the bands of the Schumann-Runge system between nm and nm, and the weak Herzberg dissociation continuum extending from nm to nm. Since molecular oxygen is a well-mixed gas in the atmosphere, the intensity of these bands depends only on the atmospheric density and is axisymmetric with respect to the zenith. *Ozone O$_3$* is a triatomic molecule, far less stable than molecular oxygen. About 97% of the total ozone column is found in the stratosphere, the so-called ozone layer, where it is produced naturally via the photo-dissociation of molecular oxygen by UV radiation, followed by a recombination of oxygen molecules and oxygen atoms. A small fraction of ozone is also located at the Earth's surface, i.e. in the troposphere, produced in the smog formed in large cities, where the presence of sunlight generates photo-chemical reactions. The opacity of ozone is responsible for the total loss of atmospheric transmission below nm through the Hartley and Huggins bands. Between nm and nm, the broad Chappuis bands attenuate light by a few per cent. Since ozone is mainly located in the stratosphere, its temporal and spatial variability is low. This is not at all the case for *water vapour H$_2$O* in the atmosphere, a constituent that is not well mixed in the air. Water vapour is mainly found in the lowest part of the atmosphere; a ground-level value of relative humidity is therefore not a good estimate of the total column height. It absorbs electromagnetic radiation in the optical wavelength range through different bands, the most prominent ones being at nm, nm and nm. Due to the high variability in time and space of the water vapour component, the intensity of these bands varies and a continuous monitoring is advised (see Sect. [sec:exp_molecular]). Figure [fig:transmission_spectrum] (right) depicts this phenomenon: whereas the absorption bands related to molecular oxygen are similar in summer and winter, the amplitude of the bands due to water vapour varies.

Additional trace gases are also present in the atmosphere and can absorb a part of the electromagnetic radiation, since their absorption cross sections are not negligible in the optical wavelength range. These gases are not present in the same quantity at every location and require a specific measurement program to monitor them. Atmospheric trace gases come from both natural sources and human activities. Examples of natural sources include wind picking up dust and soot from the Earth's surface and carrying it aloft, volcanoes belching tons of ash and dust into the atmosphere, or forest fires producing vast quantities of drifting smoke. As human-induced sources, one usually cites transportation, fuel combustion or industrial processes. These different gases previously had a constant concentration in the atmosphere, since their origin was exclusively natural; nowadays, however, human activities increase the concentration of these gases, called air pollutants. Among the trace gases present in the atmosphere but not affecting the transmission spectrum in the optical wavelength range, one can list the carbon oxides CO and CO$_2$ (absorbing in the near-infrared and near-ultraviolet), methane CH$_4$ (absorbing in the near-infrared) and nitric acid HNO$_3$ (absorbing in the near-ultraviolet). On the contrary, other trace gases strongly affect the transmission spectrum if they are found in sufficient quantity in the atmosphere.
*Sulfur dioxide SO$_2$* is a colourless gas coming primarily from the burning of sulfur-containing fossil fuels (such as coal or oil). It can enter the atmosphere naturally during volcanic eruptions and as sulfate particles from ocean spray. Its absorption band in the ultraviolet extends from nm to nm, with very sharp rotational structures visible only at high spectral resolution. *Nitrogen dioxide NO$_2$* is a gas formed when some of the nitrogen in the air reacts with oxygen during the high-temperature combustion of fuel. Although it is produced naturally, its concentration in urban environments is 10 to 100 times greater than in non-urban regions. NO$_2$ absorption covers a large part of the optical spectrum, from the near-ultraviolet to the near-infrared, with a peak reached at about nm. In moist air, nitrogen dioxide reacts with water vapour to form corrosive nitric acid HNO$_3$, a substance that adds to the problem of acid rain. Moreover, nitrogen dioxide is highly reactive and plays a key role in producing ozone and other ingredients of photo-chemical smog. To a lesser extent, the *nitrate radical NO$_3$* can also affect the transmission of electromagnetic radiation in the visible, via its strongest absorption feature at nm. In the troposphere, the main nighttime oxidant is the nitrate radical NO$_3$, formed by the relatively slow oxidation of NO$_2$ by O$_3$.

The molecular component of the atmosphere is described by the height-dependent profiles of its state variables, pressure $p(h)$ and temperature $T(h)$, where $h$ corresponds to the altitude above sea level. These vertical profiles can be provided by balloon-borne radiosonde flights or by atmospheric models from numerical weather predictions (see Sect. [sec:exp_molecular]). As a first approximation, the air density in molecules per m$^3$ as a function of the height above sea level can be parameterised as
\[
n_{\rm mol}(h) = \frac{N_{\rm A}\,p(h)}{R\,T(h)} \simeq n_{\rm mol}(0)\,\exp\left(-\frac{h}{H_{\rm mol}}\right),
\]
where $R$ is the universal gas constant, $N_{\rm A}$ the Avogadro constant, and $H_{\rm mol}$ the scale height for the molecular component ( km). Standard air is defined by the temperature , the pressure  and the molecular density  m$^{-3}$. Water vapour H$_2$O is not a well-mixed constituent of the molecular component and does not follow the same dependence on the altitude. Of course, almost all the physical quantities entering this parameterisation vary with time and require a continuous monitoring.
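To make the role of the state variables concrete, the small Python sketch below converts assumed pressure and temperature profiles into a molecular number density profile via the ideal gas law, and compares it with the simple exponential scale-height approximation. The profile values and the 8 km scale height are illustrative assumptions only, not site data.

```python
import numpy as np

R = 8.314        # universal gas constant [J / (mol K)]
N_A = 6.022e23   # Avogadro constant [1/mol]

def molecular_density(pressure_pa, temperature_k):
    """Ideal-gas number density n = N_A * p / (R * T), in molecules per m^3."""
    return N_A * pressure_pa / (R * temperature_k)

# Assumed toy profiles (placeholders, not radiosonde data).
altitude_km = np.linspace(0.0, 30.0, 7)
pressure_pa = 101325.0 * np.exp(-altitude_km / 8.0)          # assumed 8 km pressure scale height
temperature_k = 288.15 - 6.5 * np.clip(altitude_km, 0, 11)   # crude tropospheric lapse rate, isothermal above

n_exact = molecular_density(pressure_pa, temperature_k)
n_scale_height = molecular_density(101325.0, 288.15) * np.exp(-altitude_km / 8.0)

for h, ne, ns in zip(altitude_km, n_exact, n_scale_height):
    print(f"h = {h:5.1f} km   n = {ne:.3e} m^-3   exponential approx = {ns:.3e} m^-3")
```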
In the same way, the refractive index of dry air for incident wavelengths greater than nm is given, as a function of the altitude, by
\[
n(h) - 1 = \left(n_{\rm s} - 1\right)\times\frac{p(h)/{\rm Pa}}{96\,095.43}\times\frac{1 + 10^{-8}\left[0.613 - 0.009\,98\,t(h)/{^\circ{\rm C}}\right]\left[p(h)/{\rm Pa}\right]}{1 + 0.003\,661\,t(h)/{^\circ{\rm C}}},
\]
where $n_{\rm s}(\lambda)$, the refractive index for an atmosphere in standard temperature and pressure conditions, is defined by its own dispersion formula.

Molecular scattering from near-UV to near-IR wavelengths can be approximated using only the elastic Rayleigh scattering process, since it dominates inelastic Raman scattering (by around three orders of magnitude) as well as absorption. Rayleigh theory can be applied if the scattering particles are much smaller than the light wavelength. The Rayleigh scattering cross section per air molecule is given analytically by
\[
\sigma_{\rm R}(\lambda) = \frac{24\,\pi^3}{\lambda^4\,N_{\rm s}^2}\left[\frac{n_{\rm s}^2(\lambda)-1}{n_{\rm s}^2(\lambda)+2}\right]^2\frac{6 + 3\,\delta_{\rm n}(\lambda)}{6 - 7\,\delta_{\rm n}(\lambda)} = A\,\lambda^{-\left(B + C\,\lambda + D/\lambda\right)},
\]
where $N_{\rm s}$ is the molecular density of standard air, $n_{\rm s}$ the refractive index for standard air and $\delta_{\rm n}$ the depolarisation factor taking into account the anisotropy of the air molecules. Values for $A$, $B$, $C$ and $D$ can be found in the literature. The depolarisation factor is determined by the asymmetry of the N$_2$ and O$_2$ molecules, and is equal to zero for point-like scattering centres. It varies by approximately  from the near-IR to the UV domain, introducing a corresponding variation with wavelength of around  for the Rayleigh scattering cross section. The value chosen leads to a shift of the wavelength dependence of the molecular scattering from the well-known $\lambda^{-4}$ behaviour to a slightly different effective value. These expressions are given for dry air; however, A. Bucholtz showed that this quantity varies by less than 1% for a typical water vapour density. Thus, calculating the Rayleigh scattering cross section without taking the water vapour content of the atmosphere into account does not lead to a wrong approximation. Finally, the molecular optical depth integrated from sea level up to an altitude $h$ and observed through a zenith angle $\theta$ is obtained with
\[
\tau_{\rm mol}(\lambda,h,\theta) = \frac{1}{\cos\theta}\int_0^{h}\alpha_{\rm mol}(\lambda,h')\,{\rm d}h' = \frac{1}{\cos\theta}\int_0^{h}\sigma_{\rm R}(\lambda)\,n_{\rm mol}(h')\,{\rm d}h',
\]
i.e. an attenuation axisymmetric about the zenith (through the $1/\cos\theta$ factor) with a time dependence driven only by pressure and temperature variations; $\alpha_{\rm mol}(\lambda,h) = \sigma_{\rm R}(\lambda)\,n_{\rm mol}(h)$ is the so-called molecular extinction coefficient.

Due to the limited field of view of detectors, a non-negligible fraction of photons is detected after one or several scatterings, and the scattering properties of the atmosphere need to be well estimated. A scattering phase function is used to describe the angular distribution of scattered photons. It is typically written as a normalised probability density function expressed in units of probability per unit of solid angle. When integrated over a given solid angle, a scattering phase function gives the probability of a photon being scattered into a direction within this solid angle range. Since scattering is uniform in azimuthal angle for both aerosols and molecules, the scattering phase function is always written simply as a function of the polar scattering angle. Molecular scattering is governed by the Rayleigh process, which can be derived analytically via the approximation that the electromagnetic field of the incident light is constant across the small size of the particle. The molecular phase function, symmetric in the forward-backward direction, is proportional to the factor $1 + \cos^2\theta$.
Due to the anisotropy of the N$_2$ and O$_2$ molecules, a small correction factor is included, amounting to about one per cent in the case of air:
\[
P_{\rm mol}(\theta) = \frac{3}{16\pi}\,\frac{1 + 3\gamma + (1-\gamma)\cos^2\theta}{1 + 2\gamma}, \qquad \gamma = \frac{\delta_{\rm n}}{2 - \delta_{\rm n}},
\]
where $\theta$ is the polar scattering angle, $P_{\rm mol}$ the probability per unit solid angle, and the depolarisation factor $\delta_{\rm n}$ is part of the new parameter $\gamma$.

Although the atmosphere is mainly composed of molecules, a small fraction of larger particles such as dust, smoke, sea salt or droplets is in suspension. The aerosol component is defined with respect to the ground level instead of the sea level. These particles are called _aerosols_ and their typical size varies from nm to about µm. The atmospheric particle size distribution is generally a continuous multi-modal function spanning up to ten or more decades of concentration, usually expressed in terms of superimposed lognormal distributions. Each distribution represents a mode having a chemically distinct composition due to a specific source: the nucleation mode for aerosol sizes between nm and about µm, the accumulation mode between µm and µm, and the coarse mode for sizes greater than µm. Due to the hygroscopic nature of atmospheric aerosols, relative humidity affects their size: an increase in humidity makes the aerosols grow, especially for relative humidity values greater than 50%. Whereas the fine modes originate mainly from condensation sources and atmospheric gas-to-particle conversion processes, mechanical processes such as wind-driven soil erosion or seawater bubble-bursting produce the coarse mode. The latter mode has a considerably shorter atmospheric residence time than the sub-micrometre aerosol fraction. Concerning their effect on radiative processes, the accumulation mode is the most important, since it represents the size range in which light scattering is the most efficient. However, in regions impacted by high levels of coarse-mode aerosols, such as desert soil dust or marine boundary layer sea salt, coarse-mode aerosols represent most of the total mass and affect the solar radiation scattering in a non-negligible way. On the contrary, the nucleation mode represents just a small part of the total aerosol mass and is inefficient at scattering light. Since the size of these particles is no longer small with respect to the incident wavelength, the analytical Rayleigh formulae cannot be applied in this case. The Mie scattering that they produce is much more complex: for instance, it depends on the particle composition, the particle shape and the particle size distribution. Also, aerosol populations, unlike the molecular component, change quite rapidly in time, depending on the wind and weather conditions. Two main physical quantities have to be known to evaluate the effect of the aerosols on photon propagation in the atmosphere: the height dependence of the aerosol optical depth and the aerosol scattering phase function.
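The superposition of lognormal modes described above can be written down compactly. The following Python sketch builds a hypothetical three-mode number size distribution (nucleation, accumulation, coarse) with made-up mode radii, widths and concentrations, purely for illustration.

```python
import numpy as np

def lognormal_mode(r_um, n_total, r_median_um, sigma_g):
    """dN/dln(r) for one lognormal aerosol mode (number concentration per cm^3)."""
    return (n_total / (np.sqrt(2.0 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(r_um / r_median_um) / np.log(sigma_g)) ** 2))

# Hypothetical modes: (name, median radius in micrometres, geometric width, number per cm^3).
modes = [
    ("nucleation",   0.01, 1.6, 1000.0),
    ("accumulation", 0.10, 1.8,  300.0),
    ("coarse",       1.00, 2.0,    1.0),
]

radii_um = np.logspace(-3, 1.5, 10)
dN_dlnr = sum(lognormal_mode(radii_um, n, r_m, s) for _, r_m, s, n in modes)

for r, dn in zip(radii_um, dN_dlnr):
    print(f"r = {r:8.4f} um   dN/dln r = {dn:10.3f} cm^-3")
```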
Dust, in particular soot, also absorbs light: the fraction of absorbed light is given by the so-called single scattering albedo, equal to the ratio of the scattering cross section over the extinction (i.e. total) cross section. It evaluates the probability that a photon is scattered, rather than absorbed, during an interaction with an aerosol particle. However, this probability is usually close to one, and aerosols are approximated as scattering centres only. The scattering cross section of these particles is less easily described, as the electromagnetic field of the incident light can no longer be approximated as constant over the volume of the particle. The $\lambda^{-4}$ dependence of the total scattered intensity found by Rayleigh is no longer valid for the Mie solution, giving rise to a cross section scaling less strongly with wavelength than molecular scattering does. Mie scattering theory offers a solution in the form of an infinite series for the scattering of non-absorbing spherical objects of any size; the number of terms required in this infinite series to calculate the scattering cross section is given in the literature. Height-dependent profiles of the vertical aerosol optical depth $\tau_{\rm aer}$ are therefore usually measured to evaluate the aerosol population in the atmosphere. In this way, an indirect measurement of the aerosol size distribution and of the aerosol concentration is obtained. Aerosols are mainly found in the lowest part of the atmosphere, the so-called planetary boundary layer, whose behaviour is directly influenced by friction and by heat fluxes with the Earth's surface. The planetary boundary layer has a thickness that is quite variable in space and time, usually around km, but it can vary from m to about km. A characteristic shape of the vertical profile of $\tau_{\rm aer}$ is the following: a more or less linear increase at the beginning, then a flattening, since the aerosol concentration decreases quickly with the altitude. To evaluate the aerosol extinction for a given incident wavelength, observed through a zenith angle $\theta$, a commonly used parameterisation is the power law
\[
\tau_{\rm aer}(\lambda, h, \theta) = \tau_{\rm aer}(\lambda_0, h, \theta)\left(\frac{\lambda_0}{\lambda}\right)^{\gamma}
= \frac{1}{\cos\theta}\int_0^h \alpha_{\rm aer}(h')\,{\rm d}h'\,\left(\frac{\lambda_0}{\lambda}\right)^{\gamma},
\]
where $\lambda_0$ is the wavelength used for the measurement of $\tau_{\rm aer}$, $\alpha_{\rm aer}$ is the so-called aerosol extinction coefficient and $\gamma$ is known as the Ångström coefficient. This exponent depends on the size distribution of the aerosols. When the aerosol particle size approaches the size of air molecules, $\gamma$ should tend towards large values (mainly dominated by accumulation-mode aerosols), and for very large particles, typically larger than µm, it should approach zero (dominated by coarse-mode aerosols). Usually, a value close to zero is characteristic of a desert environment, where the aerosol optical depth is more or less independent of the incident wavelength. Thus, the exponent $\gamma$ is an indirect measurement of the aerosol size.
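A short Python sketch of the power-law parameterisation above, scaling an aerosol optical depth from a reference wavelength to other wavelengths for a few assumed values of the Ångström coefficient. The numbers are illustrative, not site measurements.

```python
def aerosol_optical_depth(wavelength_nm, tau_ref, wavelength_ref_nm=550.0, angstrom=1.0):
    """Angstrom power law: tau_aer(lambda) = tau_aer(lambda_0) * (lambda_0 / lambda)**gamma."""
    return tau_ref * (wavelength_ref_nm / wavelength_nm) ** angstrom

# Assumed reference optical depth at 550 nm (placeholder value).
tau_550 = 0.10
for gamma in (0.0, 1.0, 2.0):   # 0 ~ coarse / desert-like aerosols, 2 ~ fine accumulation mode
    taus = {w: round(aerosol_optical_depth(w, tau_550, angstrom=gamma), 3)
            for w in (350, 450, 550, 650, 850)}
    print(f"gamma = {gamma}: {taus}")
```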
A much more direct estimation of the aerosol size is provided by the aerosol scattering phase function. It is well known that the shape of the scattering phase function depends on the aerosol size; however, within the Mie formalism, it is difficult to obtain a simple relationship between the two quantities. A different approach, the so-called Ramsauer approach, avoids this difficulty. The Ramsauer effect was discovered in 1921 and is a model known in atomic and nuclear physics. Its main advantage is its intuitive picture of an incident particle scattering off a sphere. Originally applied to electron scattering on atoms or neutron scattering on nuclei, it can also be used for light scattering on non-absorbing spherical particles. This approach highlights the key role of the forward-scattering peak, since the full width at half maximum of this peak is proportional to the inverse of the aerosol size. Also, the amount of light scattered in the forward direction is proportional to the target area, i.e. to the square of the sphere radius. In other words, at constant wavelength, a larger aerosol scatters more light in the forward direction.

One of the most popular scattering phase functions is the Henyey-Greenstein (HG) function. Henyey and Greenstein first introduced this function in 1941 to describe scattering processes in galaxies. It is a parameterisation usually used to reproduce scattering on objects that are large with respect to the incident wavelength, valid for various object types and different media. If the backscattering cannot be neglected, the HG function becomes a "double HG" and is given by
\[
P_{\rm aer}(\theta\,|\,g,f) = \frac{1-g^2}{4\pi}\left[\frac{1}{\left(1+g^2-2g\cos\theta\right)^{3/2}} + f\,\frac{3\cos^2\theta - 1}{2\left(1+g^2\right)^{3/2}}\right],
\]
where $g = \langle\cos\theta\rangle$ is the asymmetry parameter and $f$ the backward scattering correction parameter; both vary within bounded intervals, $f$ acting as a fine tuning of the amount of backward scattering. Most atmospheric conditions can be reproduced by varying the value of the asymmetry parameter, with typical values for aerosols, haze, mist, fog or rain. Changing $g$ from 0.2 to 1.0 greatly increases the probability of scattering in the very forward direction, as can be observed in Fig. [fig:henyeygreenstein] (left). Contrary to the Rayleigh scattering phase function, Henyey-Greenstein phase functions exhibit a strong forward peak directly linked to the asymmetry parameter. This pronounced forward-scattering peak can easily be two orders of magnitude greater than the values at large scattering angles. Using the Ramsauer approach, the mean radius of the aerosol size distribution can be estimated from the width of the forward peak, for a fixed incident wavelength. Figure [fig:henyeygreenstein] (right) shows the relation between $g$ values and the mean particle radius for two different incoming wavelengths. Some astroparticle projects measure this asymmetry parameter, giving a first estimation of the mean aerosol size present in the low part of the atmosphere (see Sect. [sec:exp_aerosol]).
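The double Henyey-Greenstein parameterisation above is easy to evaluate numerically. The sketch below computes the phase function for a few scattering angles and checks its normalisation over the sphere, using illustrative values of $g$ and $f$.

```python
import numpy as np

def double_henyey_greenstein(theta_rad, g, f):
    """Double Henyey-Greenstein phase function (probability per unit solid angle)."""
    mu = np.cos(theta_rad)
    forward = 1.0 / (1.0 + g**2 - 2.0 * g * mu) ** 1.5
    backward = f * (3.0 * mu**2 - 1.0) / (2.0 * (1.0 + g**2) ** 1.5)
    return (1.0 - g**2) / (4.0 * np.pi) * (forward + backward)

g, f = 0.6, 0.4                      # illustrative values only
theta = np.linspace(0.0, np.pi, 20001)

# Normalisation check: 2*pi * integral of P(theta) sin(theta) dtheta should be close to 1.
norm = 2.0 * np.pi * np.trapz(double_henyey_greenstein(theta, g, f) * np.sin(theta), theta)
print(f"normalisation = {norm:.4f}")

for deg in (0, 10, 30, 90, 180):
    print(f"theta = {deg:3d} deg   P = {double_henyey_greenstein(np.radians(deg), g, f):.4e}")
```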
Light coming from an isotropic source is scattered and/or absorbed by molecules and/or aerosols in the atmosphere. In the case of long distances or total optical depth values above a certain level, the single light scattering approximation (in which scattered light cannot be dispersed again towards the detector and only direct light is recorded) is not valid anymore. The multiple light scattering (in which photons are scattered several times before being detected) has to be taken into account in the total signal recorded. Whereas the first phenomenon reduces the amount of light arriving at the detector, the latter increases the spatial blurring of the isotropic light source. Atmospheric blur is well known for light propagation in the atmosphere and has been studied by many authors; a nice review of the relevant findings in this research field is available in the literature. Originally, these studies began with satellites imaging the Earth, where aerosol blur is considered the main source of atmospheric blur. This effect is usually called the adjacency effect, since photons scattered by aerosols are recorded in pixels adjacent to where they should be. The problem of light scattering in the atmosphere has no analytical solution. Even if approximate analytical solutions can be used in some cases, Monte Carlo simulations are usually employed to study light propagation in the atmosphere. A multitude of Monte Carlo simulations have been developed over the past years, all yielding similar conclusions: aerosol scattering is the main contribution to atmospheric blur, atmospheric turbulence being much less important. A significant source of atmospheric blur is, in particular, aerosol scattering of light at near-forward angles. The multiple scattering of light is affected by the optical thickness of the atmosphere, the aerosol size distribution and the height-dependent profiles of aerosol concentration. The contribution of multiply scattered light to the total light recorded depends not only on intrinsic properties of the atmosphere, but also on the source extent and on the integration time of the detector. Indeed, most scattered light arrives at the detector with a significant delay due to its detour. Thus, depending on the experiment and on the physics phenomenon probed, the multiply scattered light will not affect the measurements with the same strength. In the case of very-high energy gamma rays and the detection of Cherenkov light (beamed with respect to the air shower axis), the scattered light contribution is insignificant, since integration times are very short and air showers are observed only at distances below one thousand metres due to the small field of view. Due to the short integration times, the near-forward scattering angles are responsible for most of the indirect light recorded by the camera, leading to a higher contribution from scattering on aerosols than from scattering on the molecular component. Contrary to Cherenkov radiation, fluorescence light is emitted isotropically and extensive air showers are mostly observed about ten kilometres from the telescope. The light emission is extended and integration times are typically a few hundred nanoseconds.
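As an illustration of how such Monte Carlo studies proceed in principle, the toy Python sketch below propagates photons through a homogeneous, purely scattering slab and counts those emerging from the top within a narrow cone around the vertical, separating unscattered from scattered ones. The optical depth, phase function parameters and geometry are arbitrary assumptions, far simpler than the dedicated simulations referred to in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_hg_cos(g):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    u = rng.random()
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0
    term = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - term * term) / (2.0 * g)

def scatter(direction, g):
    """Deflect a unit direction vector by an HG-sampled polar angle and a uniform azimuth."""
    cos_t = sample_hg_cos(g)
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * np.pi * rng.random()
    dx, dy, dz = direction
    if abs(dz) > 0.99999:  # incoming direction (anti)parallel to the z axis
        return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t * np.sign(dz)])
    denom = np.sqrt(1.0 - dz * dz)
    return np.array([
        sin_t * (dx * dz * np.cos(phi) - dy * np.sin(phi)) / denom + dx * cos_t,
        sin_t * (dy * dz * np.cos(phi) + dx * np.sin(phi)) / denom + dy * cos_t,
        -sin_t * np.cos(phi) * denom + dz * cos_t,
    ])

def slab_experiment(n_photons=20000, tau_slab=1.0, g=0.6, cone_deg=5.0):
    """Send photons upward through a purely scattering slab of vertical optical depth
    tau_slab and count those leaving the top within a small cone around the zenith."""
    cos_cone = np.cos(np.radians(cone_deg))
    direct = scattered = 0
    for _ in range(n_photons):
        z, d, n_scat = 0.0, np.array([0.0, 0.0, 1.0]), 0
        while n_scat <= 50:
            z += -np.log(1.0 - rng.random()) * d[2]   # vertical progress in optical-depth units
            if z >= tau_slab:                         # photon escapes through the top
                if d[2] >= cos_cone:
                    if n_scat == 0:
                        direct += 1
                    else:
                        scattered += 1
                break
            if z < 0.0:                               # photon escapes through the bottom
                break
            d = scatter(d, g)                         # interaction: draw a new direction
            n_scat += 1
    return direct, scattered

direct, scattered = slab_experiment()
print(f"direct: {direct}, scattered into the cone: {scattered}, "
      f"scattered fraction: {scattered / max(direct + scattered, 1):.2%}")
```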
therefore the reconstruction of the energy and the depth of maximum x of an extensive air shower is particularly affected by the multiply scattered light recorded by the fluorescence telescopes : it results in a systematic over - estimation of these two quantities if the multiple scattered fraction is not subtracted from the total signal .four main studies about multiple scattering effect on air shower reconstruction have been done during the last ten years and are currently used in uhecr observatories : three of them based on monte carlo simulations , and the last one using only analytical calculations .contrary to analytical solutions , monte carlo simulations allow to follow each photon or photon packet emitted by an air shower and provide their number of scatterings during the propagation , and their arrival direction and time at the detector .all of these works predict the percentage of indirect light recorded at the detector within its time resolution ( usually ) , for every shower geometry and aerosol conditions . however , only the parameterisation available in has explicitly a dependence on the asymmetry parameter , a parameter directly linked to the forward scattering peak .this work was triggered by recent results obtained in the case of an isotropic point source and having demonstrated the importance of the asymmetry parameter on the point spread function of a ground - based detector .an under - estimation of the aerosol size leads to a systematic over - estimation of the energy and the depth of maximum of the reconstructed air shower . in the case of astronomical ground - based surveys ,a particular attention is required to well estimate scattered light from extended sources such as starlight , zodiacal light or airglow in the earth s atmosphere .total zenithal optical depth values are usually assumed small , making possible analytical calculations in the single - scattering approximation .however , this is not true anymore in the near - uv where zenithal optical depth values can be greater than : a multiple scattering correction is usually applied by multiplying the single - scattered light by the factor .it takes into account only multiply scattered light by the molecular component , assuming a negligible effect from aerosols due to their low aerosol optical depth value . to our knowledge , this correction originally developed by j.v .dave in 1964 has never been updated to consider aerosol contribution in cases where the aerosol component is not negligible . 
with the rise of astronomical all - sky surveys and the very low systematic uncertainties required , effect of these aerosols on multiply scattered light contribution should be investigated .in addition to atmospheric effects depending on the incident wavelength , data analysis in astroparticle physics experiments requires also recognition and correction for scattering and absorption of light by water droplets and ice crystals in clouds .compared to aerosols found in the lowest part of the atmosphere , particles composing clouds are much larger in size , producing attenuation in the visible and nir bands that is wavelength independent , the so - called grey extinction .cloud cover is highly variable in both time and spatial direction .clouds are located in the troposphere , the lowest part of the atmosphere extending up to about km in altitude .clouds are usually categorised according to their base altitude range above earth s surface : low ( up to about km ) , middle ( from km to km ) and high ( above about km ) .low - cloud category includes mainly cumulus and stratus , mostly convective and non - convective , respectively .they can be also differentiated by the fact that cumulus are vertically extended and stratus are horizontally extended .optically thick clouds ( optical depth values greater than one ) are mainly located in this part of the atmosphere .in contrast , cirrus which represent most of the highest clouds are generally non - convective and optically thin ( optical depth values lower than ) . due to their corresponding altitude , they are made of ice crystals . whereas thick cloud cover is usually easy to detect experimentally and to take into account their effect in data analysis , thin clouds as cirrus are much more difficult to be recognised .these clouds can affect recorded data , resulting in a systematic bias in analyses if their presence is not detected .depending on the astroparticle physics experiment , clouds have not the same effect on recorded data . in the case of ground - based astronomical surveys ,it is crucial to estimate the amount and the structure of grey extinction on the recorded images .indeed , cloud structure can be intricate with significant spatial variations across the field of view of the telescope , and may vary during the time interval of the exposure . whereas optically thick clouds attenuate drastically the amount of light from astrophysical objects resulting in a useless survey , thin clouds cut just a part of the light .if this cloud attenuation is well evaluated , these corresponding surveys can be used for physics analyses , permitting to extend the telescope observing time . in such a correction, the spatial structure function of clouds has to be known since the light attenuation due to the cloud cover will not be similar over the whole field of view of the camera .this effect is especially important to correct for cirrus since their structure function is more complex .such an effect , associated to the fact that clouds move in the field of view during the time exposure , is currently investigated in .the problematic is not the same for the detection of extensive air showers .clouds can either reduce the transmission of light from air showers or enhance the recorded light due to scattering in this over - density of matter . 
in these cases ,clouds are easily detected and can be removed during the analysis .however , this is not anymore true for optically thin clouds which might be unnoticed in observations but could have an impact on the shape of the longitudinal profile and on the aperture of the detector , leading to wrong estimations for the energy spectrum of very - high energy gamma rays and ultra - high energy cosmic rays . to avoid as much as possible these errors and to optimise observing time, collaborations install _ in situ _ auxiliary instruments to characterise the cloud cover above their observatories .the different techniques will be developed in sect .[ sec : exp_cloud ] .measurements of absorption and scattering properties of the atmosphere can exploit either natural sources to probe the atmosphere , or man - made illumination observing the atmosphere through backscattering .the atmospheric characteristics deduced are then used either in an event reconstruction software of the associated experiment or in a detailed atmospheric radiative transfer model as modtran , a monte carlo simulation developed by the us air force research laboratory .this section enumerates the different facilities and indirect methods to probe molecular component , aerosol component and cloud cover above and in the surroundings of an astroparticle physics experiment .a main part of this section comes from a series of workshops discussing the atmospheric effects and how to estimate them in the case of astroparticle experiments , and spanning a decade of questionings and developments between the first workshop in 2003 and the last one in 2014 . before describing the different facilities available to probe the atmosphere properties , collaborations in astroparticle physicshave always developed some methods to deduce these properties directly from the measurement of the physics phenomenon studied , i.e. the air showers in the case of very - high gamma ray and ultra - high energy cosmic ray observatories , and celestial objects in the case of astronomical survey telescopes .however , they present systematic errors on atmospheric parameters greater than the ones obtained by a facility fully dedicated to atmospheric monitoring . if some of these methods are still employed to monitor the atmosphere , others became obsolete since their corresponding experiment requires nowadays a much better precision on their measurements .this is exactly the case with this method that evaluates the aerosol optical depth using the fluorescence light emitted by extensive air showers themselves in the measurement of ultra - high energy cosmic rays .it has been developed by the hires collaboration and is based on the measurement of air showers in stereo , i.e. recorded by two fluorescence telescopes : the part of the shower viewed in common by two detectors at different distances permits to constrain the aerosol content in the atmosphere . 
assuming an atmosphere modelled only by molecular scattering and ozone absorption , the remaining attenuation attributed to aerosols can be determined .this technique requires no additional equipment and is insensitive to the absolute photometric calibration of the telescope camera .its main limitation is statistical due to the low flux of cosmic rays in this energy range ( ) , making impossible an estimation of the aerosol attenuation on a hourly basis .the systematic uncertainty on the aerosol optical depth is about , twice larger than the one obtained currently at the pierre auger observatory with a technique using a dedicated laser ( more details in sect .[ sec : exp_aerosol ] ) .scientists using imaging air cherenkov telescopes have also developed different techniques to calibrate their detector or estimate atmospheric conditions using only their instrument and air showers induced by very - high energy gamma rays .if they are in the surroundings of the telescope ( m ) , high energy muons ( ) generated by air showers can be detected : when they pass throughout or close to a telescope , they emit a cone of cherenkov light observed as a ring on the camera .the distribution of cherenkov light in this muon ring is a function only of the muon distance from the telescope , atmospheric attenuation being assumed very small .therefore this technique permits to obtain a conversion factor on the number of photoelectrons recorded by the detector for each cherenkov photon hitting the telescope .this idea was firstly proposed by a.m. hillas and j.r .patterson in 1990 , then applied to the whipple telescope and it is still the main method for absolute calibration of current imaging air cherenkov telescopes . a completely different method based on the trigger rate recorded by telescopesevaluates atmospheric effects from clouds or aerosols .the maximum of the cherenkov emission from air showers induced by very - high energy gamma rays is usually at an altitude from to km .most of clouds and aerosol layers , located below this altitude range act as attenuators on cherenkov light from the whole shower or part of it , resulting in a decrease of the trigger rate .whereas small clouds pass through the field of view of the telescope and reduce the trigger rate only on a short timescale ( a few minutes ) , long - term atmospheric attenuators as aerosol layers or large cloud covers affect the trigger rate continuously .a new quantity based on this trigger rate , the so - called cherenkov transparency coefficient , has been developed by the h.e.s.s .collaboration to characterise atmospheric attenuation during data acquisition .this coefficient is currently used only as a data quality parameter in the h.e.s.s .experiment , consisting in rejecting periods of data acquisition where the attenuation is too high .the next step would be to associate a correction factor to this coefficient in order to increase the duty cycle of the h.e.s.s .telescopes and to record events even in worse atmospheric conditions ( as it is already the case in current ground - based cosmic ray observatories ) . 
Concerning ground-based astronomical optical surveys, photometric data are usually calibrated using sets of standard stars whose brightness is known precisely from previous measurement campaigns. The first reference work is certainly Landolt's catalogue, providing magnitudes of several hundred stars near the celestial equator, measured with a photomultiplier tube on the Cerro Tololo 16 inch and the Cerro Tololo 1.5 m telescopes. These relative photometric calibrations were achieved in five broad optical bandpasses, the so-called Johnson-Kron-Cousins photometric system _UBVRI_, and reached an accuracy lower than . Then, P.B. Stetson extended this catalogue to fainter magnitudes and released a catalogue of about  stars with accurate magnitudes. Unfortunately, all these evaluations of brightness were obtained with a specific instrumental setup; therefore, a systematic uncertainty needs to be added when passing from Landolt's system to the telescope considered in the analysis. To avoid this additional systematic uncertainty, the SDSS collaboration designed its own photometric system based on the _ugriz_ bands, extending Landolt's and Stetson's works to fainter levels and increasing the number of studied stars to over one million. This _ugriz_ photometric system is now widespread in the community and is used in photometric calibration efforts for other telescopes such as SNLS or Pan-STARRS. All of them apply the same calibration algorithm, the so-called übercalibration method, which simultaneously solves for the calibration parameters and the relative stellar fluxes using overlapping observations. This is a self-calibration method minimising the error dispersion over all observations and all reference stars. It consists in separating the problem into a relative calibration, establishing an internally consistent system, and an absolute calibration, providing the conversion factor between the relative values recorded and physical fluxes. Finally, the absolute calibration is fully characterised by just a few parameters, such as the so-called zero point. It has been demonstrated that this method, combined with several observations of the same part of the sky, permits measurements of star brightness with an accuracy of about . However, the scientific programs of the next imaging surveys, composed of gigapixel CCD arrays forming a wide field of view, will demand even more precise photometric calibrations to break through this barrier. Indeed, observing the sky at larger zenith angles drastically changes the depth of atmosphere between the telescope and the source. This basic fact indicates that variations of the atmospheric attenuation should be one of the main limitations to the precision of the next ground-based all-sky surveys. The main idea to further improve the accuracy of the next ground-based photometric measurements consists in separating the instrument and the atmosphere explicitly in the calibrations. Indeed, for now, unmodelled variations of the atmosphere are responsible for almost all of the calibration error budget. The concept would be to measure the atmospheric transmission directly, using instrumentation dedicated to this task, such as an auxiliary ground-based telescope or specific instruments commonly used in atmospheric sciences. Through these three examples, it is interesting to observe how instruments fully dedicated to the atmosphere are more and more included in astroparticle experiments.
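The self-calibration idea behind übercalibration can be illustrated with a tiny least-squares toy model: each observed instrumental magnitude is modelled as a star magnitude plus an exposure zero point, and both sets of parameters are solved for simultaneously from overlapping observations. The data below are synthetic, and the model ignores the colour terms, flat-field spatial terms and atmospheric extinction terms that real implementations include.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stars, n_exposures = 50, 8
true_mag = rng.uniform(14.0, 18.0, n_stars)       # synthetic "true" star magnitudes
true_zp = rng.normal(25.0, 0.05, n_exposures)     # synthetic per-exposure zero points

# Each exposure observes a random overlapping subset of stars.
rows = []
for e in range(n_exposures):
    for s in rng.choice(n_stars, size=30, replace=False):
        m_obs = true_mag[s] + true_zp[e] + rng.normal(0.0, 0.01)   # 10 mmag noise
        rows.append((s, e, m_obs))

# Design matrix for m_obs = mag_star + zp_exposure (one unknown per star and per exposure).
A = np.zeros((len(rows), n_stars + n_exposures))
b = np.empty(len(rows))
for i, (s, e, m_obs) in enumerate(rows):
    A[i, s] = 1.0
    A[i, n_stars + e] = 1.0
    b[i] = m_obs

# The system is degenerate (a constant can be traded between stars and zero points),
# so anchor the first zero point to its known value, mimicking an absolute calibration.
A = np.vstack([A, np.eye(1, n_stars + n_exposures, n_stars)])
b = np.append(b, true_zp[0])

solution, *_ = np.linalg.lstsq(A, b, rcond=None)
zp_err = solution[n_stars:] - true_zp
print(f"zero-point residual rms: {zp_err.std() * 1000:.2f} mmag")
```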
With the increasing demand for precision, variations of the atmosphere become the limiting factor. Whereas experiments in ultra-high energy cosmic rays have already developed this specific atmospheric monitoring, by installing weather stations, IR cameras for cloud detection and lidars for aerosol detection, or by using global atmospheric models, collaborations operating imaging air Cherenkov telescopes or ground-based astronomical surveys are only now designing optimised procedures to characterise the atmosphere well. These efforts will certainly lead to the installation of _in situ_ specific instruments developed and used in atmospheric sciences, since the current telescopes provide only a global estimation of atmospheric effects, with no information about the vertical structure of the atmosphere. The two main goals will be to improve the precision of the measurements (smaller systematic uncertainties) and to relax the quality cuts on the data (a higher duty cycle of the telescope data taking). The purpose of the next sub-sections is to present these different instruments.

As explained in Sect. [sec:molecular_absorption] and Sect. [sec:molecular_scattering], the molecular component is characterised by its well-mixed gases (dinitrogen, dioxygen, etc.) and by variable gases such as water vapour or ozone, present in much lower quantities. Whereas the permanent gases fix the general properties of the atmosphere, such as its height-dependent profiles of state variables, the variable gases follow different horizontal and vertical spatial distributions in the atmosphere, requiring a dedicated monitoring. Weather stations are used to follow the evolution of the atmospheric state variables at ground level. They are usually powered by solar panels and composed of temperature, pressure, humidity and wind (speed and direction) sensors, transmitting measured values every few seconds. The precisions on the measurements are usually about  for temperature,  for pressure and about  for relative humidity. To reduce systematic uncertainties in the measurements as much as possible, ground-based weather stations have to be placed correctly; for instance, concerning wind measurements, the best way is to install an anemometer at the top of a m mast in order to respect international standards. In addition to ground-based weather stations, measurements of the height-dependent profiles of the state variables are needed to estimate atmospheric effects well, up to the top of the atmosphere. The most widespread technique is to use helium-filled weather balloons to launch meteorological radiosondes, providing values of temperature, pressure, relative humidity and wind speed about every m from ground level up to about km. Measurement errors are similar to those of weather stations, except for the relative humidity, where they are slightly larger. Meteorological data and the GPS position are sent continuously during the flight to a station located on the ground. Whereas the horizontal motion of the balloon is mainly governed by the wind, the vertical movement follows the balloon buoyancy. All these local measurements show that large daily fluctuations in temperature, pressure or humidity (and wind speed) can occur. But the operations associated with radio soundings represent a large burden, both in terms of funds and manpower. A possibility to avoid this burden is to use data from global atmospheric models. The latter are based on data assimilation, i.e.
a technique used in numerical weather prediction where the calculations take into account the real-time conditions of the atmosphere as boundary conditions. Atmospheric models provide the atmospheric state at a given time and at a given position on a latitude/longitude grid (as a good approximation, horizontal uniformity of the state variables can be assumed if the Earth's surface is more or less flat). For a given position on the grid, the values of a state variable are given at different constant pressure levels. Data assimilation consists in collecting all the available meteorological data from weather stations, meteorological balloons, satellites, aircraft, etc. Then, for a given time, the value of a state variable is known from observations, but also predicted by the atmospheric model: the data assimilation combines observations and forecasts to estimate a 3-dimensional image of the atmosphere. This algorithm is then repeated for a later time. A sketch illustrating the concept of data assimilation is given in Fig. [fig:instru_molecular] (left). The main meteorological data assimilation projects around the world are ERA-Interim, developed by ECMWF (European Centre for Medium-Range Weather Forecasts), GDAS by NCEP (National Centres for Environmental Prediction) and GEOS-5 by GMAO (Global Modelling and Assimilation Office). An analysis conducted by the Pierre Auger collaboration has validated the GDAS data when compared to local measurements made at the observatory, and has demonstrated that the air shower reconstruction was improved by incorporating GDAS data in the process instead of using the sparse measurements operated at the observatory (see Fig. [fig:instru_molecular] (right)). Inspired by these results, the JEM-EUSO collaboration is investigating the possibility of applying a similar approach to obtain the atmospheric state variables. Whereas experiments in ultra-high energy cosmic rays are already familiar with these global atmospheric models, the systematic uncertainties required up to now in ground-based gamma-ray astronomy or ground-based astronomical surveys have not needed such models. However, with the new precision goals currently under discussion, this practice will certainly change in the near future, and applying these models seems to be a natural solution. Even if the precipitable water vapour can be extracted from global atmospheric models or meteorological balloon radio soundings, this would lead to large systematic errors, since it is highly variable in space and time, with large fluctuations possible within an hour. Yet water vapour remains one of the most poorly characterised meteorological parameters. As already said in Sect. [sec:molecular_absorption], water vapour is not a well-mixed atmospheric constituent and its value measured at ground level does not reproduce the behaviour of its total column height.
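Global models such as GDAS deliver state variables on constant pressure levels rather than at fixed altitudes. A minimal Python sketch of how one might convert such levels to approximate heights with the hypsometric equation is given below; the level values are invented for illustration, and the conversion neglects humidity corrections to the virtual temperature.

```python
import numpy as np

R_D = 287.05   # specific gas constant of dry air [J / (kg K)]
G0 = 9.80665   # standard gravity [m / s^2]

def thickness(p_bottom, p_top, mean_temperature_k):
    """Hypsometric equation: layer thickness dz = (R_d * T_mean / g0) * ln(p_bottom / p_top)."""
    return R_D * mean_temperature_k / G0 * np.log(p_bottom / p_top)

# Invented pressure levels [hPa] and temperatures [K], ordered from the ground up.
pressure_hpa = np.array([1000.0, 850.0, 700.0, 500.0, 300.0, 200.0])
temperature_k = np.array([288.0, 278.0, 268.0, 252.0, 228.0, 218.0])

height_m = np.zeros_like(pressure_hpa)
for i in range(1, len(pressure_hpa)):
    t_mean = 0.5 * (temperature_k[i - 1] + temperature_k[i])
    height_m[i] = height_m[i - 1] + thickness(pressure_hpa[i - 1], pressure_hpa[i], t_mean)

for p, h in zip(pressure_hpa, height_m):
    print(f"{p:7.1f} hPa  ->  {h:8.0f} m above the reference level")
```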
In the last thirty years, two additional remote sensing methods to retrieve water vapour continuously and automatically have become available. The first method is based on the radiometry of air molecules in the microwave spectral range, and especially on the thermal emission of water vapour near its centimetre-wavelength spectral line. Indeed, in addition to the absorption lines in the optical wavelength range, water molecules also absorb electromagnetic waves in the microwave and radio domains. The method consists in measuring the electromagnetic emission from transitions between different states of rotational energy. The line width of the observed spectral emission lines is affected by different broadening processes, the dominant one in the low part of the atmosphere being the pressure broadening produced by collisions between the target molecules (in this case, H$_2$O) and other air molecules. Since there is a relation between pressure and altitude, the pressure broadening of the emission lines is used to estimate the altitude of the probed molecules through inversion methods, with a vertical resolution of the order of a kilometre. Usually, water vapour radiometers are fully steerable in both azimuth and elevation, providing a full sky coverage. Ground-based microwave radiometers are able to operate continuously for the retrieval of the precipitable water vapour with a high temporal resolution, but their measurements are not reliable during rainfall. The second method is based on the Global Positioning System (GPS). It allows the derivation of the water vapour path, as well as its vertical distribution, from the path delay of the GPS signals. Microwave radio signals transmitted by GPS satellites to Earth-based receivers are delayed (refracted) by the atmosphere. Part of this delay is due to the presence of water vapour, its corresponding value being nearly proportional to the quantity of water vapour integrated along the signal path. Given the development of GPS satellite constellations, a basic Earth-based GPS receiver permits the monitoring of the distribution of water vapour with a large time coverage, making it a solution worth considering.

Among the variable gases present in the atmosphere, ozone is probably the most fluctuating in concentration and spatial distribution after water vapour. Since ozone is a key variable for understanding climate processes and climate change, the number of instruments measuring it has increased considerably during the last decades. The total ozone column is measured from the ground using Dobson or Brewer spectrophotometers. They record ultraviolet light from the Sun at two to six different wavelengths from  to  nm. The measuring principle uses the fact that ozone absorption depends on the wavelength: whereas light is strongly absorbed at  nm, this is no longer the case at  nm. The ratio between the two recorded light signals gives a direct estimation of the ozone column in the light path from the Sun to the spectrophotometer. The Brewer instrument is based on the same measuring principle but uses five different wavelengths between  and  nm.
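The two-wavelength ratio principle used by Dobson-type instruments can be sketched in a few lines of Python: with the Beer-Lambert law, the logarithm of the ratio of the signals at a strongly and a weakly absorbed wavelength is proportional to the ozone column along the line of sight. The differential absorption coefficient, extraterrestrial ratio and measured signals below are placeholders, not calibrated instrument constants.

```python
import numpy as np

def ozone_column_du(ratio_measured, ratio_extraterrestrial, delta_alpha_per_du, airmass):
    """Invert I_strong/I_weak = (I0_strong/I0_weak) * exp(-delta_alpha * X * airmass)
    for the total ozone column X, expressed in Dobson units (DU)."""
    return np.log(ratio_extraterrestrial / ratio_measured) / (delta_alpha_per_du * airmass)

# Placeholder constants: differential ozone absorption between the two wavelengths per DU,
# the signal ratio outside the atmosphere, and the ozone airmass factor.
DELTA_ALPHA = 4.0e-3     # per DU (assumed, not a calibrated value)
RATIO_TOP = 1.80         # assumed extraterrestrial ratio I0_strong / I0_weak
AIRMASS = 1.15

for measured_ratio in (0.45, 0.50, 0.55):   # hypothetical measured signal ratios
    x_du = ozone_column_du(measured_ratio, RATIO_TOP, DELTA_ALPHA, AIRMASS)
    print(f"I_strong/I_weak = {measured_ratio:.2f}  ->  ozone column ~ {x_du:6.1f} DU")
```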
regarding the ozone vertical profile , it is measured using ozone sondes embedded in weather balloons and probing the vertical structure of the atmosphere from ground to about km .ozone sondes are composed of a pump that pulls air into a chamber with potassium iodine , producing a chemical reaction converting the potassium iodine into iodine .information about potassium iodine reaction with ozone is transmitted via radio waves to the ground .the corresponding vertical resolution is about m due to the response time of the electrochemical sensor .even if it is less widespread , ozone profile can also be measured using lidar and microwave instruments , with a particular interest in the stratosphere and mesosphere .indeed , they typically cover the altitude ranges km and km , respectively , with associated vertical resolutions of about m and km .finally , reactive gases as carbon monoxide co , volatile organic compounds vocs , oxidised nitrogen compounds nox or sulphur dioxide so have to be monitored .all of them play an important role in the chemistry of the atmosphere concerning climate or the formation of aerosols .each component requires a specific procedure and setup to measure it . since their concentration is usually very low and depends strongly on the location , explaining in detail the procedure for each chemical component is beyond the scope of this review .we refer the reader to different reports to get further information on experimental procedures concerning each component : carbon monoxide , volatile organic compounds or oxidised nitrogen compounds .these procedures are usually practiced on two types of monitoring platforms : _ in situ _ monitoring at atmospheric observatories allowing for long - term and frequent sampling , or mobile platforms as aircrafts , ships or trains providing unique opportunities to probe the horizontal and vertical distributions of chemical components .atmospheric aerosols play an important role in climate change or air quality .aerosols have many possible sources as sea spray , mineral dust , or chemical reactions of gases in the atmosphere .complexity of these mechanisms is so great that it leads to large uncertainties in our quantitative understanding of the aerosol role in climate change or air quality .therefore it is not surprising to see that a large range of detectors has been developed during the past decades to measure atmospheric aerosols , i.e. their concentration , their shape , their size , their chemical composition , etc .whereas sampling techniques are the best way to characterise aerosols , they probe only a small volume of atmosphere and are often limited in statistics complete reviews of these methods have been written by j.c .chow and p.h .mcmurry .thus , _ in situ _ measurements of radiative properties are usually a good opportunity to estimate indirectly and continuously aerosol properties .the rest of this subsection presents briefly most of these techniques .aerosol measurements in sampling techniques can be categorised following the s.k .friedlander s suggestion : measurements providing a single piece of information integrated over size and composition , and those giving more detailed resolution with respect to size and time . in both categories, the design of aerosol sampling inlet requires a careful consideration .the purpose of the inlet is to provide an aerosol sample representative of ambient air , i.e. 
a system minimising local influences , or having an aerosol transmission efficiency that does not vary with wind direction or wind speed . in other words, the ideal inlet would collect 100% of aerosols in a specified size range .as already said in sect .[ sec : aerosol_scattering ] , aerosols are hygroscopic , especially in nucleation and accumulation modes : water typically constitutes more than half of these aerosol modes at relative humidity greater than roughly . humidity control andsize cuts are the best ways not to get aerosol data biased by water .filter samplers are often used to store aerosols in the aim to analyse them later in laboratory , remaining the most robust method up to now . sincefinal results are expressed in terms of air concentration , air volume for each aerosol sample is also determined by integrating airflow rate over the sampling duration .this duration varies with locations , sampling rates or analytical sensitivities but typically ranges from several hours to a day or more under clean atmospheric conditions .instruments integrating aerosols over a given size range are often used for their simplicity .mass concentration of aerosols is a fundamental parameter : the international air quality standards require measurement of mass concentration of particles smaller than m ( pm ) or m ( pm ) aerodynamic diameter .its measurement is often done gravimetrically , where it is determined from the net aerosol mass on a filter , divided by the volume of air sampled . during this estimation ,relative humidity and temperature need to be fixed at reference values not to bias comparison with other aerosol samples .analytical precisions for gravimetric analyses are currently about g .limitations in filter measurements are gas adsorption on substrates ( typically organic gases on quartz filters ) , evaporation of semi - volatile components , and chemical reactions between collected particles and substrates . to avoid these limitations and to reduce the manpower charge ,different automated techniques for continuous or semi - continuous aerosol mass concentration measurements have been developed , as beta - meters , piezoelectric crystals or harmonic oscillating elements . while there are obvious advantages of employing these automated instruments , there are still some issues with using these instruments for long - term measurements .aerosol size distribution is made of several modes , ranging from a few nanometres to a few tens of microns .these different aerosol modes have not the same origin or the same chemical composition. 
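As a small numerical illustration of the gravimetric procedure described above, the following sketch converts a weighed filter mass difference and an integrated airflow into a PM mass concentration; the numbers are invented.

```python
def mass_concentration_ug_m3(mass_before_mg, mass_after_mg, flow_l_min, duration_h):
    """Gravimetric PM concentration: net filter mass divided by the sampled air volume."""
    net_mass_ug = (mass_after_mg - mass_before_mg) * 1000.0    # mg -> micrograms
    volume_m3 = flow_l_min * 60.0 * duration_h / 1000.0        # litres -> cubic metres
    return net_mass_ug / volume_m3

# Invented example: 0.36 mg collected at 16.7 l/min over 24 h (about 24 m^3 of air).
print(f"{mass_concentration_ug_m3(120.00, 120.36, 16.7, 24.0):.1f} ug/m^3")
```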
Size-resolved measurements are therefore useful to understand the behaviour of specific aerosol size ranges. The most widespread type of instrument is the single-particle optical counter (also called aerosol spectrometer), which measures in real time the amount of light scattered by an individual particle when it traverses a tightly focused beam of light. A fraction of the scattered light is recorded by a photo-detector and converted into a voltage pulse. The amplitude of this pulse provides the estimate of the particle size, using a calibration curve obtained from measurements of spherical particles of known size and composition. Even if they are commonly used nowadays in aerosol studies, since they are cheap and easy to use, some limitations remain: they tend to heat the aerosols, leading to systematically smaller sizes for hygroscopic aerosols; their calibration curve is obtained for a specific chemical composition, which is not always representative of the aerosols probed; and aerosols with irregular shapes bias the size estimation. Another, more sophisticated kind of instrument solves these issues and, in particular, gives valuable information about the shape and/or refractive index of atmospheric particles: the multi-angle aerosol spectrometer probe (MASP), which measures the light scattered by individual particles over two ranges of polar angles. Other techniques, based on the aerodynamic particle size (sizes greater than µm), the particle electrical mobility ( nm), the particle diffusivity (particles smaller than µm) or particle growth by condensation ( nm), are also available to estimate the aerosol size distribution.

Concerning the chemical composition of aerosols, this analysis is most of the time done in the laboratory. Two aerosol categories are mainly analysed with this procedure: ionic species and mineral dust. Ionic species, including sulfate, nitrate, chloride, sodium, ammonium, potassium, magnesium or calcium, represent a major part of the aerosol mass. Their presence is usually evaluated by applying ion chromatography to aerosol filter samples: sulfate is the most studied chemical species and is ubiquitous in aerosols, nitrate is mainly produced by the reaction of nitric acid vapour with alkaline components in aerosols, and sea salt ionic components dominate the mass of the coarse mode over oceans and coastal areas. Sea salt components, as well as mineral dust components (aluminium, silicon, iron, titanium, scandium) and trace components (nickel, copper, zinc, lead), can also be analysed by destructive methods (atomic absorption spectroscopy, AAS, or inductively coupled plasma mass spectroscopy, ICP-MS) or non-destructive methods such as instrumental neutron activation analysis (INAA), proton induced X-ray emission (PIXE), X-ray fluorescence analysis (XRF) or a scanning electron microscope (SEM) equipped with an energy dispersive X-ray system (EDX). The latter is the most widespread technique for individual particle analysis, providing the particle morphology and the elemental composition for atomic numbers greater than 11 (sodium, Na). The main limitation of this technique is that obtaining data with sufficient statistical significance becomes considerably time consuming. The EDX system is used to avoid volatilisation from aerosol samples when they are exposed to vacuum conditions and are heated by the electron beam. Even if real-time measurements of the chemical composition are available, they are still in development.
The most advanced techniques concern particulate carbon analysers, and particulate sulfur and nitrogen species analysers (see the dedicated literature for more details). Even if all these aerosol sampling techniques are very well known in atmospheric sciences, this is not yet the case in astroparticle physics. To our knowledge, aerosol sampling has been operated only at the Pierre Auger Observatory, during half a year, where aerosols were collected using filters. PIXE and SEM/EDX techniques were then applied in the laboratory to obtain the chemical composition of the aerosols. Aerosol sampling resumed one year later, this time using an aerosol spectrometer to estimate precisely the aerosol size distribution. All these measurements are an opportunity to better understand the origin of the aerosols present at the observatory.

Sampling techniques to measure aerosols are numerous. Even if they provide a precise characterisation of atmospheric particles, they probe only the air volume just around the detector and do not inform us about height-dependent aerosol properties. Last but not least, some of them, such as filter sampling, require manpower. An alternative to these sampling techniques is to estimate the aerosol radiative properties directly. Indeed, in the case of spherical particles at least, the aerosol radiative properties are linked to the aerosol properties via Mie scattering theory. Moreover, such an approach is quite natural, since the astroparticle physics experiments discussed in this review themselves record light produced by extensive air showers or coming from celestial objects. Instruments measuring aerosol radiative properties can be divided into two categories: passive techniques exploiting natural sources to probe the atmosphere, and active techniques recording light produced by an associated laser. Whereas the former are cheaper, only the latter provide a full description of the height-dependent aerosol properties. The purpose of this subsection is to list briefly the different instruments, with their associated deliverables and limitations.

Sun-photometers are probably the most common instruments in atmospheric sciences to monitor _in situ_ column-integrated aerosol optical properties in the category of passive techniques. A sun-photometer points at the Sun throughout the day thanks to a tracking system, and measures the solar radiation in different spectral bands within the visible and near-infrared spectrum. Since the original solar radiance is well known, the differences observed in sun-photometer measurements are due to the atmosphere. For each spectral band studied, it is possible to estimate the total aerosol optical depth (AOD) using the Beer-Lambert law given earlier, i.e. the total extinction of solar radiation by aerosol scattering and absorption between the top of the atmosphere and the ground-based detector. Also, simultaneous AOD measurements at several wavelengths permit the estimation of the Ångström coefficient, which gives an indication of the aerosol size distribution. Since ozone or water vapour have specific absorption bands in the atmospheric transmission spectrum, a measurement at one of these specific wavelengths permits constraining the total atmospheric column of these constituents. In a less direct way, it is possible to retrieve inversion aerosol products, such as the single scattering albedo or the aerosol phase function, from almucantar scans of the radiance combined with inversion algorithms. However, sun-photometers cannot be operated during the night, i.e.
however, sun-photometers cannot be operated during the night, i.e. exactly the periods during which astroparticle physics experiments record data. work is currently in progress to replace the sun by the moon, the difficulty being that the variation of the moon illumination is inherent to the lunar cycle. once this prototype is validated, it should increase drastically our knowledge of aerosols during nighttime. another solution, consisting in observing stars during nighttime, is also being investigated by several groups: following standard stars along their path in the sky via a tracking mode, it is possible to measure their luminosity and to estimate the atmospheric transparency by inversion algorithms. this method is similar, in a simplified approach, to techniques applied in ground-based astronomical survey telescopes to monitor the atmosphere using a star catalogue. as in the case of sun-photometers, they record signals from to about nm with several filters centred at different wavelengths and estimate the aod value and the ångström coefficient through the night. the two main instruments exploiting this idea are the uvscope instrument, based on a multi-anode photomultiplier tube, and the f/(ph)otometric robotic atmospheric monitor (fram), based on a cassegrain-type telescope coupled to a ccd camera. preliminary and promising results are already available for these two facilities. we also mention here a work in progress to estimate aerosols using an all-sky scanning infrared radiometer: even if this type of instrument is primarily dedicated to detecting clouds, monitoring of the aerosol component seems to be possible. indeed, large aerosols such as pollen or sand grains are expected to contribute up to /m to the sky radiance in the m atmospheric window. obviously, sun/star/lunar photometry is better suited to the study of widespread hazy conditions than to the study of smoke plumes: a smoke plume tends to be very dense and very localised. in the case of large aerosol sizes, typically greater than roughly m, an x-band doppler radar system can be used to measure the terminal settling velocities of aerosol particles such as volcanic ashes or water droplets. the famous model named pludix is primarily dedicated to the characterisation of rainfall within a sampling volume surrounding it. falling objects crossing the antenna beam ( ) generate power echoes backscattered to the radar with a frequency shift related to the object velocity. the received signal is then analysed with different algorithms to obtain an aerosol size distribution in 21 bands of mean diameter between and mm. the main limitation of this technique is that the chemical composition of the particles studied needs to be known in the analysis algorithms. other techniques based on the measurement of the aerosol phase function exist to estimate the aerosol size at ground level. the most famous is the integrating nephelometer, available commercially. it consists in illuminating a volume of air with a diffuse light source, at one or several wavelengths depending on the instrument model.
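a minimal sketch of the star-photometry idea is given below: assuming a simple bouguer-type relation between observed and catalogue magnitudes, the atmospheric extinction coefficient (and hence, after subtracting an assumed molecular contribution, a rough aod) is fitted from a handful of invented star measurements; all numbers are hypothetical:

```python
import numpy as np

# Illustrative star photometry: catalogue magnitudes, observed magnitudes and
# airmasses for a handful of standard stars (made-up values).
m_catalogue = np.array([3.2, 4.1, 2.8, 5.0, 3.6])
m_observed  = np.array([3.55, 4.62, 3.05, 5.75, 4.12])
airmass     = np.array([1.1, 1.8, 1.0, 2.5, 1.9])

# Bouguer-type relation: m_obs - m_cat = zero_point + k * airmass,
# where k (magnitudes per airmass) measures the total atmospheric extinction.
A = np.vstack([np.ones_like(airmass), airmass]).T
zero_point, k = np.linalg.lstsq(A, m_observed - m_catalogue, rcond=None)[0]

# Convert the extinction coefficient to an optical depth (tau = k / 1.086)
# and remove an assumed molecular (Rayleigh) contribution to get a rough AOD.
tau_total = k / 1.086
tau_rayleigh_assumed = 0.10
print("k =", k, "AOD ~", tau_total - tau_rayleigh_assumed)
```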
a photon-counting detector with its axis perpendicular to the light source records a part of the light scattered from this illuminated air volume. ranges of angular integration are typically for the total scatter coefficient and for the backscatter coefficient. even if instruments estimating directly the asymmetry parameter do not exist, the ratio of these two scatter coefficients can be linked to the parameter, giving an easy and cheap opportunity to estimate roughly the aerosol phase function. collaborations in ultra-high energy cosmic rays have also developed a technique based on the measurement of the aerosol phase function to estimate the aerosol size. aerosol phase function monitors, in conjunction with uv telescopes, are used to measure the asymmetry parameter on an hourly basis during data acquisition. the light sources emit a near-horizontal pulsed light beam in the field of view of their nearby uv telescope. each monitor contains a collimated xenon flash lamp source, firing an hourly sequence of nm and nm shots. the aerosol phase function is then reconstructed from the intensity of the light observed by the uv telescope as a function of the scattering angle, for angles between and . after corrections for geometry, attenuation and collection efficiency of each pixel, the binned signal observed is subjected to a 4-parameter fit (eq. [eq:apf_monitor]), where are the fit parameters. the first two fit parameters can be used to estimate the molecular extinction and the aerosol extinction, respectively, while and are used to estimate the aerosol size distribution. recently, a new method based on very inclined laser shots fired by a steerable system and recorded by a uv telescope was also developed and is still in progress. all the instruments presented up to now provide no information about the height dependence of aerosol properties. yet it is recognised that measuring the vertical profile of aerosols is a natural complement to total-column aerosol observations made by ground-based sun-photometers or satellites (see sect. [sec:exp_satellite_networks]). the aim is to identify aerosol layers, aerosol optical properties (backscatter and extinction coefficients at given wavelengths, ångström coefficient) and aerosol microphysical properties (concentration, size distribution, refractive index). ground-based laser facilities can monitor continuously the structure of the planetary boundary layer (the lowest part of the atmosphere), its height and its variability with time (e.g. diurnal mixing). also, retrieval of microphysical properties for elevated aerosol layers is an important point regarding the development of extensive air showers in the atmosphere and is feasible only for advanced ground-based laser facilities. the lidar (light detection and ranging) technique consists in emitting pulses of light up through the atmosphere and in recording, with an optical receiver on the ground, the light scattered back as a function of time.
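as an illustration of this kind of reconstruction, the sketch below fits a generic phase-function model (a rayleigh-like molecular term plus a henyey-greenstein aerosol term) to a synthetic binned signal; this parameterisation is a stand-in chosen for illustration and is not the experiments' own 4-parameter fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_model(theta, a, b, g):
    """Generic scattering model: a Rayleigh-like molecular term plus a
    Henyey-Greenstein aerosol term with asymmetry parameter g.
    Stand-in model for illustration, not the monitors' own parameterisation."""
    rayleigh = a * (1.0 + np.cos(theta) ** 2)
    hg = b * (1.0 - g ** 2) / (1.0 + g ** 2 - 2.0 * g * np.cos(theta)) ** 1.5
    return rayleigh + hg

# Synthetic "binned signal vs scattering angle" data (illustrative only).
theta = np.radians(np.linspace(30.0, 150.0, 25))
signal = phase_model(theta, 1.0, 0.4, 0.6)
noise = 0.03 * np.random.default_rng(1).standard_normal(theta.size)
signal_noisy = signal * (1.0 + noise)

popt, pcov = curve_fit(phase_model, theta, signal_noisy, p0=(1.0, 0.3, 0.5),
                       bounds=([0.0, 0.0, -0.99], [np.inf, np.inf, 0.99]))
print("fitted (a, b, g):", popt)
```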
the difference in light-travel time to different altitudes provides a method for probing the vertical structure of the atmosphere. aerosol optical properties can also be obtained using multi-elevation-angle measurements by a scanning lidar, allowing a more accurate estimation of the vertical and horizontal spatial extensions. light pulses emitted into the atmosphere can be scattered elastically (light re-emitted at the same wavelength) or inelastically (light re-emitted with a wavelength shift due to excitation of internal degrees of freedom in the scattering particle). since the inelastic scattering cross section is much smaller than the elastic one, a more intense light source, a longer detection time and a larger detector aperture are necessary. the lidar return signal is given by the so-called lidar equation, which in its standard form reads $$P(R) = P_0 \, \frac{c\,\tau_d}{2}\, \frac{A}{R^2}\, O(R)\, \beta(R)\, T_0(R)\, T(R) = P_0 \, \frac{c\,\tau_d}{2}\, \frac{A}{R^2}\, O(R)\, \beta(R)\, e^{-2\tau(R)},$$ where $R$ is the scattered photon distance from the optical receiver, $\tau_d$ the laser pulse duration, $A$ the collection area of the telescope, $O(R)$ the overlap factor between the telescope and the laser cone (equal to one in the ideal case), $T_0$ and $T$ the total atmospheric transmission factors, and $\beta = \beta_a + \beta_m$ the total backscatter coefficient (a: aerosol / m: molecular). in the case of elastic scattering only, $T_0 = T$, and the lidar equation reduces to the second part of the equation above, where $\tau(R)$ is the total optical depth, equal to $\int_0^R \alpha(r)\,dr$ with $\alpha = \alpha_a + \alpha_m$ the total extinction coefficient. we distinguish between lidar systems detecting only elastically scattered light from both aerosols and molecules, called elastic-backscatter lidars, and those detecting the molecular scattering separately from the aerosol scattering, either thanks to backscattered light from roto-vibrational excitation of atmospheric molecules (n/o), called raman lidars, or via rayleigh scattering with a high spectral resolution lidar (hsrl). in every technique, instruments can be operated at multiple wavelengths simultaneously. accurate retrieval of extinction and backscatter profiles without making assumptions on the aerosol properties is only possible with the measurement of two independent signals. the lidar system with the lowest complexity, and the most widespread, is the elastic-backscatter one, measuring the backscatter signal at one wavelength. in its basic form, an elastic-backscatter lidar is called a ceilometer, an optical facility that can be purchased commercially. elastic-backscatter lidars are useful to probe the vertical structure of the atmosphere. a quantity of great interest is the height of the planetary boundary layer, which can be estimated if the overlap factor of the system is known. after the molecular parameters have been estimated, using weather radio soundings or global atmospheric models, two unknowns remain to be determined in the lidar equation, leading to an underdetermined system: the aerosol backscatter coefficient and the aerosol extinction coefficient. inversion algorithms assuming a typical lidar-ratio profile ( ) are then applied to estimate the aerosol backscatter coefficient. in most cases, they provide an aerosol backscatter coefficient with an associated error of 10% and a quite uncertain aerosol extinction coefficient with a typical error of 50%. these errors reach their highest values at short wavelengths.
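the following sketch implements the elastic form of the lidar equation above as a forward model, using illustrative molecular and aerosol profiles (not real data) to generate a synthetic return signal:

```python
import numpy as np

def elastic_lidar_return(r, beta_total, alpha_total, pulse_energy=1.0,
                         c=3.0e8, tau_pulse=10e-9, area=0.1, overlap=1.0):
    """Single-wavelength elastic lidar equation (sketch):
    P(R) = P0 * (c*tau/2) * A * O(R) * beta(R) / R^2 * exp(-2 * int_0^R alpha dr)."""
    dr = np.gradient(r)
    optical_depth = np.cumsum(alpha_total * dr)        # tau(R), crude integral
    transmission_sq = np.exp(-2.0 * optical_depth)     # two-way transmission
    return (pulse_energy * (c * tau_pulse / 2.0) * area * overlap
            * beta_total / r ** 2 * transmission_sq)

# Illustrative molecular plus exponential aerosol profiles (assumed values).
r = np.linspace(100.0, 10000.0, 200)                   # range bins in metres
beta_m, alpha_m = 1.5e-6 * np.exp(-r / 8000.0), 1.0e-5 * np.exp(-r / 8000.0)
beta_a, alpha_a = 2.0e-6 * np.exp(-r / 1500.0), 1.0e-4 * np.exp(-r / 1500.0)

signal = elastic_lidar_return(r, beta_m + beta_a, alpha_m + alpha_a)
print(signal[:3])
```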
to reduce the uncertainty on the aerosol extinction coefficient to , more complex lidar facilities measuring two signal profiles are required: one channel records the total backscattered signal and the second channel a pure molecular backscattered signal, i.e. without any need for information about the molecular density profile from weather radio soundings or global atmospheric models. thus, the profiles of the aerosol backscatter coefficient and of the aerosol extinction coefficient can be determined independently from each other, and the lidar-ratio profile is directly deduced. raman lidars are based on the vibrational or rotational raman scattering from nitrogen or oxygen. even if rotational raman scattering has a cross section about thirty times higher than vibrational raman scattering, the latter is usually employed since the wavelength shift is larger, i.e. easier to detect. if the raman lidar is operated during daytime, a filter with a width of a few tenths of a nanometre has to be added, reducing considerably the light collection efficiency. hsr lidars, by contrast, can be operated equally well by day and night. they are based on the separation of the molecular scattering from the aerosol scattering using the doppler frequency shift produced when photons are scattered by particles in random thermal motion. whereas molecular velocities are described by a maxwellian distribution with an associated width of about m/s, aerosols move with velocities fixed by the wind ( m/s) and turbulence ( m/s). the resulting frequency distribution of light backscattered from the atmosphere is a narrow spike near the frequency of the laser, caused by aerosols, riding on a much broader distribution produced by the molecular component. the use of an ultra-narrowband filter makes it possible to isolate the two scattering origins. even if this technique gives better results than raman lidars in theory, it involves a much more complex system to develop and to maintain. therefore a raman lidar is preferred over an hsr lidar. for instance, several raman lidars with different designs are now under construction to fulfil the requirements on measurement precision for the next ground-based cherenkov telescope array. all the lidar techniques listed above can be used at multiple wavelengths, supplemented by polarisation channels or coupled to a sun-photometer to obtain a better estimation of particle microphysical properties: the distinction of clouds from aerosol layers is a possible application (see sect. [sec:cloud_lidar] for further details). even if lidars are now well-known techniques, one of their foundations still raises questions. it has been shown in sect. [sec:aerosol_scattering] that aerosols scatter most of the light in the forward direction, with an amplitude depending on the aerosol size. the behaviour of this forward peak is easily understood phenomenologically using e.g. the ramsauer approach. it has been demonstrated that the largest aerosols affect the scattering phase function only in the near-forward peak.
on the other side, much less light is backscattered, and none of the models, except the mie scattering theory, is able to explain this backward scattering peak. nonetheless, it is the latter that is used to probe the aerosol population in the atmosphere. this fact is not based on physics but only on technical considerations: it is easier to install a photo-detector at the ground, close to the laser facility, to record backscattered light than to move it to the top of the atmosphere to record light scattered in the forward direction. experiments in ultra-high energy cosmic rays have developed the side-scatter technique, combining a laser facility and a uv telescope, to estimate the vertical aerosol optical depth profile at nighttime (see fig. [fig:clf_geometry]). the main role of the laser facility is to produce calibrated laser test beams. typically, the beam is directed vertically. when a laser shot is fired, the uv telescope collects a small fraction of the light scattered out of the laser beam. the scattering angles of light from the beam observed by the telescope are in the range of to . two methods have been developed, both assuming horizontal uniformity for the molecular and aerosol components. the first method, the so-called data normalised analysis (dna), is an iterative procedure comparing hourly average light profiles to a reference clear night where light attenuation is dominated by molecular scattering. using a reference clear night avoids an absolute photometric calibration of the laser. the second method, the so-called laser simulation analysis (lsa), is based on the comparison of measured laser light profiles to profiles simulated with different aerosol attenuation conditions defined using a two-parameter model. in the latter, the vertical profile of the aerosol component is assumed to be described by a decreasing exponential with an associated scale height. the corresponding formula for each method is given in eq. [eq:clf_vaod], where is the amount of light from the laser beam reaching the detector at the elevation angle and is its value in the case of an aerosol-free night. tests are planned in an r&d project based in colorado with a similar uv telescope and a steerable raman lidar replacing the laser facility. the results will be interesting to cross-check the validity of each aerosol characterisation technique. recently, this side-scatter technique, requiring just a steerable laser, has been proposed as an end-to-end calibration procedure for the jem-euso telescope on the international space station and for the imaging air cherenkov telescopes of the cta experiment. whereas a basic central laser facility would be installed for the latter, it is a worldwide network of ground-based stations called the global light system (gls) which is planned for the former. more than 10 stations would be installed and would be operated remotely to generate benchmark optical signatures in the atmosphere with characteristics similar to the optical signals from extensive air showers. every year, the jem-euso telescope would overfly each station about 300 times with good atmospheric conditions and no moon.
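for the lsa-type two-parameter description, the vertical aerosol optical depth implied by an exponentially decreasing extinction profile can be written in closed form; the sketch below evaluates it for illustrative (assumed) values of the ground-level extinction and scale height:

```python
import numpy as np

def vaod_exponential_model(height_m, alpha0_per_m, scale_height_m):
    """Vertical aerosol optical depth from the ground to a given height for a
    two-parameter exponential aerosol model: alpha(z) = alpha0 * exp(-z / H),
    so tau_a(h) = integral_0^h alpha(z) dz = alpha0 * H * (1 - exp(-h / H))."""
    return alpha0_per_m * scale_height_m * (1.0 - np.exp(-height_m / scale_height_m))

# Illustrative parameters only (typical orders of magnitude, not measured values):
# ground-level aerosol extinction 0.04 km^-1 and scale height 1.5 km.
heights = np.array([500.0, 1500.0, 3000.0, 6000.0])
tau = vaod_exponential_model(heights, alpha0_per_m=0.04e-3, scale_height_m=1500.0)
print(tau)
```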
in these two examples, the lasers are used not to probe the atmospheric conditions but to estimate the reconstruction performance of the detector (energy reconstruction, angular reconstruction, trigger efficiency, etc). clouds are composed of water droplets or ice crystals attenuating the transmission of optical radiation through the atmosphere. different techniques can be applied to detect the presence of clouds: recording the cloud infrared thermal emission, observing stars in the optical wavelength range, or using lidars and detecting the light backscattered by clouds. the lidar technique described in sect. [sec:aerosol_radiative] can also be applied to detect the presence of clouds in the atmosphere. in the same way as for aerosol detection, clouds are identified as regions of strong light scattering in the recorded backscattered light profiles. a cloud detection algorithm based on the first and second derivative analysis of the signal makes it possible to retrieve the cloud altitude and the cloud thickness. this method has been, is, or is planned to be applied using an elastic-backscatter lidar in many ground-based astroparticle physics experiments. however, this technique remains poorly suited to distinguishing, for instance, between an aerosol layer and a cirrus: both are optically thin and can be at high altitude. a solution would be to measure the shape of the scatterers, since the ice particles composing cirrus have a shape much different from aerosols or water droplets. the depolarisation technique could solve this problem: when the emitted laser light is linearly polarised, the backscattered signal recorded can have a different polarisation depending on the shape of the scattering centres. typically, the depolarisation ratio is close to zero in the case of spherical particles, about for dust particles, and greater than in the case of ice particles. from a technical point of view, it consists in recording the backscattered light in two polarisation channels which are parallel (co-polarised) and perpendicular (cross-polarised) with respect to the laser polarisation. to our knowledge, this technique has never been tested in an astroparticle physics experiment, certainly because of the higher complexity compared to elastic-backscatter lidars. even if the lidar technique provides useful information on the spatial extension and the height of the cloud cover, the associated spatial structure function cannot be probed. nonetheless, this parameter can become important, for instance, in ground-based astronomical surveys. the next two methods make it possible to measure it. observing stars in the optical wavelength range is one of the two main methods to estimate the cloud cover and its associated spatial structure function. it consists in investigating the presence of stars in the field of view of a camera: using the star catalogues available to know their location in the sky and their visual magnitude, it is possible to measure the atmospheric attenuation between a ground-based camera and the considered star. if a star is not observed, it is deduced that parts of a cloud hide it. thanks to all-sky ccd cameras, it is possible to observe several hundreds of stars in the same image and to compute the spatial structure function of clouds (assuming the exposure time is short compared to the time over which clouds move appreciably). corrections are applied to take into account the decrease of the camera sensitivity with the zenith angle due to the extinction in air masses.
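a minimal sketch of how the two polarisation channels are combined is given below; the classification thresholds are illustrative assumptions, not values taken from the literature or from the text:

```python
def depolarisation_ratio(p_cross, p_parallel):
    """Volume linear depolarisation ratio: cross-polarised over co-polarised
    backscattered power."""
    return p_cross / p_parallel

def classify_scatterer(delta, spherical_max=0.05, ice_min=0.30):
    """Very rough layer classification; the thresholds are illustrative
    assumptions only."""
    if delta < spherical_max:
        return "spherical particles (droplets / fine aerosol)"
    if delta >= ice_min:
        return "ice crystals (cirrus)"
    return "non-spherical aerosol (e.g. dust)"

for delta in (0.01, 0.15, 0.45):
    print(delta, classify_scatterer(delta))
```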
limitations of this method are mainly due to local weather conditions such as snow or rain falling on the camera, and to the presence of the moon in the field of view, saturating the image and increasing the background recorded. this technique has been applied by the cta consortium to evaluate the night sky brightness and the cloud fraction (i.e. the percentage of the sky covered by clouds) for each cta candidate site. this setup was capable of detecting a star with a visual magnitude up to 6 mag at zenith. in a more efficient way, the same technique can be applied in ground-based astronomical all-sky survey telescopes, where the higher spatial resolution of the cameras required is balanced by the lower limiting star magnitudes probed by these telescopes. atmospheric emission is the result of infrared emission by certain gases such as water vapour, carbon dioxide or ozone when heated by the earth's and solar radiation. such radiation is predominant in the infrared m band. in the case of a cloud-free sky, atmospheric emission can be approximated by a grey or black body. due to their high water vapour content, clouds also radiate as a black body in the infrared and microwave ranges of the electromagnetic spectrum, and their horizontal spatial structure can be measured thanks to their higher effective temperature compared to a cloud-free sky. figure [fig:cloudsir](left) gives the atmospheric emission in cloud-free conditions for a dry atmosphere, a typical atmosphere and an atmosphere in wet conditions. contrary to wavelengths lower than m and greater than about m, the emission spectrum in this wavelength range does not look like a black body, due to absorption or emission by water vapour, carbon dioxide and ozone. in wet atmospheric conditions, the background emission is increased. the long-wave infrared window from m to m is well suited for observing clouds with an upward-viewing system, avoiding the bands related to ozone, centred at m, and to carbon dioxide, centred at m. figure [fig:cloudsir](right) compares the spectrum emitted by a typical cloud-free atmosphere to the ones produced by three different types of clouds: a cirrus at km asl, a stratus at km asl and a cumulus at km asl. in the presence of clouds, the long-wave atmospheric emission is increased. cases with stratus or cumulus, i.e.
optically thick clouds close to the earth's surface, differ markedly from a cloud-free atmosphere in the wavelength range m, making the detection of these clouds by an infrared camera easy. this is no longer true for cirrus, where the radiometric contrast with the cloud-free atmosphere is much smaller, mainly due to the fact that cirrus are optically thin and situated at high altitude, increasing the air mass between the observer and the cloud. because of the emissivity of water vapour, an ambiguity remains in distinguishing between cirrus and thin fog. one method to avoid this difficulty would be to monitor the water vapour content in the atmosphere in order to suppress its effect during the data analysis. another technique would be to use at least two filters, one centred on the ozone band at m (whose emissivity is independent of the water vapour content) and a second filter between and m. ir cameras recording the cloud infrared thermal emission and associated with a large field of view are usually employed to monitor the cloud cover in astroparticle physics experiments. contrary to the lidar technique, the cloud altitude cannot be estimated directly. however, some algorithms based on the radiance recorded by an ir camera on board a satellite or the international space station have been developed by the jem-euso collaboration and should be tested soon with the euso-balloon project, a pathfinder of the jem-euso mission. for a different purpose, the pierre auger collaboration has checked the agreement between cloud data from satellites and its own measurements of cloud cover using a ground-based laser facility: both methods agree, and cloud probability maps covering the region of the observatory are now available every 15 minutes. it has been shown in sect. [sec:exp_molecular] that it is often better to use data coming directly from global atmospheric models than to measure _ in situ _ atmospheric state variables with weather radio soundings or ground-based weather stations. this idea is not specific to state variables and can be extrapolated to aerosol data or cloud data provided by ground-based atmospheric monitoring networks or satellites. however, contrary to state variables, the precision of these data is not better than measuring aerosol or cloud properties _ in situ_. thus, their usefulness is entirely different here, since the main goal is to get a precise knowledge of the atmospheric conditions before installing an astroparticle physics experiment on a site and, consequently, to be able to estimate the atmospheric effects expected on the physics measurements (e.g. systematic uncertainties, duty cycle of the detector, etc). this work is currently being done, for instance, for future projects such as the jem-euso telescope regarding the cloud cover, the next imaging air cherenkov telescope cta to choose the site candidate offering the best atmospheric conditions, or the lsst telescope to design the best atmospheric monitoring program and to check its corresponding performance. to a lesser extent, data from ground-based networks or satellites can be used directly in a real-time atmospheric monitoring program of an astroparticle physics experiment. however, this method is valid only if the distance to the ground-based weather station or the time and space resolutions of the satellite are adequate (e.g.
ozone). ground-based atmospheric monitoring networks represent the first input into global atmospheric models regarding ground-based measurements. networks impose a standardisation of instruments to avoid biases in global data analyses. whereas the stations composing the networks are irregularly dispersed in time and location, global atmospheric models make it possible to get a uniform data set. however, if the experiment site is close to one of the elements composing such a network, the latter is a wealth of information, much more than a global atmospheric model. the biggest atmospheric network gathering measurements of the chemical composition of the atmosphere is probably the global atmosphere watch (gaw) programme of the world meteorological organisation (wmo). its main goal is to develop a network of measurement stations all around the world. currently, this network is composed of more than 400 surface-based stations, including 29 elements called global stations where all the measurements required in the gaw programme are operated (see fig. [fig:networks](left)). the atmospheric components monitored by these stations are aerosols, greenhouse gases (e.g. carbon dioxide co, methane ch), reactive gases (e.g. surface ozone o, carbon monoxide co, vocs, oxidised nitrogen compounds nox, sulphur dioxide so), ozone, etc. in these networks, aerosol data are of great value since aerosols are sampled and analysed chemically, giving a precise characterisation of aerosol properties. regarding aerosol radiative properties, several global aerosol networks are also available. the most famous is definitely the aerosol robotic network (aeronet), composed of more than 700 sun-photometers distributed over all the continents, as depicted in fig. [fig:networks](right). this network, managed by nasa and cnrs, monitors the total aerosol optical depth and the precipitable water, but also the aerosol size distribution, the single scattering albedo or the aerosol scattering phase function, using inversion algorithms on almucantar scans of solar radiance. whereas aeronet provides only properties related to the total aerosol column within the atmosphere, other smaller networks measure the vertical aerosol distribution. the gaw aerosol lidar observations network (galion) is a global aerosol lidar network including several sub-networks such as the european aerosol research lidar network (earlinet), the first aerosol lidar network established in europe, in 2000, which currently counts 27 stations distributed over europe, or the micro-pulse lidar network (mplnet), a federated network managed by nasa and counting about 30 micro-pulse lidar systems over the world. regarding clouds, we can cite the cloudnet project, still in operation in europe and monitoring at a few sites the cloud coverage and its vertical structure. networks of surface-based sensors provide a great amount of data for understanding the components of the atmosphere, but they are far from the only source of information. satellites are also available to offer accurate measurements in regions not yet covered by ground-based weather stations. satellites are usually divided into two categories: geostationary satellites, situated at about km from the earth and always observing the same area, and polar satellites, on much closer orbits (from km to km) and flying over the same part of the earth once or twice a day. whereas the former are useful to understand the time variation of atmospheric quantities
, the latter provide a much better spatial resolution of the atmosphere. satellites embed instruments which can be operated in a passive mode to detect, for instance, cloud emissivity in the infrared or sunlight scattered by aerosols or clouds, or in an active mode, as lidars, to illuminate the atmosphere and to record backscattered light. plenty of instruments on board satellites have been, are or will be in operation all around the earth to observe the atmosphere and, consequently, only a non-exhaustive list is given here. we can cite the polder satellite, the parasol satellite, the calipso satellite, the modis instrument on the terra and aqua satellites, the airs instrument on the aqua satellite or the tovs instrument on the tiros satellite for cloud coverage, water vapour or aerosol optical depth over the earth (polar); the goes satellites for cloud coverage over a specific part of the earth (geostationary); the gome instrument on the ers-2 satellite for total ozone, nitrogen dioxide and related cloud information; or the toms instrument on the nimbus-7, meteor-3 and earth probe satellites for total ozone mapping or tropospheric aerosols, etc. data from satellites are usually available publicly one year later or once the mission has ended, and can therefore be used easily to evaluate the atmospheric conditions of a candidate site for an astroparticle physics experiment. throughout this review, the close link between the performance objectives of an astroparticle physics experiment and atmospheric sciences has been demonstrated. this is not an isolated case, and the same statement could be made with other research fields such as geophysics, biology, etc. recent years have seen the development of major infrastructures around the earth in order to considerably increase the competitiveness and the sensitivity of astroparticle physics experiments. this trend has seen the emergence of large international collaborations. unlike other areas of science where measurements are operated mostly in the laboratory, research in astroparticle physics has originality in its detection techniques and in its infrastructure locations. although research in astroparticle physics is primarily intended to answer questions in particle physics, astrophysics and cosmology, it has a very close relationship with other research fields through its detection techniques and infrastructures. with this diversity of infrastructures offered in astroparticle physics, unique detectors are available in the world to better understand the earth, its biodiversity and its environment. the medium in which the sensor is located has properties varying over time and has to be continuously and precisely monitored. it is this monitoring which offers synergies with earth sciences. it is in this context that the european agency aspera (now appec) organised a conference from the geosphere to the cosmos in december 2010 at the palais de la découverte in paris (france). the goal of these two days was to promote and to encourage the development of links between large international collaborations in astroparticle physics and scientists from any other research field. initiatives can also come directly from the experiments, as is the case with the pierre auger collaboration, which organised a public workshop at cambridge (uk) in april 2011 to develop interdisciplinary sciences at the pierre auger observatory.
during this meeting, scientists from a variety of disciplines talked about the potential of the observatory site and exchanged ideas for exploiting it further. among them, we can cite the possible connection between clouds, thunderstorms and cosmic rays, the observation of elves in the high atmosphere, or the deployment of a seismic array at the observatory. the main idea is that we should promote the link between astroparticle physics and earth sciences in order to better understand today's measurements and to improve the design of future detectors. recent years have seen the emergence of large international projects in the field of astroparticle physics. the amounts of money involved are so great that we must take full advantage of these new major infrastructures. also, the level of precision required in these projects is so high that the external environment has to be understood and monitored still better. even if some embryonic initiatives exist now, we have to think in depth about a collaboration between the two communities. once a new project in astroparticle physics is planned, the question of an interdisciplinary platform should be asked. the best way would be to contact scientists from other fields and to develop together the best design to fulfil two aims: * optimising the monitoring of the external environment (land, ocean, atmosphere) *, since challenges in the next major astroparticle physics projects require measurements with ever greater precision, and * developing a multidisciplinary platform *, to host scientists from other research areas and to offer them an infrastructure to develop their own studies. also, a public release of all data related to atmospheric monitoring or oceanographic monitoring should be planned in the experiments. indeed, they are sometimes situated in areas where weather stations are not numerous, leading to a lack of geophysics data in these parts of the earth. astroparticle physics experiments can play a role in atmospheric sciences too. using the examples of the pierre auger observatory ( , , and m asl, argentina) and the h.e.s.s. experiment ( , , and m asl, namibia), let us see their potential in atmospheric sciences, since both projects have accumulated a large database of atmospheric measurements and have developed original techniques to monitor the atmosphere. probably without being aware of it, astroparticle physicists have installed these two observatories in places that look similar to a scientist in atmospheric sciences. figure [fig:biomass](left) gives the map of aod values for the months of september, october and november over five years, obtained by the parasol satellite. the highest values of aerosol concentration during the austral spring are found in china and india because of urban pollution and industry, and in indonesia, central africa and amazonia because of the phenomenon of biomass burning. it is now well known that wildfire emissions in these regions, occurring mainly during the dry season, strongly affect a vast part of the atmosphere in the southern hemisphere via long-range transport of air masses. the impact of emissions from fires on global atmospheric chemistry, and on the atmospheric burden of greenhouse gases and aerosols, is recognised even if it remains to be quantified.
to illustrate this point, figure [fig:biomass](right) gives the distribution of backward trajectories of air masses at the h.e.s.s. site. it has been obtained using the hysplit tool, a well-known air-modelling programme in atmospheric sciences for calculating air mass displacements from one region to another. a part of the air masses comes directly from the northern region of namibia, where biomass burning is typically observed. this assumption, made by studying the air mass origin, was confirmed later by measurements of the aerosol optical depth at the h.e.s.s. site. because the southern hemisphere is mainly made up of oceans, the only possible dust (i.e. atmospheric mineral aerosol) sources are argentina, south africa and australia, making another similarity between the two astroparticle physics experiments. atmospheric dust is one of the major vectors feeding open ocean surface waters with trace metals. even at extremely low concentrations, trace metals are micro-nutrients necessary for the growth of phytoplankton. in this way, the trace metals are linked to climate since they affect the capability of the marine biomass to trap co. the austral region ranging from about and s is one of the major co sinks. this region is also very remote from continents and thus atmospheric dust exhibits very low concentrations. this oceanic area is an hnlc region (high-nutrient, low-chlorophyll) and dust deposition could be a severe limiting factor for the primary production. given the key role of the austral ocean in global climate, scientists in atmospheric sciences have initiated studies characterising mineral aerosols in patagonia and more generally in south america, since argentina is suspected to be the major dust source for the oceanic region ranging between s and s. astroparticle physics experiments require ever greater precision in measurements to answer questions in particle physics, astrophysics and cosmology. some of these experiments use the atmosphere as a part of their detector. in order to reduce as much as possible the systematic uncertainties related to the atmosphere, extensive atmospheric monitoring programmes have been, are or will be developed by the collaborations. it has been shown that not all astroparticle physics experiments are at the same stage in atmospheric monitoring: whereas collaborations in ultra-high energy cosmic rays already use techniques developed in atmospheric sciences to probe atmospheric properties and correct their effect in the measurements, scientists in very-high energy gamma rays or ground-based astronomical surveys are still at a stage where atmospheric measurements are used only as a quality cut in data selection. however, this will no longer be true in the next major projects, where the challenge of environment monitoring will be a key element for developing instrumentation. in order to better carry out these future projects, it makes sense to collaborate with scientists in earth sciences to choose the best methods and techniques to reach the scientific goals. concerning atmospheric measurements, it has been shown that many instruments and techniques developed in atmospheric sciences are available. depending on the atmospheric component monitored (molecular, aerosol or cloud), the same instruments will not be used. before installing any instrument on site, it is necessary to know the effect of the chemical component planned to be measured in the wavelength range studied, and its time and spatial variations.
since in the near future astroparticle physics experiments will require extensive atmospheric monitoring programmes with many instruments, the idea of joining a worldwide atmospheric network has to be considered. indeed, experiments are sometimes situated in places with few weather stations available, and joining such networks could represent an opportunity for both research fields. the author would like to thank his colleagues of the pierre auger collaboration, and especially l. wiencke, b. keilhauer, b. dawson, v. verzi, m. will, m.i. micheletti and m. unger, for fruitful discussions during these last six years. also, k.l. thanks m. urban for having given him the opportunity to join the atmospheric monitoring task within the pierre auger collaboration during his phd thesis. finally, k.l. would like to mention encounters that have inspired him over these last years with d. veberi, r. losno and s. benzvi.
astroparticle physics and cosmology allow us to scan the universe through multiple messengers. it is the combination of these probes that improves our understanding of the universe, both in its composition and in its dynamics. unlike other areas in science, research in astroparticle physics has a real originality in its detection techniques, in its infrastructure locations, and in the observed physical phenomena, which are not created directly by humans. it is these features that make the minimisation of statistical and systematic errors a perpetual challenge. in all these projects, the environment is turned into a detector medium or a target. the atmosphere is probably the most common environmental component in astroparticle physics and requires a continuous monitoring of its properties to minimise as much as possible the associated systematic uncertainties. this paper introduces the different atmospheric effects to take into account in astroparticle physics measurements and provides a non-exhaustive list of techniques and instruments to monitor the different elements composing the atmosphere. a discussion on the close link between astroparticle physics and earth sciences ends this paper. cosmic ray, gamma ray, extensive air shower, astronomical survey, atmospheric effects, systematic errors.
many complex real-world systems, from chemical reactions inside cells to technological systems such as the world wide web, may be represented as networks. understanding the topological structure of these networks helps in understanding the behaviour of the system on which they are based. thus, there is considerable interest in elucidating the origin and form of common structural features of networks. previous reports have identified a variety of features which are common to a range of disparate networks, including the power-law distribution of vertex degrees, the 'small-world' property, and network construction from motifs, amongst others. many common network features derive from common ways in which real-world networks are formed and evolve. so, for instance, growth with preferential attachment naturally leads to a power-law vertex degree distribution. as another example, common replicative growth processes, such as growth with duplication, naturally endow networks with a certain degree of structural redundancy. thus, structural redundancy in which multiple vertices play an identical topological role is common in real-world empirical networks. in terms of system behaviour, structural redundancy can be beneficial since it naturally reinforces against attack by providing structural 'backups' should network elements fail. thus, network redundancy is related to system robustness. intuitively, two vertices are topologically equivalent if they may be permuted without altering network structure. a permutation of the vertices of a network which does not affect network adjacency is known as an _ automorphism _ and the set of network automorphisms forms a group under composition of permutations. thus, our intuitive notion of structural equivalence may be formally investigated using the mathematical language of permutation groups. crucially, symmetric networks (those with a nontrivial automorphism group) necessarily contain a certain amount of structural redundancy. in accordance with the observation that common growth processes naturally lead to structural redundancy, many empirical networks have richly structured automorphism groups.
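for very small graphs, the automorphism group can be enumerated directly by brute force, which makes the definition concrete; the following python sketch does this for a path on four vertices:

```python
from itertools import permutations

def automorphisms(adj):
    """Return all automorphisms of a small undirected graph given as an
    adjacency matrix (list of lists).  Brute force over all vertex
    permutations, so only practical for very small graphs."""
    n = len(adj)
    autos = []
    for perm in permutations(range(n)):
        # perm preserves adjacency iff adj[i][j] == adj[perm[i]][perm[j]] for all i, j
        if all(adj[i][j] == adj[perm[i]][perm[j]]
               for i in range(n) for j in range(n)):
            autos.append(perm)
    return autos

# A path on 4 vertices: its only nontrivial symmetry reverses the path.
path4 = [[0, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1],
         [0, 0, 1, 0]]
print(automorphisms(path4))   # [(0, 1, 2, 3), (3, 2, 1, 0)]
```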
in this paper we shall use the automorphism group to investigate the effect of redundancy on network eigenvalue spectra. since graph eigenvalues are well known to be related to a multitude of graph properties, there has been considerable recent interest in studying the spectra of real-world complex networks and their associated models. these studies have highlighted the fact that the spectral densities of real-world networks commonly differ significantly from those of the classical ensembles of random matrix theory. for example, in the spectral density of barabási-albert 'scale-free' networks and watts-strogatz 'small-world' networks are considered. barabási-albert networks are found to have a spectral density which consists of a 'triangle-like' bulk with power-law tails, while watts-strogatz small-world networks are found to have multiple strong local maxima in their spectral densities, which are related to the blurring of singularities in the spectral density of the highly ordered -ring structure upon which the watts-strogatz model is based. similarly, although they are not usually highly ordered, the spectral densities of real-world networks also often contain singularities. for instance, singularities at the 0 and eigenvalues are common. previous discussions have related the singularity at to local multiplicities in vertices of degree (stars), and the singularity at to complete subgraphs (cliques), although these explanations are not exhaustive. for example, the graphs in fig. [examplegraphs] have high multiplicity and eigenvalues which are not due to the presence of stars or cliques respectively. in general, since the relationship between a network and its spectrum is nontrivial, determining general conditions for the presence and strength of singularities in the spectral density is an open analytic problem. based upon the observation that high-multiplicity eigenvalues commonly associate with graph symmetries, we examined the relationship between network symmetry and spectral singularities. since symmetry can take many forms (cliques, stars and rings are all symmetric, for example), symmetry provides a flexible framework for interpreting the effect of a wide variety of redundant network structures on eigenvalue spectra. the structure of the remainder of the paper is as follows: in section [sec:background] we introduce some necessary background material on network symmetry. in particular, we examine the relationship between network topology and automorphism group structure and show how certain subgroups of the automorphism group can be related to specific network motifs. in section [sec:spec] we consider how a network's automorphism group interacts with its spectrum, and discuss how specific eigenvalues and eigenvectors associate with these motifs. we study in detail the most frequent of these motifs and their contribution to the network's spectrum. finally, we close with some general conclusions. a network may be thought of as a graph, , with vertex set, (of size ), and edge set, (of size ), where vertices are said to be _ adjacent _ if there is an edge between them. an _ automorphism _ is a permutation of the vertices of the network which preserves adjacency. the set of automorphisms under composition forms a group, , of size ; see fig. [basicexample] for an example. we say that a network is _ symmetric _ (respectively _ asymmetric _) if it has a nontrivial (respectively trivial) automorphism group.
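the following sketch computes the spectrum and a histogram estimate of the spectral density for a small example graph (a star), which already displays the degenerate zero eigenvalue mentioned above:

```python
import numpy as np

def spectral_density(adj, bins=60):
    """Eigenvalues of an undirected graph and a histogram estimate of the
    spectral density rho(lambda) = (1/N) * sum_j delta(lambda - lambda_j)."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    density, edges = np.histogram(eigenvalues, bins=bins, density=True)
    return eigenvalues, density, edges

# Example: a star with one hub joined to 5 leaves shows a degenerate
# zero eigenvalue of multiplicity 4.
n_leaves = 5
A = np.zeros((n_leaves + 1, n_leaves + 1))
A[0, 1:] = A[1:, 0] = 1.0
eigs, rho, edges = spectral_density(A, bins=10)
print(np.round(eigs, 3))   # four zeros plus +-sqrt(5)
```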
since automorphisms permute vertices without altering network structure, a network's automorphism group compactly quantifies the degree and nature of the structural redundancy it carries. this correspondence between network symmetry and redundancy forms the basis of the analysis we present in this discussion. the _ support _ of an automorphism is the set of vertices which it moves, . two sets of automorphisms and are _ support-disjoint _ if every pair of automorphisms and have disjoint supports. additionally, we say that the automorphism subgroups and generated by and are _ support-disjoint _ if and are. if this is the case, for all and , and hence for all and . thus, if and are support-disjoint then we may think of them as acting independently on the network. this notion of independent action gives us a useful means to factorize the automorphism groups of complex networks into 'irreducible building blocks'. in particular, let be a network with automorphism group generated by a set of generators . partition into support-disjoint subsets such that each cannot itself be decomposed into support-disjoint subsets. call the subgroup generated by . since each commutes with all the others, we can construct a direct product decomposition of from these subgroups: this decomposition splits the automorphism group into smaller pieces, each of which acts independently on the network. if the set of generators satisfies two simple conditions, the decomposition of eq. [decomp] is unique (up to permutation of the factors) and irreducible. in this case, we call each factor a _ geometric factor _ and the direct product factorization given in eq. [decomp] the _ geometric decomposition _ of . the motivation for this naming is that this factorization relates strongly to network geometry: each factor may be related to a subgraph of , as follows. the _ induced subgraph _ on a set of vertices is the graph obtained by taking and any edges whose end points are both in . we call the induced subgraph on the support of a geometric factor a _ symmetric motif _, denoted . thus moves the vertices of while fixing the rest of the vertices of , and is the smallest subgraph with this property.
fig. [example] shows an example network constructed from a variety of symmetric motifs commonly found in real-world networks, and its associated geometric decomposition. table [auttable] shows how the factors in the geometric decomposition of this network's automorphism group relate to distinct symmetric motifs in the network. examples of geometric decompositions of real-world networks can be found elsewhere. note that for simplicity we consider networks as undirected graphs; a directed version of this decomposition is straightforward. since large (erdős-rényi) random graphs are expected to be asymmetric, symmetric motifs are commonly over-represented in real-world networks by comparison with random counterparts. thus, they may (loosely) be thought of as particular kinds of motifs (although undirected) as studied by milo and co-workers. however, our definition is much more restrictive than that of milo and co-workers, since we single out motifs preserved by any (global) symmetry of the network. although this restriction means that we consider only a small subset of possible network motifs, it is useful since the presence of symmetric motifs may be directly linked to network spectra in a way which is not possible for general motifs. [table [auttable]: symmetric motif, geometric factor and eigenvalues for the example network of fig. [example].] the presence of singularities in the eigenvalue spectra of real-world networks has been previously observed, and reasons for certain of these peaks have been discussed. in this section we aim to extend these previous results by outlining a formal framework in which to consider general spectral characteristics of redundancy. we do so by considering interactions between a network's automorphism group and the eigenvalues of its adjacency matrix. the _ adjacency matrix _ of a simple network is the symmetric matrix $A$ with $A_{ij} = 1$ if vertices $i$ and $j$ are adjacent and $A_{ij} = 0$ otherwise. the eigenvalues of the network are the eigenvalues of its adjacency matrix, and the set of eigenvalues is the _ network's spectrum_. for undirected networks, the matrix $A$ is symmetric and therefore all eigenvalues are real and there is an orthonormal basis of eigenvectors. for the remainder of this discussion we shall focus on simple undirected networks. the spectral density of a simple network is the density of its eigenvalues, which can be written as a sum of dirac delta-functions, $\rho(\lambda) = \frac{1}{N} \sum_{j=1}^{N} \delta(\lambda - \lambda_j)$, where $\lambda_1$ is the largest eigenvalue of $A$. consider $p$, a permutation of the vertices of the network, which can be represented by a permutation matrix $P$ with $P_{ij} = 1$ if $i = p(j)$ and $P_{ij} = 0$ otherwise. the relationship between network symmetry and eigenvalue spectra depends centrally upon the fact that $p$ is an automorphism if and only if $A$ and $P$ commute. thus, if $v$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$, then $Pv$ is also an eigenvector of $A$ corresponding to $\lambda$. since $v$ and $Pv$ are generally linearly independent, this means that network symmetry (and thus redundancy) naturally gives rise to eigenvalues with high multiplicity and therefore singularities in the spectral density. in the following sections we shall develop this result a little further and show how certain network eigenvalues may be associated directly with symmetric motifs. first we need to recall the notion of quotient graph. since automorphisms permute vertices without altering network structure, a network's automorphism group may be used to partition its vertex set into disjoint structural equivalence classes called _ orbits _ (see fig. [basicexample]).
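the commutation criterion can be checked numerically; the sketch below builds the permutation matrix of a vertex permutation and tests whether it commutes with the adjacency matrix, using the 4-cycle as an example:

```python
import numpy as np

def permutation_matrix(perm):
    """Permutation matrix P with P[perm[j], j] = 1, so that (P v)_{perm(j)} = v_j."""
    n = len(perm)
    P = np.zeros((n, n))
    P[list(perm), np.arange(n)] = 1.0
    return P

def is_automorphism(adj, perm):
    """A vertex permutation is an automorphism iff its permutation matrix
    commutes with the adjacency matrix: A P = P A."""
    A = np.asarray(adj, dtype=float)
    P = permutation_matrix(perm)
    return np.array_equal(A @ P, P @ A)

# Cycle C_4: rotating the vertices is an automorphism, swapping two adjacent
# vertices while fixing the other two is not.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(is_automorphism(C4, (1, 2, 3, 0)), is_automorphism(C4, (1, 0, 2, 3)))
# -> True False
```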
for every vertex, the set of vertices to which it maps under the action of the automorphism group is called the orbit of that vertex. similarly, if is a subgroup of the automorphism group, the -orbit of a vertex is the set of vertices to which it maps under the action of . since vertices in the same orbit may be permuted without altering network structure, they are structurally indistinguishable from each other (that is, they play precisely the same topological role in the network). thus, a network's orbit structure efficiently quantifies the degree of structural redundancy the network carries. for example, the vertices in fig. [example] are coloured by orbit. since the vertices in each orbit are structurally equivalent, they may be associated with each other to form the basis of a network coarse-graining known in the context of algebraic graph theory as the _ quotient graph_. more specifically, let be the system of orbits into which the vertices of are partitioned under the action of . let ( ) be the number of edges starting from a vertex in and ending in vertices in . since the orbits partition the vertex set into disjoint equivalence classes, depends on and alone. the _ quotient _ of under the action of is the multi-digraph with vertex set and adjacency matrix . we refer to the network as the _ parent _ of its quotient. crucially, the quotient retains the unique structural elements of the parent yet, by associating structurally equivalent elements, factors out all redundancy. previous reports have shown that quotients of many empirical networks can be as small as 20% of the size of their parent networks, yet preserve precisely the key network properties which determine system function. note that we can similarly define the quotient under the action of any subgroup of the automorphism group (hence factoring out just a fraction of the redundancy). a key result for the present discussion is that, for any given graph, the set of eigenvalues of its quotient is a subset of those of the parent. given a graph with orbits and an eigenpair of the quotient graph, the eigenvalue is also an eigenvalue of the parent network, with an eigenvector consisting of an identical value on all the vertices of each orbit. thus, a network's automorphism group may be used to construct a factorization of its characteristic polynomial, via its quotient. additionally, since quotients carry less repetition than their parent networks, we find that quotient spectra generally contain less degeneracy than those of their parents. fig. [spectra] illustrates this point by giving the spectral densities of 6 representative (biological, social and technological) networks and their quotients. as expected, in each case the spectral density of the parent network contains peaks which are significantly reduced in the spectral density of its quotient. in the following section we will make this relation more explicit by associating specific network eigenvalues with specific symmetric motifs. these, together with the eigenvalues coming from the quotient, describe the entire spectrum of the network. [fig. [spectra] caption (color online): in all cases, the spectral density of the parent is in dark grey (blue online) while that of the quotient is in light grey (red online); (a) the c. elegans genetic regulatory network, (b) the www.epa.gov subnetwork, (c) a media ownership network, (d) a network between phd students and their supervisors, (e) the us power grid, (f) the yeast protein-protein interaction network; the y-axis is on a logarithmic scale: in each case, the differences in redundant eigenvalue multiplicities between the parent network and its quotient are significant (see table [spectratable]).]
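the construction of the quotient and the inclusion of its spectrum in that of the parent can be illustrated on a small star graph, whose two orbits are the hub and the set of leaves:

```python
import numpy as np

def quotient_matrix(adj, orbits):
    """Adjacency matrix B of the quotient graph: B[r, s] is the number of edges
    from one (any) vertex of orbit r to vertices of orbit s."""
    A = np.asarray(adj)
    B = np.zeros((len(orbits), len(orbits)))
    for r, orbit_r in enumerate(orbits):
        v = orbit_r[0]                      # representative vertex of orbit r
        for s, orbit_s in enumerate(orbits):
            B[r, s] = A[v, list(orbit_s)].sum()
    return B

# Star with one hub (vertex 0) and three leaves: orbits {0} and {1, 2, 3}.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]])
B = quotient_matrix(A, [[0], [1, 2, 3]])
print(np.round(np.linalg.eigvals(B), 3))    # +-sqrt(3): a subset of the parent spectrum
print(np.round(np.linalg.eigvalsh(A), 3))   # +-sqrt(3) plus a redundant 0 of multiplicity 2
```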
there have been some previous attempts to spot the eigenvalues of key subgraphs in network spectra. however, subgraph eigenvalues are not usually contained in network spectra, and in general they only interlace those of the network. nevertheless, certain eigenvalues associated with symmetric motifs are retained in network spectra. we call them _ redundant _ eigenvalues, and they are described as follows. recall the physical meaning of an eigenvalue-eigenvector pair of an undirected graph. consider a vector on the vertex set of a graph, and write for the value at a vertex. write on each vertex the sum of the numbers found on the neighbours of that vertex. if the new vector is a multiple of the original one, say times it, then the original vector is an eigenvector with eigenvalue . we shall say that an eigenvalue-eigenvector pair of a symmetric motif (considered as an induced subgraph) is _ redundant _ if, for each -orbit, the sum of the eigenvector entries over the orbit is zero. for example, in table [starstable] the redundant eigenvectors are starred: the coordinates are separated by orbits and the sum over each orbit is zero. indeed, it can be shown that if has vertices and -orbits, there is an orthonormal basis of eigenvectors of such that of them are redundant, and the remaining are constant on each orbit (see appendix [appendixa]). [table [starstable].] we say that an eigenvalue is _ redundant with multiplicity _ if there are up to linearly independent eigenvectors such that all the pairs are redundant. for example, the eigenvalue in the fourth motif of table [auttable] has multiplicity 4 but redundant multiplicity 3. the crucial property is that redundant eigenvalues are retained, with their _ redundant _ multiplicity, in the network spectrum: if is a redundant eigenvalue-eigenvector pair of a symmetric motif, then it is an eigenvalue-eigenvector pair for the whole network, where the eigenvector keeps its values on the motif and is set to zero on all vertices outside the motif. (a vertex outside the motif is adjacent to either all or none of the vertices of each -orbit, since the action permutes transitively the vertices of the motif while fixing the rest; hence the value at such a vertex remains zero and the eigenvector-eigenvalue relation is satisfied.) we call such an eigenvector _ -local _: it is constructed from a redundant eigenvector of a symmetric motif by setting entries to zero on the vertices outside. the non-redundant eigenvalues of the motif will not, in general, be retained in the network spectrum but rather will change depending on how the motif is embedded in the network (more precisely, on the topology of the quotient graph); for instance, see the examples in tables [auttable] and [starstable]. _ remark: _ the argument above applies naturally to symmetric motifs but not necessarily to single orbits: a redundant eigenvector of an orbit will not necessarily give an eigenvector of the whole network (see for instance the closing remark on table [bsm2]). the reason is that it may not be possible to treat one orbit on its own if the action is not 'independent' on this orbit.
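the zero-extension argument can be checked directly on a small example: two structurally equivalent leaves attached to a common vertex form a symmetric motif whose redundant eigenpair survives in the spectrum of the whole graph:

```python
import numpy as np

# Two structurally equivalent leaves (vertices 0 and 1) attached to vertex 2,
# which is itself attached to vertex 3.  The symmetric motif is the orbit
# {0, 1}; its redundant eigenpair is lambda = 0 with eigenvector (1, -1)
# (the entries sum to zero over the orbit).
A = np.array([[0, 0, 1, 0],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Extend the redundant eigenvector by zeros outside the motif ...
v = np.array([1.0, -1.0, 0.0, 0.0])
lam = 0.0

# ... and check that it is still an eigenvector of the whole network.
print(np.allclose(A @ v, lam * v))   # True: the redundant eigenvalue survives
```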
the smallest independent actions ( and their associated subgraphs ) are precisely given by the geometric factorization of eq .[ decomp ] .the symmetric motifs are the smallest subgraphs whose redundant eigenvalues survive to the spectrum of the whole network .on the other hand , consider the quotient graph of a network . recall that if is an eigenpair of the quotient then is an eigenpair of , where is obtained setting the identical value on all the vertices of the orbit .we say that the eigenvector of the parent network is _ lifted _ from the eigenvector of the quotient .the key result is that these two procedures explain completely the whole spectrum of : if has vertices and orbits , we can find a basis of eigenvectors such that the first are lifted from a basis of eigenvectors of the quotient and the remaining come from the redundant eigenvectors of the symmetric motifs of .see appendix [ appendixb ] for full details and table [ starstable ] for examples .finally , note that the s are constant on each orbit and the s are redundant on each orbit ( the sum of the coordinates is zero ) .recall that the spectrum of the quotient graph is a subset of the spectrum of the parent network .the redundant eigenvalues are exactly the ones ` lost ' in the spectrum of the quotient graph ( appendix [ appendixa ] ) .hence the proportion of a network s spectrum due to redundancy is obtained by comparing the size of the parent graph to the size of its quotient .this varies from network to network but can be as small as 20% .thus this phenomena is non - trivial and can account for up to 80% of the network spectrum . until now we have been counting repeated eigenvaluesseparately ( that is , we have considered eigenpairs after fixing an appropriate basis of eigenvectors ) .what can we say about the multiplicity of these redundant eigenvalues ?there is no general principle beyond the general rule of thumb that the multiplicity is directly correlated to the size of the automorphism group .for example , if a network has an orbit of vertices such that all the permutations of these vertices are allowed ( ie . acts naturally on the orbit ) then there will be a redundant -local eigenvalue with multiplicity at least , ( see appendix [ appendixc ] ) .conversely , a graph with only simple eigenvalues has an automorphism which is a subgroup of .one obvious question remains : what are the possible redundant eigenvalues associated with symmetric motifs ? in principle , there is no restriction so we should rephrase the question as : what are the most commonly ocurring redundant eigenvalues in ` real - world ' networks ?we now address this question by focussing on the most commonly ocurring symmetric motifs .most symmetric motifs ( typically more than 90% ) found in real - world networks conform to the following pattern : they consist of one or more orbits of vertices ( ) with a natural symmetric action , that is , the geometric factor ( the subgroup of symmetries permuting only vertices of the motif ) consists of _ all _ the permutations of the vertices of each orbit and hence .therefore , each -orbit is either the empty graph , or the complete graph , on vertices .every vertex not in the motif is a fixed point with respect to and hence is joined to either all or none of the vertices of each orbit .moreover , two orbits may be joined in one of only four possible ways shown in table [ tablejoints ] ( for a proof see ) . 
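Since every orbit of a basic symmetric motif is either an empty graph or a complete graph, the redundant eigenvalues most frequently inherited by a network are those of clique-like and star-like orbits, namely -1 and 0, each with multiplicity one less than the orbit size. The short sketch below (a toy check of these standard facts using networkx; the particular graphs are ours) makes this visible numerically.

```python
import numpy as np
import networkx as nx

def multiplicities(G):
    """Round the adjacency spectrum and count eigenvalue multiplicities."""
    vals = np.round(np.linalg.eigvalsh(nx.to_numpy_array(G)), 6)
    uniq, counts = np.unique(vals, return_counts=True)
    return dict(zip(uniq, counts))

# a 5-clique: a complete orbit contributes eigenvalue -1 with multiplicity n - 1
print(multiplicities(nx.complete_graph(5)))   # {-1.0: 4, 4.0: 1}

# a star with 5 leaves: an empty orbit contributes eigenvalue 0 with multiplicity n - 1
print(multiplicities(nx.star_graph(5)))       # {-2.236...: 1, 0.0: 4, 2.236...: 1}
```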
for example , the graphs in table [ auttable ] would be , in this notation , , , , and , while the last graph does not follow this pattern .c c c orbits & graphic notation & written notat .+ + & & + + & & + + & & + + & & + we call a symmetric motif as above a _ basic symmetric motif _( bsm ) , while all others which do not conform to this pattern we call _ complex_. complex motifs are rare and their spectrum can be studied separately . however , since they have a constrained shape , it is possible to systematically analyze all the possible contributions that bsms make to the spectra of the whole network .in particular , specific network eigenvalues may be directly associated with bsms .we have carried this analysis out for bsms up to 3 orbits . in all cases ,each redundant eigenvalue of a bsm will have multiplicity a multiple of ( appendix [ appendixc ] ) .there are two symmetric motifs with one orbit , and , and they are both basic . their spectrum is shown in table [ bsm1 ] .we use the notation for the ( redundant ) vector with non - zero entries 1 in the first position and on the position ( ) , and * 1 * for the vector with constant entries 1 .as predicted , each motif has a redundant eigenvalue of multiplicity , which survives as an eigenvalue of the same multiplicity in the spectrum of _ any _ network containing such a subgraph as a symmetric motif .this amounts to the usual association of the and eigenvalues to cliques and stars respectively , as discussed in previous publications .however , our general setting now allows us to go further .c c c c c notation & sym .motif & eigenvalue & multiplicity & eigenvectors + + & & & & + + & & & & + before moving on we make two brief observations .firstly , note that the following bsms can not appear in practice .call a bsm _ reducible _ if it has an -orbit joined to all other -orbits by joints of type ` ' or ` ' ( table [ tablejoints ] ) , that is , or for all . in this casewe would obtain an independent geometric factor of type just permuting the vertices of .for example , the second motif of table [ auttable ] ( a bifan ) has and as geometric factors .such motifs are included in our analysis as two separate symmetric motifs .secondly , consider the _ complement _ of a graph , that is , the graph with same vertex set and complement edge set ( two vertices are joined in if and only if they are not joined in ) .note that the complement of a bsm is also a bsm , replacing by , by , by , and viceversa .if is an eigenvalue of a bsm with multiplicity then is an eigenvalue of the complement bsm with the same multiplicity .there are 12 bsms with two orbits of vertices : 6 of these are non - reducible and it is sufficient to compute 3 cases , since the other 3 are their complement .table [ bsm2 ] summarizes the results .the first two motifs have complementary spectra while the third is self - complementary .observe that and arise again as redundant eigenvalues ( and hence survive in the network s spectrum ) , however , this time they are associated with motifs other than stars or cliques .c c c c c notation & sym . motif & eigenvalue & red .mult . 
& eigenvectors + + & & & + & + + & & & + & + + & & & + & + + define as the set of redundant eigenvalues of basic symmetric motifs up to orbits .we have shown so far that where is the golden ratio .for orbits , exactly the same analysis may be conducted .however , the number of possible different bsms with orbits increases dramatically with .we have nevertheless computed the redundant eigenvalues of most bsms with 3 orbits , as shown in table [ bsm3 ] permutations of the third orbit for the triangle - shaped bsms , where is the number of vertices of each orbit . ] .observe that the 20 non - complementary bsms organise themselves into 7 different redundant spectrum types .we have therefore shown that : it would be interesting to find out all the possible eigenvalues of bsms of any number of orbits , if there is a pattern . however this is a purely mathematical problem since their relevance ( i.e. frequency ) in real - world networks decays rapidly with the number of orbits .c c c c c c notation & & red . + & & & + & + & + & + + + & + + & + + & + + + & & & + & + & + & + + & & & + + & + + & + + & + + + & & & + & & & + + & + + & + + & + + + & & & + + & + + & + + & + + + & & & + + & + + & + + & + + in order to place these abstract results in a more concrete setting , we have computed spectral characteristics of redundancy in the real - world empirical networks of whose spectra are given in fig [ spectra ] .all high - multiplicity eigenvalues of these networks are listed in table [ spectratable ] .note that , with the exeception of in the spectrum of the network of ties between phd students and their supervisors which comes from the complex motif shown in fig [ motifsqrt5 ] each redundant eigenvalue is in our set .l | c r r network & & & + + c.elegans gr & & 147 & 6 + & 0 & 212 & 45 + epa.gov & & 23 & 0 + & 0 & 2532 & 518 + & 1 & 8 & 4 + media & & 2 & 0 + & & 13&6 + & & 32 & 6 + & 0 & 3621 & 119 + & 1 & 33 & 7 + & & 13 & 6 + phd & & 2 & 1 + & & 3 & 3 + & & 6 & 0 + & & 27 & 4 + & 0 & 507 & 51 + & 1 & 27 & 4 + & & 6 & 0 + & & 3 & 3 + & & 2 & 1 + us power & & 2 & 2 + & & 5 & 0 + & & 13 & 3 + & & 73 & 15 + & 0 & 593 & 241 + & & 5 & 0 + & 1 & 40 & 14 + & 1.1552 & 2 & 2 + & 1.4068 & 2 & 2 + & & 14 & 4 + yeast ppi & & 2 & 0 + & & 28 & 9 + & 0 & 564 & 154 + & 1 & 9 & 2 + & & 2 & 0 + this complex motif appears in the network of ties between phd students and their supervisors . the redundant eigenvalues of this motif ( starred ) survive in the spectrum of the network as a whole.,title="fig : " ] + $ ]due to the forces which form and shape them , many real - world empirical networks contain a significant ammount of structural redundancy .since structurally redundant elements may be permuted without altering network structure , redundancy may be formally investigated by examining network automorphism groups . 
by considering the relationship between network topology and automorphism group structure ,we have shown how specific automorphism subgroups may be associated with specific network motifs .furthermore , we have shown that certain network eigenvalues may be directly associated with these symmetric motifs .thus , we have explained how the presence of a variety of local network structures may be seen in network spectra and have shown that the portion of a network s spectrum associated with symmetric motifs is precisely the part of the spectrum due to redundancy .in addition we have computed the redundant spectrum of the most common symmetric motifs up to 3 orbits and any number of vertices and demonstrated their presence in a variety of real - world empirical networks .although the theoretical details are somewhat involved , in practice it is extremely easy to find the redundant portion of a networks spectrum and its associated symmetric motifs , even for large networks , using the ` nauty ` algorithm and a computational group theory package such as gap . in summary ,the symmetry approach we have outlined in this paper confirms previous results connecting network spectra with simple local network structures .additionally , since symmetry can take many forms , this approach also extends these results by providing a general means to relate network eigenvalues to a variety of disparate network structures in a simple , flexible algebraic manner . however , our results are limited by the very nature of the automorphism group : only global symmetries are taken into account , and they fail to measure other internal symmetries ( as opposed to the purely combinatorial motifs of milo and coworkers ) , since they are very sensitive to the addition of new vertices .it would be interesting to relax the group notion to that of a groupoid to see if these results can be extended in this more general setting .this work was funded by the epsrc and by a london mathematical society scheme 6 grant .let be a graph with vertices and adjacency matrix .suppose that the action of on has -orbits .we show that there is an orthogonal basis of eigenvectors such that are constant on each -orbit and are redundant ( the sum of the coordinates at each -orbit is zero ) .the proof follows is a consequence of well - known results in graph theory ( see for instance chapters 8 and 9 in ) . a partition of the vertex set of is called _ equitable _ if the number of neighbours in of _ any _ vertex in is a constant . for example , the orbits of any subgroup of gives an equitable partition .the _ quotient _ of by an equitable partition , denoted , is the directed multigraph with vertices and adjacency matrix .characteristic matrix _ of a partition is the matrix such that if the vertex of belong to and 0 otherwise .that is , in column notation with the vector with 1 s in the vertices of and 0 elsewhere .we have that is the characteristic matrix of an equitable partition if and only if since the -entry of either matrix is the number of neighbours of the vertex in . 
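The garbled condition above is, in the standard formulation (our reading, following the usual algebraic graph theory treatment), A S = S B, where A is the adjacency matrix of the graph, S the characteristic matrix of the partition and B the adjacency matrix of the quotient. A minimal numerical check on a star whose orbit partition consists of the hub and the set of leaves:

```python
import numpy as np
import networkx as nx

def characteristic_matrix(n, partition):
    """Column k carries ones on the vertices of the k-th cell and zeros elsewhere."""
    S = np.zeros((n, len(partition)))
    for k, cell in enumerate(partition):
        for v in cell:
            S[v, k] = 1.0
    return S

G = nx.star_graph(3)                   # hub 0 with leaves 1, 2, 3
A = nx.to_numpy_array(G)
partition = [{0}, {1, 2, 3}]           # orbits of the automorphism group
S = characteristic_matrix(4, partition)
B = np.array([[0.0, 3.0],              # quotient adjacency: edge counts between orbits
              [1.0, 0.0]])

print(np.allclose(A @ S, S @ B))       # True: the partition is equitable
```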
a subspace is called _ -invariant _ if for all .note that ( [ eq:1 ] ) is equivalent to saying that the space spanned by the columns of is -invariant .one can show that every non - zero -invariant subspace has an orthogonal basis of eigenvectors .furthermore , the orthogonal complement of an -invariant subspace if also -invariant .consequently , we can write and find an orthogonal basis of eigenvectors of and another for .finally note that : suppose that is a graph with vertices and -orbits , where .consider the associated geometric decomposition , and corresponding symmetric motifs .suppose that has vertices and -orbits ( which then coincide with the -orbits ) .call to the number of fixed points in . then we have for each motif we can apply the result in appendix [ appendixa ] to find an orthogonal basis of eigenvectors such that of them , say , are redundant .hence they give -local eigenvectors of , for each .note that the s are pairwise orthogonal and hence in particular are linearly independent .now choose an orthogonal basis of eigenvectors of the quotient , .each is an eigenvector of , constant on each orbit .then is an orthogonal system of vectors , that is , an orthogonal basis of eigenvectors of .+ let be a graph with an orbit of vertices such that all permutations of the vertices are automorphisms of . we demonstrate that there is a redundant eigenvalue of redundant multiplicity at least .we can assume that .let be a redundant eigenpair ( there is at least one , by appendix [ appendixa ] ) .suppose that are the entries of at .recall that any permutation of the s ( fixing the other entries ) gives an eigenvector of the same eigenvalue . since is redundant, it can not be constant on the orbit , thus we can assume without loss of generality that .let be a permutation interchanging the first and second coordinates while fixing the other entries in the orbit .thus is a multiple of the vector with values on the s . further permuting the coordinatesgives linearly independent eigenvectors of , as required .k. norlen , g. lucas , m. gebbie , and j. chuang , _ eva : extraction , visualization and analysis of the telecommunications and media ownership network _ , proceedings of international telecommunications society 14th biennial conference ( its2002 ) ( 2002 ) .
Many real-world complex networks contain a significant amount of structural redundancy, in which multiple vertices play identical topological roles. Such redundancy arises naturally from the simple growth processes which form and shape many real-world systems. Since structurally redundant elements may be permuted without altering network structure, redundancy may be formally investigated by examining network automorphism (symmetry) groups. Here, we use a group-theoretic approach to give a complete description of the spectral signatures of redundancy in undirected networks. In particular, we describe how a network's automorphism group may be used to directly associate specific eigenvalues and eigenvectors with specific network motifs.
we present a theoretical and numerical study of imaging remote sources in random waveguides , using an array of sensors that record acoustic waves .the waveguide effect is caused by the boundary of the cross - section , which traps the waves and guides the energy along the range direction , as illustrated in figure [ fig : schem ] .we restrict our study to two - dimensional waveguides , because the numerical simulations become prohibitively expensive in three dimensions .the results are similar in three - dimensional waveguides with bounded cross - section .we refer to for an analysis of wave propagation and imaging in three - dimensional random waveguides with unbounded cross - section .scattering at the boundary creates multiple traveling paths of the waves from the source to the receiver array .mathematically , we can write the wave field ( the acoustic pressure ) as a superposition of a countable set of waveguide modes , which are solutions of the homogeneous wave equation . finitely many modes propagate in the range direction at different speeds , and the remaining infinitely many modes are evanescent waves that decay exponentially with range. we may associate the propagating modes with planar waves that strike the boundaries at different angles of incidence .the slow modes correspond to near normal incidence .they reflect repeatedly at the boundary , thus traveling a long path to the array .the fast modes correspond to small grazing angles and shorter paths to the array . in ideal waveguides with straight boundaries andwave speed that is constant or varies smoothly with cross - range , the wave equation is separable and the modes are uncoupled . in particular , each mode has a constant amplitude which is determined by the source excitation .we study perturbed waveguides with small and rapid fluctuations of the boundaries and of the wave speed , due to numerous weak inhomogeneities .such fluctuations are not known and are of no interest in imaging .however , they can not be neglected because they cause wave scattering that accumulates over long distances of propagation . to address the uncertainty of the boundary and wave speed fluctuations , we model them with random processes , and thus speak of random waveguides .the array measures one realization of the random field , the solution of the wave equation in one realization of the random waveguide .that is to say , for a particular perturbed boundary and medium .when cumulative scattering by the perturbations is significant , the measurements are quite different from those in ideal waveguides .furthermore , if we could repeat the experiment for many realizations of the perturbations , we would see that the measurements change unpredictably , they are statistically unstable . the expectation ( statistical mean ) ] and mitigate the unwanted reverberations ] .the multiplication by the carrier oscillatory signal centers the support of the fourier transform of the pulse at , therefore , the angular frequency , the dual variable to , belongs to the interval ] decays exponentially with , on the length scale called the _ scattering mean free path _ of the mode .it is given by , \label{eq : scmfp_b}\ ] ] in terms of the power spectral density , the fourier transform of the covariance .we know that by bochner s theorem , so all the terms in the sum are nonnegative . 
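As a brief aside on the modal decomposition used above: the (j - 1/2) factors in the formulas correspond to transverse wavenumbers pi (j - 1/2) / D, so the longitudinal wavenumbers of the propagating modes follow from the dispersion relation beta_j(omega) = sqrt(k^2 - pi^2 (j - 1/2)^2 / D^2), with k = omega / c_o. The sketch below (our own illustration; the depth and wavenumber values are arbitrary) computes them and counts the propagating modes.

```python
import numpy as np

def mode_wavenumbers(k, D, jmax=500):
    """Longitudinal wavenumbers beta_j = sqrt(k^2 - (pi (j - 1/2) / D)^2) of the
    propagating modes of the unperturbed waveguide of depth D, consistent with
    the (j - 1/2) factors in the formulas above.  Modes with imaginary beta_j
    are evanescent and are discarded."""
    j = np.arange(1, jmax + 1)
    transverse = np.pi * (j - 0.5) / D
    propagating = transverse < k
    return np.sqrt(k**2 - transverse[propagating]**2)   # beta_1 > ... > beta_N

# illustration with D = 20 wavelengths and k = 2 pi (lengths scaled by the wavelength)
k = 2.0 * np.pi
beta = mode_wavenumbers(k, D=20.0)
print(len(beta))              # number N of propagating modes (here 40)
print(beta[:3] / k)           # normalised wavenumbers beta_j / k of the fastest modes
```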
aside from the exponential decay , the mean amplitudes also display a net phase that increases with on the mode - dependent length scales .we recall ] its expression from \\ & + \frac{\pi^2(j-1/2)^2}{d^2 \beta_j(\om)}\cr_\nu(0)\left\ { - \frac{3}{2 } + \sum_{l \ne j , l = 1}^n \frac{\left[\beta_l(\om ) + \beta_j(\om ) \right ] ( l-1/2)^2}{\beta_l(\om)(j+l-1)(j - l ) } \right\ } \\ & + \frac{\cr_\nu''(0)(j-1/2)^2}{\ell^2\beta_j(\om ) } \left\ { \frac{\pi^2}{6 } + \sum_{l \ne j , l = 1}^n \frac{\left[\beta_j(\om)-\beta_l(\om)\right ] ( l-1/2)^2}{\beta_l(\om)(j+l-1)^2(j - l)^2 } \right\ } + \kappa_j^{(e)}(\om ) , \label{eq : phase}\end{aligned}\ ] ] where and is due to the interaction of the evanescent waves with the propagating ones .it is given by \right .\nonumber \\ &\left.-\frac{(l-1/2)^2}{\beta_j^2(\om ) + \beta_l^2(\om ) } \right\ } - \frac{2 \ , \cr_\nu''(0)\ , ( j-1/2)^2}{\ell^2 \beta_j(\om)}\sum_{l = n+1}^\infty \frac{(l-1/2)^2 } { ( l - j)^2(l+j-1)^2},\end{aligned}\ ] ] where we used integration by parts to simplify the formulas derived in .the mean square mode amplitudes are \approx \frac{1}{4 b^2 } \left|{\widehat}f \left(\frac{\om-\om_o}{b } \right ) \right|^2 \sum_{l=1}^n \frac{\phi_l^2(x_o)}{\beta_l(\om ) } t_{jl}(\om , z_\ca ) \ , , \label{eq:2ndm}\ ] ] with matrix and symmetric matrix defined by , \quad j \neq l,\nonumber\\ \gamma_{jj}^{({c})}(\omega ) = & - \sum_{l\neq j , l=1}^n \gamma_{jl}^{({c } ) } ( \om ) , \quad j = 1,\ldots , n .\label{eq : gamma}\end{aligned}\ ] ] let be the eigenvalues of , in descending order , and its orthonormal eigenvectors .we have from the conservation of energy that so the limit of the matrix exponential is determined by the null space of . under the assumption that the power spectral density does not vanish for any of the arguments in ( [ eq : gamma ] ) , is a perron - frobenius matrix with simple largest eigenvalue .the leading eigenvector is given by and as grows , thus , the right handside in ( [ eq:2ndm ] ) converges to a constant on the length scale called the _equipartition distance_. 
it is the range scale over which the energy becomes uniformly distributed over the modes , independent of the source excitation .equations ( [ eq : meana ] ) , ( [ eq:2ndm ] ) and ( [ eq : limeq ] ) give that the snr ( signal to noise ratio ) of the amplitude of the mode satisfies = \frac { \left| \ee [ { \widehat}a_j(\om , z_\ca ) ] \right| } { \sqrt{\ee \left [ \big| { \widehat}a_j(\om , z_\ca ) - \ee [ { \widehat}a_j(\om , z_\ca ) ] \big|^2 \right ] } } \sim \exp \left[-\frac{z_\ca}{\cs_j(\om ) } \right].\ ] ] therefore , the mode loses coherence on the range scale , the scattering mean free path .the scaling ( [ eq : defeps ] ) of the amplitude of the fluctuations implies that so the loss of coherence can be observed at ranges of the order , as stated in ( [ eq : lr ] ) .the boundaries in these waveguides are straight , but the wave speed is perturbed as .\ ] ] here is a mean zero , statistically homogeneous random process of dimensionless arguments , with integrable autocorrelation .\label{eq : rmu}\ ] ] as in the previous section , we model the small amplitude of the fluctuations using the small dimensionless parameter defined by the scaling by the correlation length of both arguments of indicates that the fluctuations are isotropic .we assume like before that and use the same long range scaling ( [ eq : scalerb2 ] ) to study the loss of coherence of the waves due to cumulative scattering in the random medium .the model of the array data , the mean and intensity of the mode amplitudes look the same as ( [ eq : randp ] ) , ( [ eq : meana ] ) and ( [ eq:2ndm ] ) , but the scattering mean free paths , the net phases and the matrix are different .we recall their expression from ( * ? ? ?* chapter 20 ) .the scattering mean free path of the mode is given by ,\ ] ] where is the power spectral density of the stationary process with autocorrelation .\ ] ] the net phase of the mode is + \kappa_j^{(e)}(\om ) , \label{eq : phrm}\ ] ] where and the last term is due to the interaction of the evanescent modes with the propagating ones .\ ] ] the matrix is symmetric , with entries given by , \quad j \neq l , \nonumber \\\gamma_{jj}^{({c})}(\omega ) & = - \sum_{l\neq j , l=1}^n \gamma_{jl}^{({c})}(\om ) , \quad j=1,\ldots , n .\label{eq : gammarm}\end{aligned}\ ] ] as before , we denote its eigenvalues by , and its orthonormal eigenvectors by , for .moreover , assuming that the power spectral density does not vanish at any of the arguments , we obtain from the perron - frobenius theorem that the null space of is one - dimensional and spanned by the long range limit of the matrix exponential is as in ( [ eq : lr ] ) , and the equipartition distance is given by , in terms of the largest non - zero eigenvalue of .it is not difficult to see by inspection of formulas ( [ eq : scmfp_b ] ) and ( [ eq : scrm ] ) that the scattering mean free paths and the net phase range scales decrease monotonically with the mode index . 
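The role of the coupling matrix and of its spectral gap can be made concrete with a schematic computation: for any symmetric matrix with non-negative off-diagonal entries and zero row sums, the mean mode powers relax under the matrix exponential to the uniform (equipartition) distribution on the range scale set by the largest non-zero eigenvalue. The coupling coefficients below are invented for illustration only; the paper's matrix involves the power spectral densities of the boundary and medium fluctuations.

```python
import numpy as np
from scipy.linalg import expm

def equipartition_distance(Gamma):
    """Gamma: symmetric coupling matrix with non-negative off-diagonal entries
    and zero row sums.  Its eigenvalues satisfy 0 = Lambda_1 > Lambda_2 >= ...,
    and exp(z * Gamma) relaxes to the equipartition state on the scale -1 / Lambda_2."""
    vals = np.sort(np.linalg.eigvalsh(Gamma))[::-1]    # descending, 0 first
    return -1.0 / vals[1]

# schematic 4-mode coupling (made-up numbers, not the paper's values)
C = np.array([[0.00, 0.30, 0.10, 0.05],
              [0.30, 0.00, 0.40, 0.10],
              [0.10, 0.40, 0.00, 0.50],
              [0.05, 0.10, 0.50, 0.00]])
Gamma = C - np.diag(C.sum(axis=1))

z_eq = equipartition_distance(Gamma)
p0 = np.array([1.0, 0.0, 0.0, 0.0])     # all energy in the first mode at the source
print(z_eq)
print(expm(5.0 * z_eq * Gamma) @ p0)    # nearly uniform: energy equipartition
```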
to obtain a quantitative comparison of the net scattering effects of boundary and medium perturbations , we consider here and in the numerical simulations two examples of autocorrelations of the fluctuations and .the conclusions drawn below extend qualitatively to all fluctuations , but obviously , the scales depend on the expressions of and , the depth of the waveguide and the correlation length relative to .we take henceforth , so that .the autocorrelation of the boundary fluctuations is of the so - called matrn form with power spectral density ^ 4}.\ ] ] the correlation length is , and the amplitude of the fluctuations is scaled by . the characteristic scales , and the equipartition distance are plotted in figure [ fig:1 ] .the medium fluctuations have the gaussian autocorrelation with correlation length and amplitude scaled by .the characteristic scales , and the equipartition distance are plotted in figure [ fig:2 ] . [ cols="^ " , ] to motivate the figure of merit ( [ eq : imh7 ] ) and illustrate the effect of the optimization on the image , let us set and consider a gaussian pulse with bandwidth .we display the absolute value of the image in the right plot of figure [ fig : cr ] . for comparison ,we show in the left plot of figure [ fig : cr ] the image with the uniform weights .it has prominent fringes in the cross - range , which are mitigated by the optimization over the weights .we do not get the best cross - range resolution with the weights ( [ eq : imh8 ] ) .the optimal for focusing in cross - range has components it maximizes the ratio of the peak of the image and its mean square along the cross - range line at , and gives the image .\ ] ] we show it in the middle plot of figure [ fig : cr ] , and indeed , it has smaller fringes along the axis .however , the range resolution is worse than that given by the optimal weights .it is easy to see that the optimal for focusing in range , the maximizer of has the components with constant when we have for all , and therefore .for all other we have , and the image is given by \ ] ] our optimization finds a compromise between cross - range and range focusing , which is achieved at the maximum of the figure of merit .we can determine explicitly the cross - range and range resolution of under the assumption that ( that is , ) .then , we can replace the sum over the modes by an integral over the variable ] ) .we can not use coherent imaging for such data no matter how we weight its components .the imaging function follows from equations ( [ eq : randp ] ) , ( [ eq : im3 ] ) and ( [ eq : modedec ] ) , for the full aperture array we compute its mean and intensity using the moment formulas ( [ eq : meana ] ) and ( [ eq:2ndm ] ) .we have \approx \frac{1}{4}\sum_{j=1}^n \frac{w_j}{\beta_j^2(\om_o ) } \phi_j(x ) \phi_j(x_o ) \cf_j(z ) \exp\left[-\frac{z_\ca}{\cs_j(\om_o ) } - i \frac{z_\ca}{\cl_j(\om_o)}\right ] \,,\ ] ] with mode pulses defined in ( [ eq : imh3 ] ) .this expression is similar to that of the imaging function in ideal waveguides given by ( [ eq : imh2 ] ) , except that the contribution of the mode is damped on the range scale and is modulated by oscillation on the range scale .this oscillation must be removed in order to focus the image , which is why we should allow the weights to be complex .the intensity of the image is \approx & \frac{1}{4 } \sum_{j , j'=1}^{n } \frac{w_j \overline{w_{j'}}}{\beta_j^{3/2}(\om_o)\beta_{j'}^{3/2}(\om_o ) } \int_{-\infty}^\infty \frac{d \om}{2 \pi } \int_{-\infty}^\infty \frac{d \om'}{2 \pi } 
\ee\left [ \overline{{\widehat}{a}_j(\omega , z_\ca)}{\widehat}a_{j'}(\om',z_\ca ) \right ] \nonumber \\ & \times \ , \phi_j(x ) \phi_{j'}(x)e^{i\left[\beta_{j'}(\om')- \beta_j(\omega)\right ] z}\end{aligned}\ ] ] and its square norm is given by = \int_0^d dx \int_{-\infty}^\infty dz \ee\left[\left|\ci(\bx ; \bw ) \right|^2\right ] \nonumber \\ & \hspace{0.3in}=\frac{1}{4 } \sum_{j=1}^{n } \frac{|w_j|^2 } { \beta_j^{3}(\om_o ) } \hspace{-0.02in}\int_{-\infty}^\infty \frac{d \om}{2 \pi } \hspace{-0.02in}\int_{-\infty}^\infty \frac{d \om'}{2 \pi } \ee\left [ \overline{{\widehat}{a}_j(\omega , z_\ca)}{\widehat}a_{j}(\om',z_\ca ) \right ] \hspace{-0.02 in } \int_{-\infty}^\infty \hspace{-0.1in}dz \ , e^{i\left[\beta_{j}(\om')- \beta_j(\omega)\right ] z } \\ & \hspace{0.3in}\approx \frac{1}{4 } \sum_{j=1}^{n } \frac{|w_j|^2 } { \beta_j^{3}(\om_o)\beta'_j(\om_o ) } \int_{-\infty}^\infty \frac{d \om}{2 \pi}\ , \ee\left [ \left|{\widehat}{a}_j(\omega , z_\ca)\right|^2 \right]\ , , \end{aligned}\ ] ] because of the orthonormality of the eigenfunctions .recalling the moment formula ( [ eq:2ndm ] ) and using equation ( [ eq : modespeed ] ) , we obtain & \approx \frac{c_o \|f\|^2}{16 k_o b } \sum_{j=1}^n \frac{|w_j|^2}{\beta_j^2(\om_o ) } \sum_{l=1}^n \frac{\phi_l^2(x_o ) t_{jl}(\om_o , z_\ca)}{\beta_l(\om_o ) } .\end{aligned}\ ] ] the weights must compensate for the oscillations in ( [ eq : meani ] ) in order for $ ] to peak at the source location .thus , we let , \qquad w_j^+ = |w_j| , \label{eq : weightsmod}\ ] ] and maximize \right|^2 } { \ee \left [ \| \ci(\cdot;\bw ) \|^2 \right ] } \sim \frac { \left [ \displaystyle \sum_{j=1}^n \frac{w_j^+ \phi_j^2(x_o)}{\beta_j^2(\om_o ) } \exp\left(-\frac{z_\ca}{\cs_j(\om_o)}\right ) \right]^2}{\displaystyle \sum_{j=1}^n \frac{(w_j^+)^2}{\beta_j^2(\om_o)}\displaystyle \sum_{l=1}^n \frac{\phi_l^2(x_o ) \ , t_{jl}(\om_o , z_\ca)}{\beta_l(\om_o ) } } ,\label{eq : thmt}\end{aligned}\ ] ] over the vectors with non - negative entries , and euclidian norm . the symbol denotes approximate , up to a multiplicative constant , as before .the optimal weights are given by with positive constant determined by the normalization .they are damped exponentially with range on the scale given by the mode dependent scattering mean free paths .the optimization detects the modes that are incoherent , i.e. , the indexes for which , and suppresses them in the data .in this section we present numerical simulations and compare the results with those predicted by the theory .the setup is as described in section [ sect : scat ] , with autocorrelation functions ( [ eq : rnu ] ) and ( [ eq : rmug ] ) of the perturbations of the boundary and of the wave speed , in a waveguide of depth .all lengths are scaled by the central wavelength , and the bandwidth satisfies .for example , we could have the central frequency and the unperturbed wave speed / s , so that m and . 
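Returning briefly to the mode filters derived at the end of the previous section: their essential feature is the exponential damping with range on the mode-dependent scattering mean free paths, which suppresses the modes that have lost coherence by the time they reach the array. The sketch below keeps only this feature; the full expression, which also involves the mode eigenfunctions and wavenumbers, is not reproduced in the text, so this is a simplified stand-in with made-up mean free paths that decrease with the mode index, and the unit-norm normalisation is an assumption.

```python
import numpy as np

def coherent_mode_weights(S, z):
    """Damp mode j exponentially on the scale of its scattering mean free path
    S[j]; modes with S[j] much shorter than the array range z are filtered out."""
    w = np.exp(-z / np.asarray(S, dtype=float))
    return w / np.linalg.norm(w)

# made-up mean free paths decreasing with the mode index, as in figures 1 and 2
S = 300.0 / np.arange(1, 11) ** 2
print(np.round(coherent_mode_weights(S, z=50.0), 3))
```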
to illustrate the cumulative scattering effect on the imaging process, we consider several ranges of the array , from to .the details on the numerical simulations of the array data are in appendix [ sect : ap ] we begin in figure [ fig : homo ] with the results in an ideal waveguide , with array at range .we plot on the left the image with the optimal weights and on the right the theoretical weights ( [ eq : imh8 ] ) ( in red ) and the numerically computed weights ( in blue ) .the weights are computed by minimizing , with defined in ( [ eq : fig_merrit ] ) .the optimization is done with the matlab function _ fmincon _, over weights , with constraints for , and normalization the image is very similar to that predicted by the theory ( the right plot in figure [ fig : cr ] ) , and the optimal weights are in agreement , as well . .left : image with the numerically computed weights .the abscissa is in and the ordinate is the cross - range in .right : theoretical weights ( in red ) and numerical ones ( in blue ) vs. mode index.,width=226 ] the analysis for random waveguides in section [ sect : random ] is based on the theoretical figure of merit ( [ eq : thm ] ) , which is close to only when the image is statistically stable .the theory in predicts that stability holds for the given bandwidth , in the asymptotic limit .we have a finite , and to stabilize the optimization so that we can compare it with the theory , we need to work with a slight modification of the figure of merit ( [ eq : fig_merrit ] ) , where is a local spatial average of the image around . in our regimethe theory predicts that for all the modes that remain coherent , as shown in figures [ fig:1 ] and [ fig:2 ] .therefore , we can neglect the phase factors in ( [ eq : weightsmod ] ) , and optimize directly over positive weights .the optimization is done with the matlab function _ fmincon _, as before , but we regularize it by asking that the weights be monotone decreasing with the mode index .that is to say , we work with the constraints in waveguide with perturbed boundary and array at range ( left ) and in waveguide with perturbed medium and array at range ( right ) .the weights are uniform , for .the abscissa is in and the ordinate is the cross - range in ., title="fig:",width=226 ] in waveguide with perturbed boundary and array at range ( left ) and in waveguide with perturbed medium and array at range ( right ) .the weights are uniform , for .the abscissa is in and the ordinate is the cross - range in ., title="fig:",width=226 ] without weight optimization the images are noisy , with spurious peaks .we illustrate this in figure [ fig : kmimages ] , where we plot with uniform weights , for .the image in the left plot is in a waveguide with perturbed boundary and array at range .the image in the right plot is in a waveguide with perturbed medium and array at range .both images are noisy .the results in figure [ fig:1 ] predict that half of the modes remain coherent at in the waveguide with perturbed boundaries ( for ) .therefore the image is not bad , and can be improved further by the optimization , as shown below .the results in figure [ fig:2 ] show that all the modes are almost incoherent at in the waveguide with perturbed medium ( for ) .the image is noisy , with prominent spurious peaks , and can not be improved by optimization , as shown below . 
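For readers without access to MATLAB, the fmincon call described above has a close analogue in scipy. The sketch below maximises a stand-in figure of merit over bounded, non-negative weights with a unit-norm equality constraint; the exact normalisation and the monotonicity regularisation used in the paper are not spelled out in the text, so both the constraint and the toy objective here are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def optimise_weights(neg_figure_of_merit, N):
    """Maximise the image figure of merit over mode weights subject to
    0 <= w_j <= 1 and ||w|| = 1 (unit Euclidean norm assumed)."""
    w0 = np.ones(N) / np.sqrt(N)
    constraints = [{"type": "eq", "fun": lambda w: np.dot(w, w) - 1.0}]
    bounds = [(0.0, 1.0)] * N
    res = minimize(neg_figure_of_merit, w0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x

# toy objective standing in for -J(w): it favours the first few (coherent) modes
penalty = np.linspace(0.0, 2.0, 10)
toy_objective = lambda w: -np.sum(w * np.exp(-penalty))
print(np.round(optimise_weights(toy_objective, 10), 3))
```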
.left : image with the numerically computed weights .the abscissa is in and the ordinate is the cross - range in .right : theoretical weights ( in red ) and numerical ones ( in blue ) vs. mode index.,width=226 ] . left : image with the numerically computed weights .the abscissa is in and the ordinate is the cross - range in .right : theoretical weights ( in red ) and numerical ones ( in blue ) vs. mode index.,width=226 ] . left : image with the numerically computed weights .the abscissa is in and the ordinate is the cross - range in .right : theoretical weights ( in red ) and numerical ones ( in blue ) vs. mode index.,width=226 ] . left : image with the numerically computed weights .the abscissa is in and the ordinate is the cross - range in .right : theoretical weights ( in red ) and numerical ones ( in blue ) vs. mode index.,width=226 ] . left : image with the numerically computed weights .the abscissa is in and the ordinate is the cross - range in .right : theoretical weights ( in red ) and numerical ones ( in blue ) vs. mode index.,width=226 ] we show in figures [ fig:50bdry]-[fig:150bdry ] the results of the optimization in a waveguide with perturbed boundary and array at ranges , and .the local average of the image in ( [ eq : fig_num ] ) is over an interval of length in range and of length , and in cross - range , respectively .the weights obtained with the numerical optimization are in reasonable agreement with those predicted by the theory .the resolution of the images deteriorates as we increase because more of the higher indexed modes become incoherent .figures [ fig:25int]-[fig:50int ] show the results in a waveguide with perturbed medium and array at ranges and . herethere is no trade - off between resolution and robustness of the image , because most modes lose coherence on roughly the same range scale .coherent imaging can be done at range , and the numerical weights agree with those predicted by the theory .however , at range the optimization fails to improve the image .we have carried out a comparative theoretical and numerical study of wave scattering in two types of random waveguides with bounded cross - section : waveguides with random inhomogeneities in the bulk medium and waveguides with random perturbations of the boundary .the wave field is a superposition of waveguide modes with random amplitudes .coherent imaging relies on the coherent part of the amplitudes , their expectation .however , this decays with the distance of propagation due to cumulative scattering at the random inhomogeneities and boundary perturbations .the incoherent part of the amplitudes , the random fluctuations gain strength and become dominant at long ranges .the characteristic range scales of decay of the coherent part of the mode amplitudes are called scattering mean free paths .they are frequency and mode - dependent , and they decrease monotonically with the mode index . in waveguides with random boundariesthe mode dependence is very strong .thus , we can image with an adaptive approach that detects and suppresses the incoherent modes in the data in order to improve the image .the high indexed modes are needed for resolution but they are the first to become incoherent . 
thus , there is a trade - off between the resolution and robustness of the image , which leads naturally to an optimization problem solved by the adaptive approach .it maximizes a measure of the quality of the image by weighting optimally the mode amplitudes .such mode filtering does not work in waveguides with random media because there the modes have similar scattering mean free paths .all the modes become incoherent at essentially the same propagation distances and incoherent imaging should be used instead .there is a large range interval between the scattering mean free paths of the modes and the equipartition distance , where incoherent imaging can succeed .the equipartition distance is the characteristic range scale beyond which the energy is uniformly distributed between the modes , independent of the initial state .the waves lose all information about the source at this distance and imaging becomes impossible .incoherent imaging is not useful in waveguides with random boundaries because the equipartition distance is almost the same as the scattering mean free paths of the low indexed modes .once the waves become incoherent all imaging methods fail .we would like to thank dr adrien semin for carrying out the numerical simulations with montjoie .the work of l. borcea was partially supported by the afsor grant fa9550 - 12 - 1 - 0117 , the onr grant n00014 - 12 - 1 - 0256 and by the nsf grants dms-0907746 , dms-0934594 .the work of j. garnier was supported in part by the erc advanced grant project multimod-267184 .the work of c. tsogka was partially supported by the european research council starting grant project adaptives-239959 .in the numerical simulations the source is supported in a disk of radius , and it emits a pulse modulated by the carrier signal .the array has receivers located at , with , .the wave propagation in waveguides with perturbed media is simulated by solving the wave equation as a first order velocity - pressure system with the finite element method described in .it is a second order discretization scheme in space and time , and in the simulations we used spatial mesh size in cross - range and range , and time discretization step determined by the cfl condition with the maximal value of the speed of propagation in the medium .the wave propagation in waveguides with perturbed pressure release boundary is simulated by solving the wave equation as a first order velocity - pressure system with the code montjoie ( http://montjoie.gforge.inria.fr/ ) . in the simulations we used order finite elements in space and order finite differences in time , with spatial mesh size and time discretization step .w. kohler and g. papanicolaou , wave propagation in randomly inhomogeneous ocean , in lecture notes in physics , vol .70 , j. b. keller and j. s. papadakis , eds . ,wave propagation and underwater acoustics , springer verlag , berlin , 1977 .
We present a quantitative study of coherent array imaging of remote sources in randomly perturbed waveguides with bounded cross-section. We study how long-range cumulative scattering by perturbations of the boundary and of the medium impedes the imaging process. We show that boundary scattering effects can be mitigated with filters that enhance the coherent part of the data. The filters are obtained by optimizing a measure of the quality of the image. The point is that there is an optimal trade-off between the robustness and the resolution of images in such waveguides, which can be found adaptively as the data are processed to form the image. Long-range scattering by perturbations of the medium is harder to mitigate than scattering by randomly perturbed boundaries: coherent imaging methods do not work, and more complex incoherent methods, based on transport models of energy, should be used instead. Such methods are neither useful nor needed in waveguides with perturbed boundaries. We explain all these facts using a rigorous asymptotic stochastic analysis of the wave field in randomly perturbed waveguides. We also analyze the adaptive coherent imaging method and obtain quantitative agreement with the results of numerical simulations.
during the last years some biological features of real neurons have been incorporated into the hopfield model in order to make it more realistic and trying to improve its performance .suitable modifications of the original model taking into account biological ingredients such as thermal noise , dilution , asymmetry , dynamical delays , among others , have been vastly analized in the literature .although they usually deteriorate the retrieval ability , it has been shown they enable the implementation of new tasks , such as recognition of temporal sequences and categorization . one crucial biological element originally absence in the hopfield model is the so called _ refractory period _ .real neurons take about 1 - 2 milliseconds to complete a cycle from the emission of a spike in the pre - sinaptic neuron to the emission of a spike in the post - sinaptic neuron . after this time , the neuron need again about 2 milliseconds to recover , and during this time , called _ absolute refractory period _ ( arp ) , it is insensitive to afferent activiy ( i.e.,it can not emit a second spike , no matter how large the post - synaptic potential ( psp ) may be ) . following this short arp ,the neuron enters in a new regime of about 5 - 7 milliseconds , in which it partially recovers the capacity of emitting spikes , but now with a greater excitation threshold which decreases with time .this is called the _ relative refractory period _ ( rrp ) . following this somewhat longer rrp, the threshold tends to return to its rest value and the neuron can fire again with typical intra - network potentials .the simplest way one can introduce these periods into the dynamics of the hopfield model is by means of a time dependent threshold acting as an external field , which depends on the recent history of the neuron .since we want this threshold to mimic the effects of fatigue observed in real neurons , it should act only after the cell has emitted an electric signal .so , we expect that the threshold depends on the mean activity of the neuron in the previous time .the main effect on the dynamics of the model is to introduce a tendency to destabilize the fixed point attractors , allowing the appearance of oscillatory behaviors . in the last yearsdifferent threshold functions have been studied , showing that they enable the system to wander through the phase space , eventually visiting different basins of attraction and simulating the process by which the brain recognizes temporal sequences of patterns . on the other hand ,oscillating and chaotic trajectories in the phase space seem to be more realistic than fixed points attractors from a biological point of view ( see and references therein ) . in this workwe analyze , using a mean field approach and through numerical simulations , the behavior of the hopfield model for associative memory when the effect of these refractory periods are taken into account in the dynamics of the system . instead of considering a fatigue like threshold function that would depend on the large term history of the neuron , we introduce a threshold that depends only on the state of the neuron in the previous time , i.e. , it is activated only when the neuron fires a spike . in the section [ model ] , we introduce the model and describe how the refractory periods are incorporated into its dynamics . 
in section [ fixed ] , we obtain an equation for the value of the superposition between the state of the system and one of the memories ( which is only valid for fixed points dynamics ) , from which we can study the retrieval properties of the model in this region . in section[ numerical ] , we obtain a complete phase diagram and identified the regions of fixed points , cyclic orbits and chaotic orbits .we have used a synchronous parallel updating , which allows an efficient use of modern parallel - processing computers .finally , in section [ conclution ] , we discuss the main results .as in the little and hopfield models we consider a network of binary neurons , each one modeled by an ising variable which take the values , representing the passive and active states , respectively . in order to take into account the effect of the refractory period in the neuron add a threshold that depends on the time , but only through the value of the state of the neuron .so the post - synaptic potential at time is given by : where is the usual hopfield post - synaptic potential : here is the hopfield synaptic matrix connecting the pre and post - synaptic neurons and and whose elements have the form : the are random independent variables which take the values with the same probability and the n - bits words stand for the stored configurations ( ) .the dynamics of the network is governed by a monte carlo heat bath dynamics : where all the neurons are updated simultaneously ( like in the little model ) .the parameter measures the noise level of the net and in the noiseless limit ( ) we recover the deterministic dynamics : from this expression we can easily understand the effect of this extra field : if the neuron fires a spike at time ( ) , it will requires an extra contribution to the psp in order to fire again . on the other hand , if this neuron was at rest at time ( ) , then it will work like an usual hopfield neuron .observe that this model does not distinguish between absolute and relative periods neither includes any fatigue like effect ( long time history ) . as usual , we will characterize the recognition ability by calculating the long time behavior of the overlap between the state of the system and the stored patterns , defined as : where means a thermal average at temperature .we say that the system recognizes a pattern every time it evolves to an attractor for which only one overlap is non - zero and all the others vanish as ( ) .the two relevant parameters in our model are then and ( the ratio between the number of stored patterns ( ) and the total number of neurons of the network ( ) ) . in the following sections we analyze the behavior of the model on the ( , ) plane .following the statistical method developed by geszti ( see also ) , we give in this section a heuristic derivation of the critical capacity as a function of the parameter for the stochastic version of the model . 
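Before turning to the mean-field derivation, it may help to see the deterministic parallel dynamics of the model section in code. The sketch below is our own minimal reading of the model: Hebbian couplings, synchronous zero-temperature updates, and a refractory threshold gamma subtracted from the post-synaptic potential only for neurons that were active at the previous step (the precise form of the threshold term is not reproduced in the text, so this coupling is an assumption consistent with the verbal description). The routine also classifies the attractor reached from a stored pattern, in the spirit of the simulations of the next sections.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebb_matrix(xi):
    """Hopfield couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, zero diagonal."""
    p, n = xi.shape
    J = xi.T @ xi / n
    np.fill_diagonal(J, 0.0)
    return J

def step(S, J, gamma):
    """One synchronous zero-temperature update; the refractory threshold gamma
    acts only on neurons that fired (+1) at the previous time step."""
    h = J @ S - gamma * (S == 1)
    return np.where(h >= 0, 1, -1)

def run(S0, J, gamma, xi, max_steps=100):
    """Iterate until a configuration repeats; return the attractor period and the
    time-averaged overlap with the first stored pattern over one period.  No
    recurrence within max_steps corresponds to the 'chaotic' label used later."""
    seen, traj, S = {S0.tobytes(): 0}, [S0], S0
    for t in range(1, max_steps + 1):
        S = step(S, J, gamma)
        key = S.tobytes()
        if key in seen:
            cycle = np.array(traj[seen[key]:])
            return t - seen[key], float(np.mean(cycle @ xi[0]) / len(S0))
        seen[key] = t
        traj.append(S)
    return None, None

n, p, gamma = 500, 25, 0.3                 # alpha = p / n = 0.05
xi = rng.choice([-1, 1], size=(p, n))
J = hebb_matrix(xi)
print(run(xi[0].copy(), J, gamma, xi))     # (period, time-averaged overlap m_1)
```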
by taking the limit obtain a noiseless phase diagram in the ( , ) plane which will be compared with numerical simulations in the next section .let us suppose that the initial state of the system is such that is the only macroscopically non - zero overlap and so for any .furthermore , we will assume that although the threshold tends to destabilize the fixed point attractors , its effect is not strong enough to anable the system to visit different basins of attractors .so , since initially only the first overlap was non zero , let us suppose that this will be valid for any time .this a priori assumption will be justified in the next section by the numerical simulation , where we will find that in the region where the system recognizes ( that is , where ) the dynamics of the model is dominated by fixed point attractors .we then start considering the overlap between the state of the system and the first pattern , that can be rewritten as : since we are storing an extensive number of pattern , we can not neglect any more the effect of the others overlaps : in order to make an self - consistent treatment for the overlap we need to introduce two other parameters , namely : where is the edwards - anderson order parameter and is indentified as the mean square overlap of the system configuration with the nonretrieved patterns .after some standard calculations we get the following set of equation for the values of , and _ in the attractor _ : where and notice that for the particular case we recover the equations obtained for the hopfield model which also agree with those obtained by amit et al through a thermodynamical mean - field study ( which unlike this method requires the use of the replica trick ) .we start analyzing the noiseless case for which we have performed numerical simulations . in this limitour equations take the following form : in fig .1 we display as function of for several values of . for any value of therealways exists a critical value below which the system recovers the stored patterns with a non - zero fraction of errors . at systems undergoes a discontinuos transition from the retrieval phase ( in which the dynamics is governed by the fixed point attractors ) to a non - retrieval phase where our analytical approach is no longer valid , since the self - consistent equation does not predict a fixed point attractor ( which was our original assumption ) .observe that decreases as increases .as the fraction of errors at the transition goes to accordingly to the following expression : we have also analyzed the fixed point equations in the presence of noise . in fig .2 we present the phase diagram for different values of .for we recover the phase diagram obtained in . along the lines the system undergoes a discontinuos transition from the retrieval phase ( below ) to the non - retrieval phase ( above ) .notice that the recognition phase decreases as increases ; i.e. , the main effect of introducing this refractory periods seems to be a degradation of the retrieval properties of the model . in fig .3 we present the critical line versus for . for system undergoes a second order transition while for the transition is discontinuos ( the point separates both lines ) . 
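Since the zero-temperature self-consistent equations did not survive here in explicit form, we sketch how the gamma = 0 limit can be solved numerically. In that limit the text states that the standard Hopfield (Amit-Gutfreund-Sompolinsky) equations are recovered; the form iterated below, m = erf(m / sqrt(2 alpha r)), C = sqrt(2 / (pi alpha r)) exp(-m^2 / (2 alpha r)), r = (1 - C)^{-2}, is the standard one from that literature rather than a quotation from this paper, and the gamma-dependent equations can be iterated in exactly the same damped fashion once written out.

```python
import numpy as np
from scipy.special import erf

def retrieval_overlap_T0(alpha, iters=5000, damping=0.5):
    """Damped fixed-point iteration of the gamma = 0 (standard Hopfield, T = 0)
    self-consistent equations; returns the retrieval overlap m."""
    m, r = 1.0, 1.0
    for _ in range(iters):
        y = m / np.sqrt(2.0 * alpha * r)
        C = np.sqrt(2.0 / (np.pi * alpha * r)) * np.exp(-y * y)
        m = (1.0 - damping) * m + damping * erf(y)
        r = (1.0 - damping) * r + damping / (1.0 - C) ** 2
    return m

for alpha in (0.05, 0.10, 0.13):
    print(alpha, round(retrieval_overlap_T0(alpha), 3))
# the retrieval solution disappears near alpha_c ~ 0.138, as in Amit et al.
```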
in the insetwe show the behavior the retrieval overlap around the critical point as function of .in this section we present a numerical study of both recognition ability and dynamical properties of the model at and compare it with the analitical results obtained in the previous section .the simulations were performed on systems of , and neurons and the network was updated synchronously . setting the initial configuration as the first stored pattern , we let the system evolve until it reaches the attractor . in order to characterize the dynamical behavior we first determined whether the system was in a periodic orbit or not , by waiting until it returned to a given configuration that was stored after a transient .depending on the value of the parameters and on the size of the system it could also happen that the system did not return to the initial configuration after a given period of time ( typically 100 monte carlo steps ) . in such cases , we said that the system follows a chaotic orbit , although we have not performed a through analysis in order to determine whether these were really chaotic orbits or orbits with large periods . to analyze the recognition ability we calculated for each sample a temporal average between the stored patterns and the state of the system in the attractor .if the system reached a cyclic orbit of period , we measured ( in the attractor ) the following quantity : since the initial state was chosen to be always the first memory , we say that the network recognizes when in order to make a configurational average of , for any value of the parameters we repeated this procedure over different samples using different memories , initial configurations and random number sequences . to characterize the dynamical behavior we present the frequency with which each kind of attractor appears and also the mean activity , defined as the average number of active neurons , in the attractor . in fig .4 we display the phase diagram vs. for . for system presents only fixed points ( fp ) . for fixed , as increases we found that : 1 . for low values of the dynamicsis governed only by fixed points attractors .the full circle indicates where this kind of behavior disappears ; 2 .the region between the two full triangles indicate the region where cycles of order two ( c2 ) appear ; 3 .the hollow circle indicates the value of above which chaotic orbit ( ch ) emerges .observe that there are many region of coexistence of attractors .in fact , between the c2 and the ch we have also found cyclic orbits ( oc ) of order greater than two , but they are not indicated in the diagram . independently of the dynamical behavior , we have also studied the critical recognition capacity .the dashed line separates the recognition phase ( below ) from the non - retrieval phase ( above ) obtained numerically and the full line corresponds to the analytical results obtained in the previous section .the simulation curve fits very well the analytical result only for small values of . in order to understand why the analytical and the numerical curves do not agree ,we have carefully analyzed the behavior of the system along two cuts with fixed , namely and . in fig .5 we plot both ( top ) and the frequency with which each kind of orbits appears ( bottom ) as a function of .the first thing we note is that the fp region coincides with the retrieval phase , and that the c2 region corresponds to the non - retrieval phase . 
in such cases , where the systems only recognizes through fp ,the analytical curve predicts very well the transition . on the other hand , in fig .6 we present the same curves for .notice now that the recognition phase presents two different dynamical behaviors : for small values of the system evolves to fp while for intermediate values it goes to c2 . unlike the case , now the theoretical curve does not predict correctly the retrieval non - retrieval phase transition , but the fp to c2 transition . in both caseswe have studied the finite - size effects by working with three different sizes , namely , , and . in figs . 5 and 6 we present the overlaps as function of for all these system sizes .note that as increases the numerical simulation tends to display a more abrupt decay of at the transition , resembling the first order transition found in the analytical calculation . finally , in fig .7 we show the mean activity as function of for and . we can notice that where there are fixed points and periodic orbits with recognition within , the mean activity remains around the value ( random variables ) and it only decreases in the transition to non - retrieval phase .this shows that the parameter not only damages the recognition ability but also destabilizes the tendency of the system to evolve to fixed point attractors , allowing the appearence of more complicated retrieval attractors .in this work we study analytically and through of numerical simulations a model for associative memory where we have incorporated in the dynamics of the network a new kind of threshold that simulate the effect of the refractory period .the main result is that the parameter that activates this threshold yields to the appearing of chaotic and periodic attractors . nevertheless , the system seems to recognizes only through fixed point and cycles of order two . only in a small regionthe system recognizes with higher order cycles and with chaotic trajectories , but this behavior appears just in the boundary between the retrieval and the non - retrieval phases. it would be interesting to make a more detailed study to elucidate whether this kind of trajectories are due to finite size effects or not .as much as we could see , as increases they do not seem to dissapear , so we suspect that they will exist also in the thermodynamical limit . in the recognition phase ( small values of ) , the psp is strong enough to drive the system to stable attractors , fp and periodic orbits , where the average overlap in each regime is of the order . for large values of the performance is drastically damaged , and in these regions the dynamics is dominated by very large cycles or chaotic trajectories .the numerical simulation fits very well the analytical results only for small values of , where the transition occur from fixed point fp to cycle order two c2 .actually , the analytical curve seems to fit only the line where the fixed point behavior disappears .we also observe that in the transition the mean activity decreases with the increase of .3 we acknowledge to d. a. stariolo and f. s. de menezes for fruitful discussion .we thank the supercomputing center of the universidade federal do rio grande do sul ( cesup - ufrgs ) for use of the cray ymp-2e .this work was supported by brazilian agencies cnpq and finep .plot of versus at for different values of . at system undergoes a discontinuous transition from the recognition phase to non - retrieval phase . 
* figure 2. * phase diagram versus for and . below the critical lines the system recognizes with fixed points and the transition to the non-retrieval phase is discontinuous.
* figure 3. * the critical line for . for the transition is of second order (full line) while for the transition is discontinuous (dashed line). is a critical point.
* figure 4. * the numerical phase diagram versus at and , showing the regions fp (below the full circles), periodic (between the two full triangles) and ch (above the hollow circles). the simulation (dashed line) and analytical (full line) curves separate the recognition phase (below) from the non-retrieval phase (above).
* figure 5. * plot of (top) and of the frequency (bottom) with which each kind of orbit appears as a function of for . the full line corresponds to the analytical curve.
* figure 6. * plot of (top) and of the frequency (bottom) with which each kind of orbit appears as a function of for . the full line corresponds to the analytical curve.
* figure 7. * the mean activity vs. for and .
j. hopfield, _ proc. usa _ * 79 *, 91 (1982).
d. j. amit, h. gutfreund and h. sompolinsky, _ phys. rev. a _ * 32 *, 1007 (1985).
d. j. amit, h. gutfreund and h. sompolinsky, _ phys. rev. _ * 55 *, 1530 (1985).
d. amit, j. gutfreund and h. sompolinsky, _ ann. phys. _ * 173 *, 30 (1987).
b. derrida, e. gardner and a. zippelius, _ europhys. lett. _ * 4 *, 167 (1987).
t. l. h. watkin and d. sherrington, _ j. phys. a: math. _ * 24 *, 5427 (1991).
h. gutfreund and m. mezard, _ phys. _ * 61 *, 235 (1988).
h. sompolinsky and i. kanter, _ phys. _ * 57 *, 2861 (1986).
j. buhmann and k. schulten, _ europhys. lett. _ * 4 *, 1205 (1987).
david kleinfeld, _ proc. usa _ * 83 *, 9469 (1986).
d. horn and m. usher, _ phys. rev. a _ * 40 *, 1036 (1989); d. horn, _ physica a _ * 200 *, 594 (1993).
f. zertuche, r. lópez-peña and h. waelbroeck, _ j. phys. a: math. _ * 27 *, 5879 (1994).
j. f. fontanari, _ j. physique i _ * 51 *, 2412 (1990).
d. a. stariolo and t. a. tamarit, _ phys. _ * 43 *, 5249 (1992).
c. r. da silva, f. a. tamarit, n. lemke, j. j. arenzon and e. m. f. curado, _ j. phys. a: math. _ * 28 *, 1593 (1995).
d. j. amit, _ modeling brain function _ (cambridge university press, cambridge, 1989).
j. buhmann and k. schulten, _ biol. cybern. _ * 54 *, 319 (1986).
k. aihara, t. takabe and m. toyoda, _ phys. lett. a _ * 144 *, 333 (1990).
j. moreira and d. auto, _ europhys. _ * 21 *, 693 (1993).
i. opris, _ phys. e _ * 51 *, 2619 (1995).
k. e. kürten, _ phys. lett. a _ * 129 *, 157 (1988).
w. a. little, _ math. _ * 19 *, 101 (1974).
t. geszti, _ physical models of neural networks _ (singapore: world scientific, 1990).
p. peretto, _ j. phys. france _ * 49 *, 711 (1998).
j. hertz, a. krogh and r. g. palmer, _ introduction to the theory of neural computation _ (reading, ma: addison-wesley, 1991).
f. zertuche, r. lópez and h. waelbroeck, _ j. phys. a: math. gen. _ * 27 *, 1575 (1994).
c. m. marcus, f. r. waugh and r. m. westervelt, _ phys. a _ * 41 *, 3355 (1990).
we study both analytically and numerically the effects of including refractory periods in the hopfield model for associative memory. these periods are introduced in the dynamics of the network as thresholds that depend on the state of the neuron at the previous time. both the retrieval properties and the dynamical behaviour are analyzed, and we find that, depending on the value of the thresholds and on the ratio between the number of stored memories ( ) and the total number of neurons ( ), the system presents not only fixed points but also chaotic or cyclic orbits. keywords: neural networks, refractory periods, chaotic orbits. pacs: 87.10+e, 64.60c, 75.10hk
entanglement is a quantum phenomenon in which states ( represented by density operators ) of a composite system composed of several quantum subsystems can not be written as a convex combination of tensor products of the states of the subsystems .such entangled states have , in recent decades , been of much interest as a resource for quantum information applications , such as for quantum communication . in particular ,einstein - podolski - rosen ( epr)-like entanglement , generated in the continuous variables such as the amplitude and phase quadratures of a gaussian optical field , has evoked considerable interest over discrete - variable entanglement , such as entanglement in finite - level systems like qubits , because epr entangled pairs can be prepared easily and rapidly in quantum optics . in this paper , we are interested in epr entanglement between two propagating continuous - mode gaussian fields .such a kind of entanglement is more accessible compared to epr entanglement between a pair of single - mode fields produced in , say , inside an optical cavity .epr entanglement between continuous - mode gaussian fields can be realized by two - mode squeezed states produced as the output of a nondegenerate optical parametric amplifier ( nopa ) . by pumping a strong coherent beam ( which can be regarded as an undepleted classical light ) to a crystal inside the cavity of the nopa , two vacuum modes of the cavityinteract with the pump beam , and photons escaping the cavity through its partially transmissive mirrors generate two output beams that are squeezed in amplitude and phase quadratures .if the two outgoing fields are squeezed below the quantum shot - noise limit , they are considered as epr entangled beams .the input / output block representation of a nopa ( ) is shown as fig .[ fig : single - nopa ] .the nopa has four ingoing fields and four outgoing fields . among the inputs , and are amplification losses , caused by unwanted vacuum modes coupled into the cavity . as the two outputs corresponding to the loss fields and are not of interest in this work, they are not shown in the figure .note fig .[ fig : single - nopa ] only presents the ingoing and outgoing noises of interest , and does not show the pump beam . in a previous work ,we have proposed a novel dual - nopa coherent feedback system to produce epr entangled propagating gaussian fields , as shown in fig .[ fig : dual - nopa - cfb ] .it was shown that this scheme can produce better epr entanglement between the propagating gaussian fields and ( in the sense of producing more two - mode squeezing between quadratures of the fields ) for the same amount of total pump power used in the two nopas , and displays more tolerance to transmission losses in the system , as compared to a conventional single nopa and a cascaded two - nopa system . in a subsequent work , we presented a linear quantum system consisting of two nopas connected to a static passive linear network , realizable by a network of beam splitters , mirrors and phase shifters , that are connected in a more general coherent feedback configuration , see fig .[ fig : system_paper2 ] . here, the system is ideally lossless , that is , there are no transmission and amplification losses influencing the system .hence , each nopa is simplified to have only two ingoing fields , without amplification losses , as shown in fig .[ fig : system_paper2 ] . the transformation implemented by the passive network in this configurationis represented by a complex unitary matrix . 
by employing a modified steepest descent algorithm , with the matrix corresponding to the dual - nopa coherent feedback network shown in fig .[ fig : dual - nopa - cfb ] as a starting point , we optimized the epr entanglement at frequency , with respect of the transformation matrix of the passive network . in this paper , we employ the steepest descent method to optimize a coherent feedback system shown in fig . [fig : system ] .the system contains two nopas and a static passive linear network described by a complex unitary matrix .this system is a more restricted class of configuration than the one as shown in fig .[ fig : system_paper2 ] ; it can be seen that the configuration in fig .[ fig : system ] is a special case of the configuration in fig .[ fig : system_paper2 ] . moreover , different from our previous work in , in which the system shown in fig .[ fig : system_paper2 ] is considered lossless , here we take the effect of transmission losses along channels and amplification losses of nopas into account .however , we neglect time delays in transmission . the effect of delays on epr entanglement generated from related systems can be found in our previous works .in addition , unlike the work in , the system is considered ideally static , that is , we consider the limit where the nopas are approximated as static devices with an infinite bandwidth .the merits of studying this infinite bandwidth limit are twofold : ( i ) it allows a simplified analysis of the system , and ( ii ) calculations in the infinite bandwidth setting gives a very good approximation to the epr entanglement in the low frequency region , discussed further in section [ sec : nopa ] . in this infinite bandwidth setting , we show explicitly that the choice of the scattering matrix in the scheme of is in a certain sense locally optimal with respect to all possible choices of scattering matrices in the coherent feedback configuration of fig .[ fig : system ] , under certain values of the effective amplitude of the pump laser driving the nopa .note that there may exist another scattering matrix as a local minimizer that yields better epr entanglement than the network shown in fig .[ fig : dual - nopa - cfb ] .searching for such a scattering matrix can be a topic for future research .the structure of the rest of this paper is as follows .we begin in section [ sec : prelim ] by giving a brief review of linear quantum systems , epr entanglement between two continuous - mode fields , and linear transformations implemented by a nopa in the infinite bandwidth limit .section [ sec : system - model ] describes the system of interest . in section [ sec : optimization ] , we discuss the optimization of the system .finally , we draw a short conclusion in section [ sec : conclusion ] .the notations used in this paper are as follows : and denotes the real part of a complex quantity .the conjugate of a matrix is denoted by , denotes the transpose of a matrix of numbers or operators and denotes ( i ) the complex conjugate of a number , ( ii ) the conjugate transpose of a matrix , as well as ( iii ) the adjoint of an operator . 
is an by zero matrix ( if then we simply write ) , and is an by identity matrix .trace operator is denoted by } ] , -incoming boson fields in the vacuum state , which obey the commutation relations =\delta(t - s) ] , ^t ] , ] and denote the corresponding two - mode squeezing spectra between and as .fields and are epr entangled at the frequency rad / s if ] , =\delta_{ij} ] and =0 ] .in this section , we aim to optimize the epr entanglement generated in by the dual - nopa coherent feedback system of fig . [ fig : system ] , in the infinite bandwidth limit , by finding a complex unitary matrix at which the two - mode squeezing spectra of the two outgoing fields are locally minimized , with respect of . since the system is infinite bandwidth , for all , thus we shall denote and simply as and , with no dependence on .based on ( [ eq : v_+ ] ) , ( [ eq : v_- ] ) and ( [ eq : entanglement - criterion ] ) , the sum of the two - mode squeezing spectra is , \nonumber \\ & = & \operatorname{tr}\left[h(s)^*m_{1,2 } h(s ) \right ] \label{eq : entanglement}\end{aligned}\ ] ] where .\end{aligned}\ ] ] as is a function of or , we define as the value of for a fixed value of , and as the value of for a fixed value of .we aim to find a complex unitary matrix as a local minimizer of the cost function .the optimization problem with a unitary constraint can be solved by the method of modified steepest descent on a stiefel manifold introduced in , which employs the first - order derivative of the cost function .the stiefel manifold in our problem is the set . since for any square matrix such that is invertible , we expand as , where denotes terms that are products containing at least three . , and are real matrices , where .\end{aligned}\ ] ] following ( [ eq : entanglement ] ) and based on the facts that a matrix and its transpose have the same trace , we have \nonumber\\ & = & v(s ) + \operatorname{tr}[h(\delta s)^*m_{1,2 } h(s ) + h(s)^*m_{1,2 } h(\delta s ) + h(\delta s^2)^*m_{1,2 } h(s ) \nonumber\\ & & \quad + h(s)^*m_{1,2 } h(\delta s^2)+ h(\delta s)^*m_{1,2 } h(\delta s)]+ o(\lvert\delta s\rvert^3 ) \nonumber\\ & = & v(s ) + 2\operatorname{tr}[m h(\delta s)]+ 2\operatorname{tr}[m h(\delta s^2)]\nonumber\\ & & \quad + \operatorname{tr}[h(\delta s)^*m_{1,2 } h(\delta s ) ] + o(\lvert\delta s\rvert^3),\end{aligned}\ ] ] where and denotes that the function satisfies for some positive constant for all sufficiently small .furthermore , based on ( [ eq : relations - real - complex - matrix ] ) , we obtain that + \frac{1}{2}\left[\begin{array}{c } \operatorname{vec}(\delta\tilde{s})\\ \operatorname{vec}(\delta\tilde{s}^\ # ) \end{array } \right]^ * x \left[\begin{array}{c } \operatorname{vec}(\delta\tilde{s})\\ \operatorname{vec}(\delta\tilde{s}^\ # ) \end{array } \right]+o(\lvert\delta\tilde s\rvert^3 ) , \label{eq : expansion_v}\end{aligned}\ ] ] where ^ * h \left[\begin{array}{cc}(\tilde{k}^\ # \otimes \tilde{k } ) & ( \tilde{k}^\ # \otimes \tilde{k})^\ # \end{array}\right],\nonumber\\ h & = & 4\alpha^2 l^t(qmp)^t \otimes ( ( i_2 \otimes \tilde{h}_2)p ) + 2(qq^t ) \otimes ( p^tm_{1,2}p ) , \label{eq : dsx}\end{aligned}\ ] ] .\nonumber\end{aligned}\ ] ] is the directional derivative of at in the direction .[ th : critical_point ] the matrix corresponding to the dual - nopa coherent feedback system given by ( [ eq : scfb ] ) is a critical point of the function .according to , we have a one - to - one corresponding cost function on the tangent space to the stiefel manifold at the point , with a vector on this 
tangent space , defined by , where is the projection operator onto the manifold . the descent direction in the tangent space at based on ( [ eq : dsx ] ) , when , becomes , \label{eq : dscfb } \end{aligned}\ ] ] where is a real coefficient thus , the descent direction is thus , the gradient of the function at along the tangent space at is ( see [eq .( 27 ) ] ) , which establishes that is a critical point .now we check the hessian matrix of the function . based on proposition in and ( [ eq : expansion_v ] ) ,we have following the second order expansion along any direction on the tangent space at , +\frac{1}{2}\left[\begin{array}{c } \operatorname{vec}(\delta\tilde{s})\\ \operatorname{vec}(\delta\tilde{s}^\ # ) \end{array } \right]^ * { \rm hess}({\tilde s})\left[\begin{array}{c } \operatorname{vec}(\delta\tilde{s})\\ \operatorname{vec}(\delta\tilde{s}^\ # ) \end{array } \right]\nonumber\\ & & + o(\lvert\delta\tilde s\rvert^3 ) , \label{eq : expansion_v2}\end{aligned}\ ] ] where \label{eq : hess}\end{aligned}\ ] ] denotes the hessian matrix of .firstly , we consider the system in an ideal case , where there are no losses ( and ) .as reported in , in this lossless scenario the range of over which the dual - nopa coherent feedback system is stable in the finite bandwidth case is , independently of the actual bandwidth of the nopas .thus , it is natural to also take this as the range of admissible values for in the infinite bandwidth limit of this paper . by checking eigenvalues of the hessian matrix, we have the following theorem .[ th : ideal ] in the absence of transmission and amplification losses , is a local minimizer of the function when .let and . with the help of mathematica , the eigenvalues of at can be found to be as , and have positive values , while when , that is , .therefore , for , the hessian matrix is positive definite , which establishes that is a local minimizer for these values of .table [ tb : transmission ] and table [ tb : amplification ] illustrate the effect of transmission and amplification losses on the range of over which is a local minimizer .we see that as either transmission losses or amplification losses increase , the range of values of over which the dual - nopa coherent feedback network is optimal become wider ..influence of transmission losses on the range of over which is a local minimizer with and [ cols="^,^,^",options="header " , ]this paper has studied the optimization of epr entanglement of a static linear quantum system that is composed of a static linear passive optical network in a certain coherent feedback configuration with two nopas in the infinite bandwidth limit .we reformulate the optimization of the epr entanglement to the problem of finding a complex unitary matrix at which a cost function is locally minimized , with respect of . by employing the modified steepest descent on stiefel manifold method, we have found the unitary matrix corresponding to the coherent feedback system shown as fig . [fig : dual - nopa - cfb ] as a critical point of . when losses are neglected , the coherent feedback system is a local minimizer when . 
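to illustrate the kind of update underlying the modified steepest descent on the stiefel manifold used above, the sketch below performs one projected-gradient step on the group of unitary matrices: the euclidean gradient of the cost is projected onto the tangent space at the current point and the update is mapped back onto the manifold by a polar retraction. the cost gradient is a placeholder (the actual two-mode squeezing cost of the dual-nopa network is not reproduced here), and the polar retraction is one common choice, not necessarily the one used in the cited method.

```python
import numpy as np

def project_to_tangent(S, G):
    """Project a Euclidean gradient G onto the tangent space of the unitary
    group at S; tangent vectors have the form S @ A with A anti-Hermitian."""
    A = S.conj().T @ G
    A = 0.5 * (A - A.conj().T)          # anti-Hermitian part
    return S @ A

def retract(X):
    """Map an arbitrary square matrix back onto the unitary manifold
    (polar retraction: closest unitary in Frobenius norm)."""
    U, _, Vh = np.linalg.svd(X)
    return U @ Vh

def descent_step(S, grad_cost, step=1e-2):
    """One steepest-descent step with retraction; grad_cost(S) is a placeholder
    for the Euclidean gradient of the cost with respect to S."""
    xi = project_to_tangent(S, grad_cost(S))   # Riemannian (projected) gradient
    return retract(S - step * xi)
```

iterating such steps from the scattering matrix of the dual-nopa coherent feedback scheme would remain at that point when the projected gradient vanishes, consistent with it being a critical point as established above.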
when transmission and amplification losses increase, the range of values of over which the coherent feedback system is a local minimizer of is enlarged. in addition, one may wonder whether there exist other local minimizers at which the system generates better epr entanglement. hence, future work can consider further developing the static passive optical network to search for another local optimizer that may yield better epr entanglement than the system studied in and shown in fig. [ fig : dual - nopa - cfb ].
z. shi and h. i. nurdin, optimization of distributed epr entanglement generated between two gaussian fields by the modified steepest descent method, in proceedings of the 2015 american control conference (chicago, us, july 1-3, 2015). [online] available: http://arxiv.org/abs/1502.01070
j. laurat, g. keller, j. a. oliveira-huguenin, c. fabre, t. coudreau, a. serafini, g. adesso and f. illuminati, entanglement of two-mode gaussian states: characterization and experimental production and manipulation, j. opt. b: quantum semiclass. opt. 7, s577-s587 (2005)
the purpose of this paper is to prove a local optimality property of a recently proposed coherent feedback configuration for distributed generation of epr entanglement using two nondegenerate optical parametric amplifiers ( nopas ) in the idealized infinite bandwidth limit . this local optimality is with respect to a class of similar coherent feedback configurations but employing different unitary scattering matrices , representing different scattering of propagating signals within the network . the infinite bandwidth limit is considered as it significantly simplifies the analysis , allowing local optimality criteria to be explicitly verified . nonetheless , this limit is relevant for the finite bandwidth scenario as it provides an accurate approximation to the epr entanglement in the low frequency region where epr entanglement exists .
with the development of multi-channel detectors and the recording of huge amounts of experimental data, the past decade has witnessed a boom in the use of color images for the representation of spectroscopic data in a very compact and easily visualized way. typically, a color scale is associated with the experimental spectral intensity, which is displayed as a function of two independent variables. for example, such images are widely used in scanning tunneling microscopy (stm) [ ], raman scattering [ ], inelastic neutron scattering (ins) [ ], atomic force microscopy (afm) [ ], resonant inelastic x-ray scattering (rixs) [ ] and angle-resolved photoemission spectroscopy (arpes) [ ]. this imaging process is particularly efficient for representing energy band dispersions in the momentum or momentum-transfer spaces, where the energy and the momentum (or momentum-transfer) are the two independent variables. frequently though, many bands or features overlap or have significant broadness, making direct visualization of the raw data difficult. the main tool commonly used in arpes analysis to overcome this issue and improve direct visualization of band dispersion is the second derivative of intensity plots [ ]. despite its success and widespread use, the second derivative method sometimes gives results that differ slightly from the actual position of the maxima in the energy distribution curves (edcs), where the photoemission intensity at fixed momentum is represented as a function of energy, or in the momentum distribution curves (mdcs), where the photoemission intensity at fixed energy is given as a function of momentum. alternatives must thus be found to improve both accuracy and visualization of the data. in this paper, we develop an analysis method for studying spectroscopic data based on the mathematical concept of curvature in one dimension (1d) and two dimensions (2d). as an example, we apply this method to the study of electronic energy dispersions from arpes data. we show two major advantages of the curvature method over the second derivative method: (i) the curvature method is more reliable in tracking the position of extrema and (ii) the curvature method can increase the sharpness of the dispersive features for a better visualization effect. we prove the efficiency of this method using both experimental and simulated data. the concept of curvature is used to quantitatively determine _ how much a curve is not straight _. it locally associates a radius of curvature, which can be either positive or negative, to a small segment along a curve. the mathematical definition in 1d of the curvature associated with a function is given by: for application to spectroscopic data, for example to an edc curve, may represent the signal intensity whereas represents a unitless variable such as a normalized energy. the normalization of a variable that carries units is done through a transformation such as , where is a positive arbitrary constant with the same dimension as . since experimental spectroscopic functions themselves are usually defined to within an arbitrary factor, carries the same information as , where is an arbitrary positive constant. taking into account the arbitrariness in the absolute values of and , we can rewrite equation ( [ eq_1d ] ) as: since we are interested uniquely in the relative variations of the curvature, this equation can be reduced further to: where is a free parameter.
in order to understand the meaning of , we test the previous equation in two limit cases : \(1 ) when , _ i.e_. when can be ignored , we get which gives the same result as the second derivative method .\(2 ) when , this latter solution diverges at the extrema , where = 0 .as approaches 0 , the peak positions in are getting closer and closer to the real peak positions . in the worst case ,when , the curvature should provide a result as good as the one given by the second derivative .therefore , the curvature is necessarily an improvement over the second derivative method in tracking the peak positions . in practice ,we avoid singularities while maintaining the reliability of by choosing an intermediate .empirically , we find out that the best compromise is reached when is of the order of the average or the maximum value of .hereafter , we express as , where is a positive constant and is the maximum value of . to illustrate the reliability of the curvature analysis , we simulate arpes data using known parameters .the arpes photoemission intensity can be expressed by the product of three terms : the fermi - dirac distribution , the spectral weight that contains all the information about the dispersion , and a matrix element factor that depends on momentum , as well as on the energy and polarization of the probing photons .since the latter term does not carry any information about the dispersion , we set it to 1 .the spectral weight can be expressed in terms of the energy dispersion as : where is the self - energy of the quasi - particles .the self - energy is known to depend only weakly on momentum and its imaginary part usually varies like at low energy .thus , we set the self - energy to : which satisfies the kramers - kronig transformation . setting and ev , we plot simulated arpes data in fig .[ fig:1dcurv](a ) for the dispersion ev at a temperature ( ) of 20 k. the result has been further convoluted by a gaussian function along the energy direction to simulate an energy resolution of 10 mev . in fig .[ fig:1dcurv](b ) , we compare the mdc along the red line in panel ( a ) to curvature curves of that same mdc using different values of . for a better comparison ,the sign of the curvature curves has been reversed and the maxima of all curves have been normalized to 1 . as expected for an asymmetrical lineshape ,the position of the curvature peak is slightly away from the real peak position when is large but converges to that latter position with decreasing .moreover , the peak sharpens rapidly as decreases .although this is obviously an advantage in tracking its position , we note that it is necessary to refrain decreasing too much while studying multi - feature systems since the sharpening of the peaks is accompanied by an increase of intensity in the curvature , which may affect the global contrast between all the features represented on a single image .we also note that since we are trying to find peak positions ( maxima or inflections in the spectra ) , only the positive parts of the sign - reversed second derivatives and the sign - reversed curvatures have a physical meaning ( the approximate position of peaks ) , and the negative parts are completely ignored . in fig .[ fig:1dcurv](c ) and fig .[ fig:1dcurv](d ) , we plot the edc and mdc along the black and red lines in panel ( a ) , respectively , along with their second derivative and curvature curves ( normalized and sign - reversed ) . 
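as a practical illustration of the 1d curvature discussed above, the following sketch evaluates the sign-reversed, normalized curvature of an intensity curve sampled on a grid. taking the free parameter proportional to the maximum of the squared first derivative is our reading of the prescription given in the text and should be regarded as an assumption; the test curve is purely illustrative.

```python
import numpy as np

def curvature_1d(f, x, a=0.1):
    """Sign-reversed 1D curvature of an intensity curve f(x), of the form
    -f'' / (c0 + f'**2)**1.5, normalized to a maximum of 1 as in the figures.
    c0 = a * max(f'**2) is an assumed choice of the free parameter."""
    df = np.gradient(f, x)
    d2f = np.gradient(df, x)
    c0 = a * np.max(df**2)
    curv = -d2f / (c0 + df**2)**1.5
    return curv / np.max(np.abs(curv))

# purely illustrative example: a slightly asymmetric double-peak curve
x = np.linspace(-1.0, 1.0, 400)
f = 1.0 / (1.0 + ((x - 0.10) / 0.05)**2) + 0.3 / (1.0 + ((x + 0.20) / 0.20)**2)
c_sharp = curvature_1d(f, x, a=0.05)   # smaller a: sharper, better-localized peaks
c_soft = curvature_1d(f, x, a=1.0)     # larger a: result closer to the second derivative
```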
since both the mdc and edc lineshapes are asymmetric with respect to the peak positions, the second derivative curves do not track the peak positions exactly and a small shift towards the highest slope change is observed. in contrast, the curvature analysis provides more reliable peak positions, in addition to giving sharper features. we performed the second derivative analysis for all edcs and mdcs and we show the corresponding second derivative intensity plots in fig. [ fig:1dcurv](e) and [ fig:1dcurv](f), respectively. similarly, the edc- and mdc-curvature intensity plots associated with the data of fig. [ fig:1dcurv](a) are given in fig. [ fig:1dcurv](g) and fig. [ fig:1dcurv](h), respectively. obviously, the curvature method gives sharper features and allows a better tracking of the band dispersion as compared with the second derivative analysis. however, as for the analysis of edcs and mdcs and their corresponding second derivatives, the 1d curvature method presented here has some limitations over the whole range of energy and momentum. while the edc-curvature method is quite reliable for tracking the minima and maxima of band dispersions, it gives unreliable results near the fermi cutoff, which itself appears as a spectral feature. in contrast, the mdc-curvature method is quite precise near the fermi cutoff but fails to reveal precisely the dispersion near extrema. nevertheless, a clever combined use of edc- and mdc-curvature analysis allows the band dispersion to be tracked completely and precisely. a more sophisticated analysis method is proposed in the next section. we now test the 1d curvature method on real experimental data. in fig. [ fig : appl](a), we show an intensity plot recorded at 15 k corresponding to the low-energy band dispersion near the fermi wavevector ( ) of the so-called band in optimally-doped ba ( = 37 k) [ ]. as reported earlier, the dispersion exhibits in the superconducting state a kink or sudden slope change around 25 mev below the fermi energy ( ) due to an electron-mode coupling [ ]. although the kink is visible in the original image, it appears more clearly in the mdc-second derivative plot shown in fig. [ fig : appl](b). as expected from the previous discussion, the result is even sharper with the use of the mdc-curvature method, as illustrated in fig. [ fig : appl](c). the second derivative method is particularly efficient in arpes for the study of band dispersion complexes. in fig. [ fig : appl](d), we show an arpes intensity cut of sr recorded at 40 k along the m direction [ ]. within the wide energy range displayed (down to about 1.5 ev below ), many bands exist and overlap, and it is very difficult to extract their band dispersion. the corresponding edc-second derivative intensity plot shown in fig. [ fig : appl](e) is a clear improvement for the visualization of the main bands. once more, this advantage is reinforced with the edc-curvature method, as illustrated in fig. [ fig : appl](f). the bands are sharper and the reliability in tracking the peak position is improved. color online. (a) arpes intensity plot (from ).
(b)[(c)] corresponding intensity plot of the second derivative [1d curvature] along the momentum direction. (d) arpes intensity plot (from ). (e)[(f)] corresponding intensity plot of the second derivative [1d curvature] along the energy direction.
despite its ability to track band dispersions, the 1d curvature method has some unavoidable problems when analyzing intensity images. the main problem comes from the fact that the images themselves, as well as the features they emphasize, are 2d rather than 1d objects. in this section, we extend the 1d curvature method to a 2d method. as a first example, we treat the simplified case where the two independent variables determining the spectral intensity are equivalent. for example, this situation applies to afm and stm mappings, for which both independent variables represent a distance, as well as to arpes fermi surface mappings, for which both independent variables represent a momentum component. afterwards, we will focus on a more general case, where the independent variables are inequivalent, like in the energy _ vs _ momentum intensity plots used in arpes to reveal energy band dispersions. the equivalent in 2d of the second derivative is the laplacian: the passage from unitless variables to variables with the same units modifies the equation only by a global factor that does not affect the global contrast between different features on an image plot. similarly to the second derivative, the mean curvature function has an equivalent in 2d for a function , which is given by:

$$\frac{\left[1+\left(\frac{\partial f}{\partial \tilde{x}}\right)^{2}\right]\frac{\partial^{2}f}{\partial \tilde{y}^{2}}-2\frac{\partial f}{\partial \tilde{x}}\frac{\partial f}{\partial \tilde{y}}\frac{\partial^{2}f}{\partial \tilde{x}\partial \tilde{y}}+\left[1+\left(\frac{\partial f}{\partial \tilde{y}}\right)^{2}\right]\frac{\partial^{2}f}{\partial \tilde{x}^{2}}}{2\left[1+\left(\frac{\partial f}{\partial \tilde{x}}\right)^{2}+\left(\frac{\partial f}{\partial \tilde{y}}\right)^{2}\right]^{3/2}}$$

color online. (a) image representation of the chinese character _ ho _ (see the text). (b)[(c)] corresponding intensity plot of the laplacian [2d curvature]. the original character in (a)-(c) is given by the red lines. (d) arpes fermi surface mapping of ba . (e)[(f)] corresponding intensity plot of the laplacian [2d curvature].
when the independent variables carry the same units, we need to use the transformations and . considering that the spectral function is defined to a factor, we get:

$$\frac{\left[c_{0}+\left(\frac{\partial f}{\partial x}\right)^{2}\right]\frac{\partial^{2}f}{\partial y^{2}}-2\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\frac{\partial^{2}f}{\partial x\partial y}+\left[c_{0}+\left(\frac{\partial f}{\partial y}\right)^{2}\right]\frac{\partial^{2}f}{\partial x^{2}}}{\left[c_{0}+\left(\frac{\partial f}{\partial x}\right)^{2}+\left(\frac{\partial f}{\partial y}\right)^{2}\right]^{3/2}}$$

where a global factor has been removed and is a free positive parameter. let us now compare both 2d methods.
in fig. [ fig:2dcurv](a), we plot a chinese character ( _ ho _ , which means "good"). the character has been broadened by a gaussian distribution and further blurred by a boxcar filter. although the character is recognizable on the raw image, the strokes are not sharp. the laplacian of this image is displayed in fig. [ fig:2dcurv](b). while the laplacian allows the strokes to be sharpened a little, they remain broad and the whole character appears distorted. in contrast, the result obtained by the 2d curvature method and shown in fig. [ fig:2dcurv](c) gives a much better representation of the original character, with very sharp strokes. only little distortion can be observed near stroke intersections and near the beginning and the end of each stroke. analysis of real arpes data with experimental noise leads to a similar conclusion. in fig. [ fig:2dcurv](d), we display the arpes photoemission intensity mapping around the brillouin zone center of a ba sample, which has been integrated over a 10 mev energy range around the fermi level. the high intensity regions represent the fermi surface. although the raw data are sufficient to distinguish the presence of two fermi surface sheets [ ], the fermi surface contours are difficult to identify precisely. in this case, the laplacian improves the determination of the two concentric fermi surfaces centered at the brillouin zone center. further improvement is provided by the 2d curvature, which makes the fermi surface contours narrower. unfortunately, spectroscopic data cannot always be presented as 2d mappings with and axes having the same units. this is particularly true when dealing with the momentum space, like in arpes, ins and rixs. commonly, the results may represent the spectral intensity as a function of energy, and momentum or momentum-transfer. in that case, the laplacian can be adapted to variables and with different units by using the transformations and , where and are positive parameters carrying the same units as and , respectively. accounting once more for a global positive factor in the absolute value of the experimental spectral response, we obtain: where we removed a global factor. the latter equation has only one independent parameter, . a natural choice of parameter to capture the main features in an image plot is to make the second derivative terms of the same order of magnitude, which is done by setting the ranges of the data in and to similar values. for a square grid for example (same number of columns and rows), that statement is equivalent to , where and are the stepsizes along the and axes, respectively. similarly to the laplacian, equation can be adapted to variables and with different units. using the same transformations for and , we get:

$$\frac{\left[1+c_{x}\left(\frac{\partial f}{\partial x}\right)^{2}\right]c_{y}\frac{\partial^{2}f}{\partial y^{2}}-2c_{x}c_{y}\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\frac{\partial^{2}f}{\partial x\partial y}+\left[1+c_{y}\left(\frac{\partial f}{\partial y}\right)^{2}\right]c_{x}\frac{\partial^{2}f}{\partial x^{2}}}{\left[1+c_{x}\left(\frac{\partial f}{\partial x}\right)^{2}+c_{y}\left(\frac{\partial f}{\partial y}\right)^{2}\right]^{3/2}}$$

where and are the only two (positive) free parameters of this equation. using the same arguments as for the laplacian, we can set to assure a good visual representation.
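a corresponding numerical sketch of the 2d curvature with inequivalent axes, following the expression above, is given below; the grids, the values of the two free parameters and the output normalization are illustrative assumptions.

```python
import numpy as np

def curvature_2d(f, x, y, cx=1.0, cy=1.0):
    """Sign-reversed 2D curvature of an intensity image f[i, j] sampled on the
    coordinate grids x (axis 0) and y (axis 1), with free parameters cx and cy
    weighting the two inequivalent axes, normalized to a maximum of 1."""
    fx, fy = np.gradient(f, x, y)          # first derivatives
    fxx, fxy = np.gradient(fx, x, y)       # second derivatives
    _, fyy = np.gradient(fy, x, y)
    num = ((1.0 + cx * fx**2) * cy * fyy
           - 2.0 * cx * cy * fx * fy * fxy
           + (1.0 + cy * fy**2) * cx * fxx)
    den = (1.0 + cx * fx**2 + cy * fy**2)**1.5
    curv = -num / den
    return curv / np.max(np.abs(curv))
```

in practice, only the positive part of the returned map is displayed, for the same reason as in the 1d case: the negative part does not correspond to peak positions.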
under this choice of parameters, we verify easily that in the limit where , and thus and , the equation simplifies to one which is equivalent to our definition, given in equation , of the laplacian with variables carrying units. in the opposite limit, when , we find an expression whose denominator reduces to $\left[\left(\frac{\partial f}{\partial \tilde{x}}\right)^{2}+\left(\frac{\partial f}{\partial \tilde{y}}\right)^{2}\right]^{3/2}$. the latter expression diverges when $\left[\left(\frac{\partial f}{\partial \tilde{x}}\right)^{2}+\left(\frac{\partial f}{\partial \tilde{y}}\right)^{2}\right]^{3/2}=0$, that is when $|\nabla f(\tilde{x},\tilde{y})| = 0$, which corresponds exactly to the position of the extrema of . therefore, we conclude that the 2d curvature is necessarily an improvement compared to the laplacian in tracking the position of extrema. in figure [ 2ddispersion ], we compare the laplacian and the 2d curvature intensity plots for the simulated electronic dispersion given in figure [ fig:1dcurv](a). as expected, the 2d curvature method gives sharper features. in addition, it tracks the original band dispersion with higher accuracy over the whole range of energy. it is also instructive to note that while the 1d curvature method using edcs and mdcs gives results better than the 2d curvature near the band bottom and near the fermi level, respectively, the 2d curvature is more reliable over the whole energy range. color online. (a) laplacian of the simulated arpes intensity plot shown in figure 1(a). (b) 2d curvature of the simulated arpes intensity plot shown in figure 1(a).
as with the second derivative method, the curvature analysis technique described in this paper is a powerful method to enhance dispersive features in a spectroscopic image. it is very important to keep in mind that this is its only purpose and that the information contained in the original spectra is indeed richer, despite being sometimes difficult to access. these visualization methods can thus be regarded as effective complementary tools in understanding spectroscopic data. for example, while the precise shapes of mdcs and edcs from arpes data are often intimately related to intrinsic scattering and other electronic interactions, information completely lost in the curvature intensity plots, mdcs and edcs are not always good ways to represent dispersion. this is especially true for multi-band systems when the bands are broad. besides, band dispersions are 2d objects ( _ vs _ ), which are thus more naturally represented by a 2d image plot. indeed, mdc- and edc-analysis in arpes often lead to slightly different dispersions, even though real electronic dispersions, namely _ vs _ relationships, are uniquely defined objects. by using the 2d curvature method described here, it is possible to remove this ambiguity. however, we note that such an analysis is accurate only when sufficient data are available along both directions ( and ). although the curvature technique constitutes an obvious improvement over the second derivative method in terms of reliability and sharpness of the spectral features, its main apparent disadvantage is the introduction of arbitrary parameters. as shown above, the curvature method is at least as reliable as the second derivative method in tracking the peak position of dispersive features, whatever the parameters used. similarly, the sharpness of the dispersive features is also improved compared to the second derivative method.
in that sense ,the arbitrariness of the parameters is not a handicap .in fact , it gives some latitude to tune the relative contrast between different features from a single image and allow a better visualization effect .we have developed a method based on the concept of curvature to analyze spectroscopic image plots . as with the second derivative method , which is widely used ,the method presented here is quite efficient for representing dispersive features . using simulated and experimental spectral images, we demonstrated that compared to second derivative analysis , the new curvature method improves significantly the reliability in tracking dispersive feature .moreover , it sharpens spectral features for a better visualization of the spectroscopic features .we acknowledge useful discussions with y. b. huang , x. p. wang , t. j. min , t. ayral and a. van roekeghem .this work was supported by grants from cas ( 2010y1jb6 ) , nsfc ( 11004232 and 11050110422 ) and most of china ( 2010cb923000 ) .ma , z .- h . pan , f. c. niestemski , m. neupane , y .- m .xu , p. richard , k. nakayama , t. sato , t. takahashi h .- q .luo , l. fang , h .- h .wen , z. wang , h. ding and v. madhavan , phys .lett . , * 101 * , 207002 ( 2008 ) .j. schlappa , t. schmitt , f. vernay , v. n. strocov , v. ilakovac , b. thielemann , h. m. rnnow , s. vanishri , a. piazzalunga , x. wang , l. braicovich , g. ghiringhelli , c. marin , j. mesot , b. delley and l. patthey , phys .lett . , * 103 * , 047401 ( 2009 ) .j. mesot , m. randeria , m. r. norman , a. kaminski , h. m. fretwell , j. c. campuzano , h. ding , t. takeuchi , t. sato , t. yokoya , t. takahashi , i. chong , t. terashima , m. takano , t. mochiku and k. kadowaki , phys .b , * 63 * , 224516 ( 2001 ) . w. zhang , g. liu , j. meng , l. zhao , h. liu , x. dong , w. lu , j. s. wen , z. j. xu , g. d. gu , t. sasagawa , g. wang , y. zhu , h. zhang , y. zhou , x. wang , z. zhao , c. chen , z. xu and x. j. zhou , phys .lett . , * 101 * , 017002 ( 2008 ) .h. y. liu , g. f. chen , w. t. zhang , l. zhao , g. d. liu , t .- l .xia , x. w. jia , d. x. mu , s. y. liu , s. l. he , y. y. peng , j. f. he , z. y. chen , x. l. dong , j. zhang , g. l. wang , y. zhu , z. y. xu , c. t. chen and x. j. zhou , phys .lett . , * 105 * , 027001 ( 2010 ) .h. ding , p. richard , k. nakayama , k. sugawara , t. arakane , y. sekiba , a. takayama , s. souma , t. sato , t. takahashi , z. wang , x. dai , z. fang , g. f. chen , j. l. luo and n. l. wang , europhys .* 83 * , 47001 ( 2008 ) .
in order to improve on both the performance and the reliability of the second derivative method in tracking the position of extrema from experimental curves, we develop a novel analysis method based on the mathematical concept of curvature. we derive the formulas for the curvature in one and two dimensions and demonstrate their applicability to simulated and experimental angle-resolved photoemission spectroscopy data. compared to the second derivative, our new method improves the localization of the extrema and reduces the peak broadness for a better visualization on intensity image plots.
the use of textile composites has allowed the conception of engineering pieces with improved performances , but there is still a need for the development of new theories and softwares for the accurate modeling of the mechanical response of such materials during the forming processes of industrial products and components .the internal micro - structure of the woven fabrics reinforcements strongly influences their mechanical properties and makes models capable of capturing all the aspects of their complex mechanical behavior challenging to set up .two main directions of woven tows ( warp and weft ) with each composed by a high number of fibers define the structure of the woven fabric . because of the preferred directions generated by the fiber lines , the material possesses two usually orthogonal directions with a very high extensional rigidity . among the various industrial and commercial woven composites ,it is possible to highlight various schemes of weaving which confer different mechanical properties to the fabric .moreover , another characteristic that can profoundly determine the response of the material is the ratio between the weights ( or sizes ) of the yarns in the warp and weft directions .if the ratio is not equal to one , the woven fabric is called `` unbalanced '' , otherwise it is called `` balanced '' . in an unbalanced fabric , the properties in the two weaving directions strongly differ giving rise to a stiffer and/or stronger direction that can be of use for particular engineering applications . for example , if there is one main direction of loading , the use of an unbalanced fabric allows for the optimization of the material bringing real design advantages . during the process of weaving , small yarns or towsare woven to form a complex texture .these yarns are made with materials that possess high specific mechanical properties such as the traditional carbon and glass fibers or even polymeric and ceramic fibers .furthermore , in the case of the carbon fiber reinforcements , any single yarn is , itself , composed of thousands of small carbon fibers .this complex hierarchical microstructure characterizes the global features of the material . in a wealth of real cases ,it is reasonable to consider that the friction between the yarns prevents the slipping and contributes to the shear rigidity of the fabric that fundamentally determines its mechanical behavior .nevertheless , to comprehensively describe the behavior of woven fabrics , other phenomena besides the shear stiffness of the fabric and the elongation stiffness of the yarns , which can be ideally thought to be quasi - inextensible , must be taken into account . when the yarns that compose the fabric are relatively thick , they possess a relevant bending stiffness at the meso - level that accounts for some specific experimental response ( see ) and , in the case of unbalanced fabrics , the difference in the aforementioned bending stiffness can convey some interesting asymmetric macroscopic deformations .woven fabrics are materials that possess very interesting features in terms of specific stiffness and strength , deformability , dimensional stability , thermal expansion , corrosion resistance and many other properties , basically due to the intrinsic mechanical properties of the material comprising the fibers .of all such desirable features , the high shear deformability makes it possible for these materials to develop in various shapes . 
however, without tools to forecast phenomena such as the onset of wrinkling and slippage, which limit the admissible deformation during stamping operations, it is not possible to fully exploit the huge potential of these materials. hence, the development of comprehensive models for the description of the forming of these kinds of materials is of primary importance. in the past years, researchers have proposed different approaches to such textile forming problems, mainly focusing on the development of discrete and continuum models which are able to account for the basic deformation mechanisms that occur during the forming of woven composite reinforcements. in the first part of this paper, the results of some experiments are presented that clearly show how the unbalance of the warp and weft material properties influences the overall macroscopic behavior of the considered composite interlocks. more particularly, it will be shown that, from an experimental point of view, unbalanced fabrics with very different local bending stiffnesses give rise to asymmetric macroscopic deformations during a bias-extension test. in fact, different complex phenomena take place at the mesoscopic level which are, to a large extent, related to
* the in-plane shear deformation (change of direction of the warp and weft yarns),
* the high differential bending rigidity of the two families of yarns,
* the relative sliding of the two families of yarns.
in particular, the strong unbalance in the bending rigidities results in an s-shaped deformed specimen. a second-gradient continuum model, able to capture the basic features of those phenomena, is introduced and some numerical simulations, which allow interpretation of the results of the experiments presented in this paper, are proposed. the main scope of the present paper is to propose a continuum model which is able to account for all the aforementioned microstructure-related deformation mechanisms while remaining in a continuum framework. it will be shown that
* the shear deformation can be included by introducing a suitable energetic cost associated with the angle variation between the warp and weft directions,
* the differential bending rigidities of the two families of yarns can be considered by means of the introduction of second-gradient terms, and such unbalance is responsible for the asymmetric s-shape of the specimen,
* the relative sliding of the fibers can be included in the continuum model by introducing ``fictive'' elongations in the warp and weft directions (a schematic energy density organizing these three contributions is sketched right after this list).
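before detailing these points, one schematic way to organize the three contributions listed above in a hyperelastic second-gradient energy density is sketched below. this additive, quadratic form is purely illustrative (an assumption of this note) and is not the constitutive law identified in the paper; the symbols are defined in the comments.

```latex
% Illustrative only: an assumed additive second-gradient energy density combining
% the three mechanisms listed above (not the constitutive law identified in the paper).
%   \varepsilon_1, \varepsilon_2 : ``fictive'' elongations along warp and weft (yarn sliding)
%   \gamma                       : angle variation between the warp and weft directions (in-plane shear)
%   \kappa_1, \kappa_2           : in-plane curvatures of the two yarn families (second-gradient measures)
W \;=\; \tfrac{1}{2}\,K_1\,\varepsilon_1^{2} \;+\; \tfrac{1}{2}\,K_2\,\varepsilon_2^{2}
      \;+\; W_{\mathrm{sh}}(\gamma)
      \;+\; \tfrac{1}{2}\,B_1\,\kappa_1^{2} \;+\; \tfrac{1}{2}\,B_2\,\kappa_2^{2},
\qquad B_1 \gg B_2 .
```

in such a schematic form, the strong unbalance between the two bending coefficients is what would encode, at the continuum level, the differential bending rigidity responsible for the asymmetric s-shape discussed in the following sections.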
in what follows, the term ``fictive'' elongation will denote an elongation of the yarns as accounted for by the introduced continuum model. the adjective ``fictive'' is meant to stress the fact that such an elongation in the continuum model actually represents, to a large extent, a sliding of the yarns in the real mechanical system. this expedient allows one to keep using a continuum model, even if the micro-displacement field of the real system suffers tangent jumps due to the relative motion of the yarns. complex theoretical formulations of the considered problem are not given, the main aim being that of noting some peculiar behaviors of unbalanced fabrics based on phenomenological observations. the introduced equivalent continuum model can be seen as a reasonable compromise between the detail of the description of the behavior of the underlying microstructures and the complexity of the adopted model. to characterize the macroscopic mechanical response of the woven composite reinforcements, two sets of experimental tests are proposed. indeed, the behavior of such materials must be analyzed under different types of loads and, in particular, the most important feature to be determined is the in-plane shear response of the woven composite. in fact, due to the quasi-inextensibility of the fibers, the main deformation mode of woven composite reinforcements during a forming process is the in-plane shear deformation (angle variation between the warp and weft). two main tests are in current use for the determination of the in-plane shear stiffness of the fabrics. the first test developed was the picture frame test (pft). in the pft, a square specimen of the woven composite is ideally subjected to a state of pure shear deformation. nevertheless, the state of pure shear is only theoretical: any misalignment of the specimen leads to an increase of the measured load. in addition, the fact that the yarns are tightly clamped fixes the direction of the yarns at the four clamps, and this generates bending of the fibers in the vicinity of the boundary during the motion. this usually results in an overestimation of the shear parameter due to these boundary effects. the second test used for the measurement of the in-plane shear stiffness is the bias-extension test (bet). in this test, one of the edges of a rectangular sample of woven composite reinforcement, in which the yarns are initially oriented at with respect to the loading direction, is displaced in the direction of the axis. the length/width ratio of the specimen must be larger than 2 (in the present paper the ratio is 3). when one of the ends of the specimen is displaced by a given amount, the formation of three types of regions (a, b and c) with almost homogeneous behavior in their interior can be observed (figure [ fig : bet - schematics ]). in each of these areas, the angle between the warp and weft directions is almost constant. this specific kinematics is due to the quasi-inextensibility of the yarns and to rotation without slippage between warp and weft yarns at the crossover points. the advantage of the bet, with respect to the pft, is that each yarn has at least one edge which is free, and this free edge is thought to be sufficient to avoid spurious tensions in the yarns, as observed in .
] still , there are some phenomena that are not described in the schematics of figure [ fig : bet - schematics ] and which are nevertheless important for the complete understanding of the bet . as a matter of fact , the transition between two different areas at constant shear angle is not concentrated in a line .in fact , it is possible to observe the onset of transition layers , in which there is a gradual variation of the angle as shown in figure [ fig : bet - boundary ] . in such transition layers ,the angle variation between the two constant values is achieved by a smooth pattern which is directly associated to a local bending of the yarns . ]one more feature that can be highlighted is that the free boundary does not remain straight during the test as is assumed in the scheme of figure [ fig : bet - schematics ] , but it shows some curvature ( figure [ fig : bet - curvature ] ) .both of these phenomena can be understood by considering that the yarns possess a bending stiffness that depends on the micro - structure of the fabric . because the yarns always possess a non - vanishing bending stiffness , a variation of orientationcan not be concentrated in a point , but a gradual variation of such orientation takes place ( bending of the yarns ) .therefore , such angle variations must happen in a layer of non - vanishing size . with the usual first - gradient models ,it is not possible to include such micro - structural related phenomena while , with the aid of second - gradient theories , promising results have been obtained in . in those models ,a specific constitutive coefficient can be introduced which can be directly related to the bending energy of the fibers . ]the second - gradient continuum model introduced in the present paper is able to describe the main macroscopic and mesoscopic deformation mecanisms taking place during a bias extension test on an unbalanced fabric .the use of second - gradient theories to capture some effects of the microstructure on the overall behavior of microstructured material is not new and , as a matter of fact , it has been known since the pioneering works by piola , cosserat , midlin , toupin , eringen , green and rivlin and germain that many microstructure - related effects in mechanical systems can be still modeled by means of continuum theories .more recently , these generalized continuum theories have been widely developed to describe the mechanical behavior of many complex systems , such as exotic media obtained by homogenization of heterogeneous media .the main aim of the present paper is to show and discuss some specific results obtained during an uniaxial bet of unbalanced specimens of fibrous composite reinforcement .the task of fully exploring what theoretical tools are needed to optimize the modeling of such unbalanced materials is left as a subsequent work , the scope of the present manuscript being that of explaining the principal micro and macro deformation mechanisms which take place when observing the deformation of such unbalanced fabrics .in fact , based on a phenomenological observation of some experimental results , it will be shown that the deformation modes which take place in a bet on an unbalanced fabric are completely different from those specific of the bet on standard fabrics described above . 
moreover, a second - gradient continuum model will be proposed for a reasonable description of the micro and macro deformation of such unbalanced materials by simultaneously pointing out the strong and weak points that the employment of such a continuum modeling may have concerning the design of complex engineering parts .the authors are aware of the fact that extra experimental campaigns would be needed for a complete validation of the proposed second - gradient model .more particularly , such more comprehensive campaigns would need the setting up of the following experiments : * repetition of the same test ( bias extension test ) on different specimens with the same dimensions and characteristics .these tests would be needed to identify experimental errors that can be introduced during the experimental campaign and to precisely account for such variability in the performed study .* conception of independent tests ( other than the bias extension test ) which are suitably engineered to give rise to the same microscopic deformation modes ( fibers bending and slipping ) , but with different loading conditions .this test would allow the confirmation of the values of the parameters proposed in the present paper for the considered materials .* realization of the bias extension test on specimens with smaller dimensions to unveil possible size effects which are known to be possible in higher gradient or micromorphic materials .* realization of specific measurements which are devoted to measure local deformation mechanisms with the due precision .digital image correlation techniques could represent a good choice to effectively proceed in this direction .notwithstanding the undiscussed interest of the aforementioned tests and their necessity for a complete validation of the presented second - gradient model , they are not the primary objective of the present paper .the primary aim is to identify the main microstructure - related deformation modes in unbalanced woven fabrics , i.e. the differential bending of the fibers and the fibers slipping , and to show , via a reasonable second - gradient model , that they can not be neglected .the phase of conception of such extra experimental campaigns for a precise identification of second - gradient parameters on a given class of fibrous woven materials is postponed to further investigations .it has to be explicitly remarked that mechanical conditioning was not accounted for in the present study with the aim of being closer to the conditions of a real forming process .unbalanced fibrous composite reinforcements are such that the warp and weft yarns are comprised of a very different number of fibers and , therefore , the mechanical properties in the two directions can differ considerably .the material studied in this paper is an unbalanced 2.5 d composite interlock with a characteristic weaving pattern in the direction of the thickness which can be observed in figure [ fig : test - twill ] .the main advantage of interlock reinforcements is to overcome the low delamination fracture toughness of laminated composites .the bet is performed for two samples of 3x3 twill unbalanced carbon interlocks .the two specimens differ one from the other because they present different unbalance ratios between the warp and weft . 
to analyze the structure of the given specimen , it is possible , using tomographic optical systems , to obtain some virtual slices ( tomographic images ) that allow us to see inside the object without cutting .tomographic optical systems is a technology that , through the use of any kind of penetrating wave , reconstructs the geometry of a specific cross section of a scanned object .for specimen i , a tomographic image was obtained ( figure [ fig : test - twill ] ) where it is possible to observe the high unbalance of the specimen and the characteristic weaving pattern of the 2.5 d interlock .x - ray tomography of the considered interlock ( specimen i ) . ] to conduct the bet , the sample is positioned between the upper and lower jaws of a 100 kn zwick meca tensile machine ( figure [ fig : test - device ] ) .the force needed to deform the specimen is on the order of 100 n and , therefore , an auxiliary load cell of 500 n is used to allow for good resolution of the data from the test . during the test ,the lower clamp is static while the upper clamp is set , with a displacement - control , to move from 0 to 60 mm up .the displacement speed of the movable clamp is set to 4 mm / min . ] to analyze the experimental results and to reveal the characteristic deformation modes of the mesostructure , high - quality pictures of the samples during the deformation process are valuable .therefore , a 16mp camera in combination with a 200x optical zoom and two led adjustable color lights were used during the test . the detailed analysis of pictures of a black carbon specimen are still difficult to perform . for this reason, a white grid of lines aligned with the textile warp and weft yarns was added on the shear area a of the specimen i ( figure [ fig : test - samples ] ( a ) ) .however , during the analysis of the first specimen some difficulties following the deformation of the specimen were still present .therefore , for specimen ii , it was decided to locate only a couple of white points on top of area a that , with a post - processing step , lead to an easier access to important data like the local shear angle or the sliding between yarns .the initial configuration of the interlock specimens and the added reference lines and points are illustrated in figure [ fig : test - samples ] ., width=302 ] the results of the tests in terms of the load - displacement curve are shown in figure [ fig : test - fd ] .it can be seen that the force response for each of the specimens is almost linear except for a slight increase of stiffness at the end of the test .it is also easy to note that the two materials present very different macroscopic stiffnesses due to their different internal architecture . ]figure [ fig : test - shape ] shows the deformed shapes for both specimens during the development of the test .the quality of the test on the second specimen is much lower than that performed on the first one because some intrinsic asymmetries were introduced in the second specimen due to a non - perfect cutting .for this reason , the considerations will be illustrated by using the images relative to the specimen i , but analogous ones can be drawn for the specimen ii . 
,width=415 ] , width=415 ] the main remark which may be inferred from the observation of the macroscopic deformed shape of the two specimens ( figure [ fig : test - shape ] ) is that it resulted in an asymmetric s - shape .this macroscopic asymmetry is not surprising considering the fact that the properties of the two families of yarns are very different in the two directions .what needs to be highlighted is the fact that such an asymmetric shape is related to precise deformation mechanisms of the meso - structure which have to be investigated to correctly understand the behavior of such unbalanced materials . in conclusion , a model , which represents with sufficient detail the the mechanical behavior of unbalanced fabrics ,must be conceived in such a way to describe with sufficient accuracy : * the macroscopic s - shaped deformation of the material , * the mesoscopic deformations of the yarns inside the material as related to the observed macroscopic s - shape . to this goal , it is henceforth essential to observe what are the characteristic deformation patterns of the yarns inside the considered unbalanced fabric when subjected to a bet .as it will be better demonstrated in the remainder of this section , the main mesoscopic deformation mechanisms which take place during a bet performed on an unbalanced fabric are * the in - plane shear deformation ( angle variation between the yarns with respect to their initial configuration ) * the local differential bending of the warp and weft yarns due to the unbalance of the fabrics * the relative slippage of the contact points between warp and weft yarns .ideally , if perfect pivots were placed to connect the warp and weft without interrupting the continuity of the yarns and if the two families of fibers could be modeled as wires with infinite rigidity with respect to elongation and vanishing bending stiffness , the observed motion would be the one presented in figure [ fig : bet - schematics ] .thus , the only deformation mode would be the variation of the direction of the fibers which could be directly interpreted as the angle variation between warp and weft .nevertheless , such an ideal situation is not the case in the considered material because the yarns present a non - vanishing bending stiffness and a relative slipping of the warp with respect to weft can also be observed .more particularly , as far as the thin yarns are concerned , they possess a very low bending stiffness and , as it is possible to see in figure [ fig : test - bending ] ( c ) , there is a very sharp variation of direction that can be considered to be concentrated in a very narrow layer .instead , in the case of the thick yarns ( figure [ fig : test - bending ] ( b ) ) , there is almost no measurable change in direction along the whole fiber , feature that can be uniquely related to an extremely high bending stiffness .if the two families of fibers are considered to be quasi - inextensible , the s - shape obtained in the test can be considered to be related to the very different bending stiffness of the two sets of yarns . as a consequence of this observation, it must be stated that it is not possible to describe this specific behavior without a model which accounts for the bending of the yarns at the mesoscopic level . 
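under the ideal pin - jointed kinematics recalled above ( inextensible yarns , perfect pivots , no slippage ) , the shear angle in the central zone of a bias extension test follows directly from the clamp displacement . the sketch below implements the standard closed - form relation , with the caveat that , for the unbalanced interlock studied here , yarn bending and slippage make the measured angles deviate from it ; the specimen dimensions in the example are hypothetical and the variable names are ours .

```python
import numpy as np

def ideal_bet_shear_angle(d, L0, w0):
    """central-zone shear angle (radians) for an ideal pin-jointed bias
    extension test: inextensible yarns, perfect pivots, no slippage.
    d : clamp displacement, L0 : clamped specimen length, w0 : specimen width."""
    D = L0 - w0                              # length of the sheared region
    c = (D + d) / (np.sqrt(2.0) * D)         # cosine of the fibre half-angle
    theta = np.arccos(np.clip(c, -1.0, 1.0))
    return np.pi / 2.0 - 2.0 * theta         # angle change between warp and weft

# example with hypothetical dimensions: 20 mm displacement on a 200 x 100 mm specimen
print(np.degrees(ideal_bet_shear_angle(20.0, 200.0, 100.0)))   # ~26 degrees
```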
, width=566 ] it is finally noted that the presence of measurable slippages of the fibers ( up to a maximum that is around 10% of the total length of the yarns ) strongly characterize this test .this phenomenon should be included in a complete model for this test and in general for woven composites .such slippages can be qualitatively recognized when comparing figures [ fig : test - samples ] ( b ) and [ fig : test - shape ] ( b ) .it is possible to see that some points which were initially located on the white cross marks which were drawn on the specimen have moved , thereby breaking the continuity of the cross marks themselves ( see also points in figure [ fig : test - slides ] ) . ]a macroscopic indicator of the sliding may be found in the global in - plane thickness of the specimen .if relative sliding of the fibers is permitted , the height of the specimen measured in the middle of the specimen itself can be much higher than the height that the specimen would have if no relative motions were permitted .in some sense , such effect of the sliding can be modeled as a possible `` fictive elongation '' of the fibers in the two directions .more precisely , since the yarns can slide in the real situation , they can rearrange themselves in such a way that the resulting apparent in - plane thickness of the specimen is much higher than the one that the specimen would theoretically have if the yarns were hinged together .it is clear that the presence of such internal sliding weakens the basis on which a continuum theory is founded . nonetheless , it is possible to continue using a continuum model with the limit of modeling such internal sliding as a `` fictive '' elongation of the yarns in the two directions .the price to pay for this modeling assumption is that the result of the simulations is a microstructure which is not perfectly superposable to the real one ( the `` real '' sliding is replaced by `` fictive '' elongations of the yarns ) . in spite of that price , the overall macroscopic pattern of the deformation is recovered , together with the main features of the deformation of the underlying microstructure .a model of this type could be of use for the simulation of the forming of unbalanced fabrics , even if the elongations which would be eventually present should be interpreted , at least partially , as relative sliding of the yarns .finally , it must be explicitly remarked that the presence of the described relative sliding does not allow a direct interpretation of the in - plane shear deformation as the angle variation between the warp and weft directions .in fact , the in - plane shear is simply defined as the angle variation between the current direction of the considered yarn and its initial direction. summarizing , it is possible to say that the main mesoscopic deformation mechanisms which take place during the bet on an unbalanced fabric have been isolated .these significant contributions include the in - plane shear of the yarns , the differential local bending of the yarns and the sliding of the yarns . 
in the next section a macroscopic , second - gradient , continuum model which is able to capture such mesoscopic deformation mechanisms together with their macroscopic counterpart will be introduced .this model will provide a sensible description of the macroscopic s - shaped deformation of the considered unbalanced fabric , as well as a reasonable prediction of the mesoscopic deformation of the yarns .as already remarked , to describe the response of the considered unbalanced fabric a second - gradient continuum model is introduced following the approaches of . as long as no slipping occurs, two superimposed fibers can rotate around their contact point and , if a straight line is drawn on the textile reinforcement , one could see that it curves but remains continuous ( see e.g. ) . as it has been established on a phenomenological basis , this situation is not always the case when performing a bet on unbalanced fabrics , because some relative sliding of the warp and weft yarns may occur during the test .the effect of such sliding is included in the overall behavior of the material by modeling it as a possible `` fictive elongation '' of the yarns , so reaching the threefold objective of * continue to use a continuum approach to model the mechanical behavior of unbalanced fabrics ( it provides significant advantages in terms of engineering finite element modeling of the forming processes ) * obtain realistic macroscopic deformations for the considered unbalanced specimen * obtain realistic mesoscopic deformations of the yarns ( the differential local bending of the fibers is precisely described and the sliding is accounted for by means of the introduction of fictive elongations ) . of course , the authors are aware of the fact that , such hypothesis of fictive elongation could be too restrictive for samples subjected to high strains , because it could produce some problems related to wrinkling during forming simulations .nevertheless , given that only small - to - moderate strains are considered in this paper , the adopted continuum hypothesis can be thought to represent a reasonable approximation of the underlying , microstructure - related , deformation mechanisms .the main point in the continuous hyperelastic models is the definition of the strain energy density .even if the energy is traditionally a function of only the strains ( first - gradient theory ) in different works such as , it is underlined that these types of energy may not be sufficient to model a class of complex contact interactions which are related to local bending stiffness of the yarns and which macroscopically affect the overall deformation of interlocks .as it has been previously highlighted , the phenomenon of the local bending of the yarns is a mesoscopic deformation pattern when applying the bet to an unbalanced fabric .for this reason , a model which is able to correctly account for the different bending stiffnesses of the two families of yarns must include second - gradient effects . 
as a matter of fact, a first - gradient continuum model in which very different tension stiffnesses of the yarns in the two directions are considered would allow the reproduction of an asymmetric macroscopic s - shape of the specimen .nevertheless , the description of the deformation of the single yarns would be unrealistic : sensible differential elongations of the two sets of yarns and piecewise straight lines would be the only possible deformed shapes for the yarns constituting the fabric .even if an `` ad hoc '' highly anisotropic choice of the constitutive parameters would allow reproduction of the desired macroscopic asymmetric s - shape , the material parameters would not be representative of the actual material behavior .this phenomenon will be explicitly shown in section [ subfirstgradient ] . in what follows , a second - gradient energy model that takes into account , in an averaged sense , some micro - structural properties such as the bending of the fibers will be presented with the same spirit of what done e.g. in .the second - gradient model proposed here is intrinsically macroscopic , in the sense that it is not easy to directly relate the introduced macroscopic coefficients to precise microscopic characteristics of the fibers . to this aim , suitable bottom - up approaches for networks of fibers should be used following the ideas presented e.g. in .when working with a continuum model , it is important to introduce a lagrangian configuration and a suitably regular kinematic field which associates to any material point its current position at time t. the image of the function gives , at any instant t , the current shape of the body : this time - varying domain is usually referred to as the eulerian configuration of the medium and , indeed , it represents the system during its deformation . because they will be used in the following , the displacement field , the tensor and the right cauchy - green deformation tensor and are second order tensors of components and respectively , then , where einstein notation of sum over repeated indexes is used . ] are introduced .the first - gradient kinematics of the continuum must be enriched by considering the second order tensor field which accounts for terms that can be associated to the macro - inhomogeneity of micro - deformation in the microstructure of the continuum .this model could be considered as a limiting case of generalized continua with microstructure like the one presented in for the linear - elastic case and those of for the case of non - linear elasticity . because a second - gradient theory can be readily obtained as limiting case of the micromorphic theory, one can then derive the second - gradient contact actions in terms of the micromorphic ones following the procedure used in .some of the possible types of constraints that could be included in the proposed micromorphic model which , for example , impose inextensibility of yarns giving rise to so - called micropolar continua are presented in . a hyperelastic , orthotropic, second - gradient model can be applied to the case of relatively thin fibrous composite reinforcements at finite strains . for the strain energy density which shall be used to simulate the mechanical behavior of the fibrous composite reinforcements in the finite strain regimeit is assumed a decomposition such as : in this formula , is the first - gradient strain energy and is the second - gradient strain energy . 
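in our notation ( the authors' own symbols are not reproduced here ) , the first - gradient quantities just introduced are the standard ones ,
\[
\mathbf u(\mathbf X,t)=\boldsymbol\chi(\mathbf X,t)-\mathbf X,\qquad
\mathbf F=\nabla_{\!\mathbf X}\,\boldsymbol\chi,\qquad
\mathbf C=\mathbf F^{T}\mathbf F,\qquad
C_{IJ}=F_{kI}\,F_{kJ},
\]
with sums over repeated indices , while the second - gradient kinematics additionally involves \( \nabla\mathbf F \) ( equivalently \( \nabla\mathbf C \) ) . with these quantities , the assumed split of the strain energy density reads \( W(\mathbf C,\nabla\mathbf C)=W_{I}(\mathbf C)+W_{II}(\nabla\mathbf C) \) , the first term being the first - gradient contribution and the second the second - gradient one .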
in this work , only an elastic analysis will be considered neglecting the phenomena such as the damage that can be analyzed in a second - gradient framework as done in . the additive decomposition of the strain energy density of eq .is based on the assumption that first and second - gradient effects are uncoupled in the considered problem .explicit expressions for isotropic strain energies suitable for modeling isotropic materials even at finite strains are available in literature ( see e.g. ) . for linear elastic isotropic second - gradient media , it is possible to determine generalized constitutive laws ( see ) . instead , in the case of orthotropy , strain energy potential expressions suitable to describe the real behavior of such materials are not common in the literature .for orthotropic materials , certain constitutive models are for instance presented in , where some polyconvex energies are proposed to describe the deformation of rubbers in uniaxial tests .explicit anisotropic hyperelastic potentials for soft biological tissues are also proposed in and reconsidered in in which their polyconvex approximations are derived .other examples of polyconvex energies for anisotropic solids are given in .it is even less common to find in the literature reliable constitutive models for the description of the real behavior of fibrous composite reinforcements at finite strains , but some attempts can be for instance recovered in . furthermore , the mechanical behavior of composite preforms with a rigid organic matrix ( see e.g. ) is quite different from the behavior of the sole fibrous reinforcements ( see e.g. ) rendering the mechanical characterization of such materials a major scientific and technological issue . in this work ,the preferred directions and are considered as the initial directions of the yarns and the direction as the normal to the plane of the woven fabric in the reference configuration .for relatively thin interlocks loaded in the plane , the out - of - plane effects can be considered to be negligible , therefore it is possible to use an orthotropic energy with an expression of the type : where the in - plane invariants of the cauchy - green deformation tensor are defined as : * represents the elongation strain in the direction * represents the elongation strain in the direction * represents the shear strain between the directions and that can be related to the angle variation these invariants of the tensor can be used to describe the main first - gradient deformation mechanisms intervening during the considered test that are the in - plane angle shear deformation of the yarns and the slippages of the two families of fibers .the energy associated with the shear angle variation can be described as a function of while the elongation parameters and are used to model the presence of the slippage by means of an equivalent continuum model .it is possible to develop complex non - linear energies , but that development is not one of the aims of this paper . 
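with \( \mathbf D_1 \) and \( \mathbf D_2 \) the unit vectors along the initial warp and weft directions , a common choice for the three in - plane invariants referred to above ( the labels and normalisations below are ours , following standard hyperelastic models for fibrous reinforcements ) is
\[
I_{4}=\mathbf D_1\cdot\mathbf C\,\mathbf D_1,\qquad
I_{6}=\mathbf D_2\cdot\mathbf C\,\mathbf D_2,\qquad
I_{8}=\mathbf D_1\cdot\mathbf C\,\mathbf D_2,
\]
so that \( \sqrt{I_4} \) and \( \sqrt{I_6} \) are the stretches of the two yarn families and , for initially orthogonal yarns , the shear angle \( \gamma \) between them satisfies \( \sin\gamma = I_{8}/\sqrt{I_{4}\,I_{6}} \) . the simple quadratic energies adopted below are written directly in terms of these quantities .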
instead, a simple first - gradient energy is introduced to describe the overall behavior of the considered material , such as : +\frac{1}{2}k_{sh}i_{8}^{2}\ ] ] this energy introduces an extensional equivalent stiffness and a shear stiffness .the first two terms account for the equivalent elongations of the yarns in the warp and weft directions , respectively .as already mentioned , such an elongation is fictitiously used to model the slipping of the yarns one with respect to the other .such approximation can be considered to be sensible because the overall effect of the slipping at the mesoscopic level results in a macroscopic increase of the width of the deformed specimen .such a width increase is fictitiously interpreted as an `` elongation '' of the yarns in the considered continuum model .the last term accounts for the shear deformation : angle variation between the yarns in the current configuration with respect to the reference one .it must be repeated once again that the extensional effects included in the model are associated with the description of the slipping of the two families of fibers while the shear term models the energy associated with the in - plane shear deformation . because the two extensional stiffness parameters describe the mutual interaction between the two families of yarns , and it is reasonable that the friction properties remain the same for the two directions , the simplification of the same extensional stiffness in the two directions of the fabrics was assumed .furthermore , with such a simplification , the formulation has a lower number of parameters and the asymmetry of the material can be completely attributed to the different values of the second - gradient parameters in the two directions .the main observable experimental phenomenon that can not be included by means of a classical first - gradient theory is the bending stiffness of the yarns . on the other hand , such local bending of the yarnscan be included by introducing a second - gradient energy function of the space derivatives of the invariant , rough descriptors of the in - plane curvature of the yarns of the fabric .the constitutive choice of the second - gradient energy for the considered unbalanced fabrics is of the type where and are the derivatives of with respect to the space coordinates along and . once again , a simple form was chosen even for the second - gradient energy and , furthermore , because the fabric is strongly unbalanced , the coefficient of the two derivatives were considered distinct leading to an energy of the type : this energy introduces the local bending stiffness of the yarns in direction and the local bending stiffness of the yarns in direction into the model . the second - gradient hyperelastic model proposed in this paperis based on a phenomenological approach : the addition of the second - gradient terms in the strain energy density as specified in eq . allows us to describe with a reasonable accuracy the onset of the observed phenomena . summarizing, the final form of the energy considered here takes the form : +\frac{1}{2}k_{2}i_{8}^{2}+\frac{1}{2 } \ , \nabla i_8\cdot \mathbf k \cdot \nabla i_8 \label{eq : secgradconstitutive}\ ] ] where it was set the governing equations in strong form and the associated boundary conditions of the considered generalized continuum can be a useful tool for the understanding of the physics of the considered problem. 
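the two energy contributions just introduced can be evaluated pointwise once the invariants and the in - plane gradient of the shear invariant are known . the sketch below does this for the diagonal second - gradient stiffness used here ; the quadratic elongation term is an assumption of ours , since the corresponding expression is not fully reproduced above , and all parameter values are placeholders .

```python
import numpy as np

def strain_energy_density(I4, I6, I8, grad_I8,
                          K_el=1.0, K_sh=1.0, alpha1=1.0, alpha2=1.0):
    """pointwise evaluation of the split W = W_I + W_II discussed in the text.
    I4, I6  : squared stretches along the initial warp / weft directions
    I8      : in-plane shear invariant
    grad_I8 : (dI8/dX1, dI8/dX2), derivatives along the initial yarn directions
    the quadratic elongation term is an assumed form; the shear and
    second-gradient terms follow the structure given in the text."""
    W_first = 0.5 * K_el * ((np.sqrt(I4) - 1.0) ** 2 + (np.sqrt(I6) - 1.0) ** 2) \
              + 0.5 * K_sh * I8 ** 2
    # diagonal second-gradient stiffness: distinct bending-related moduli for
    # the two yarn families (alpha1 != alpha2 for an unbalanced fabric)
    W_second = 0.5 * (alpha1 * grad_I8[0] ** 2 + alpha2 * grad_I8[1] ** 2)
    return W_first + W_second

# pure shear state with no fictive elongation and a mild shear gradient
print(strain_energy_density(1.0, 1.0, 0.3, (0.05, 0.0)))
```

the strong - form equations and boundary conditions associated with this energy are the subject of the next paragraphs .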
in particular , the boundary conditions which can be assigned in the considered problem describe the possible interactions of the external world with the considered specimen as it will be shown in the remainder of this section . following what done in , it is convenient to pass through a reformulation of the problem which makes uses of a constrained micromorphic model .more precisely , instead of minimizing a problem in which a second - gradient energy of the type ( [ eq : secgradconstitutive ] ) is considered , a supplementary kinematical field and a lagrange multiplier are introduced such that the strain energy density can be written as +\frac{1}{2}k_{2}i_{8}^{2}+\frac{1}{2 } \nabla \psi \cdot \mathbf k \cdot \nabla \psi+ \lambda \( \psi - i_8).\label{eq : secgradconstitutive_microm}\ ] ] a minimization problem of such a micromorphic model leads to the following set of bulk equations with \ ] ] and on the other hand , the boundary conditions that can be assigned on the boundaries of the considered specimen are and if now one lets tend to the angle variation , the second - gradient model presented in the previous subsection can be recovered .the implementation of a second - gradient theory through a constrained micromorphic theory in which is set to be equal to allows a better understanding of the boundary conditions which take an intuitive meaning . in the proposed model , and with reference to eqs . and ,can be assigned on the boundaries of the considered specimen the force or the displacement and the couple or the angle variation .the boundary conditions that are used in the numerical simulations proposed in the next subsection are : * vanishing displacement on one clamped edge and imposed displacement on the second clamped edge , * vanishing angle variation at the two clamped hands ( the angle between the yarns does not vary due to the clamp ) , * vanishing forces and couples on the two free boundaries . in this section , the results obtained with the introduced hyperelastic , unbalanced second - gradient model are shown . to clearly evaluate the obtained results ,it is useful to show once again the experimental shape ( figure [ fig : test-56 ] ) as reference for all of the following considerations . ]it is reminded to the reader , that the aim is not to fit the proposed model on the material ii due to some problems that occurred during the cutting of the specimen and , thus , could have affected the behavior of the material in a non - perfectly controllable manner . the constitutive parameters appearing in eq . were heuristically chosen by using an inverse method based on physical observations .more particularly , the values of the second - gradient parameters were initially chosen in such a way that the bending stiffness of the thin yarns is very small ( eventually almost vanishing ) , while the bending stiffness of the thick yarns is subsequently chosen to fit at best the experimental s - shape of the specimen .notwithstanding the values given to the parameters and the complete macroscopic s - shape of the specimen can not be recovered without tuning the value of the `` sliding '' parameter which is seen to have a direct influence on the in - plane thickness of the specimen .activating this parameter , the height of the specimen in the middle of the specimen itself starts increasing and becomes closer to the experimental shape .a subsequent parametric study is made on the values of the quoted parameters so as to reach the best possible fitting of the experimental s - shape . 
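the parametric study mentioned above can be scripted as a simple grid sweep over the fictive - elongation stiffness and the two bending moduli . the sketch below only shows the structure of such a sweep : the actual objective requires a finite - element solve of the second - gradient model , and is replaced here by a synthetic placeholder , so all names and values are hypothetical .

```python
import numpy as np
from itertools import product

def shape_error(k_el, alpha1, alpha2):
    """placeholder for the mismatch between simulated and measured s-shape;
    in the paper this requires a finite-element solve of the second-gradient
    model, so a synthetic bowl around hypothetical best-fit values is used
    here only to make the sketch runnable."""
    target = np.array([1.0, 0.05, 5.0])
    return np.linalg.norm(np.log10([k_el, alpha1, alpha2]) - np.log10(target))

grid = {
    "k_el":   np.logspace(-1, 1, 5),    # fictive-elongation stiffness
    "alpha1": np.logspace(-3, -1, 5),   # bending modulus of the thin yarns (kept small)
    "alpha2": np.logspace(0, 2, 5),     # bending modulus of the thick yarns
}
best = min(product(*grid.values()), key=lambda p: shape_error(*p))
print(dict(zip(grid, best)))
```

the in - plane shear stiffness is deliberately left out of this sweep , as discussed next .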
a different treatment is left to the in - plane shear parameter whose value is tuned to fit the experimental load - displacement curve ( figure [ fig : sim - ld ] ). a suitable optimization technique could be set up to recover the values of the parameters which better fit the experimental evidence .although interesting , this point is left open for further works which will be based on a more representative amount of experimental data . here , it is only presented a more intuitive calibration of the parameters which nevertheless allows to unveil the single effect of each of them on the macroscopic deformation of the considered specimen .the second - gradient hyperelastic model was implemented in to model the behavior of the first specimen , and the parameters used in the simulations are shown in the tab . [table : par ] .such implementation has been performed using the `` weak form '' package in which the expression for the power of internal actions associated to the energy in eq . has been explicitly given by imposing the constraint via a suitable lagrange multiplier . moreover , the boundary conditions used are those explicitly mentioned at the end of section [ strongform ] ..parameters of the proposed second - gradient continuum model[table : par ] [ cols="^,^,^,^",options="header " , ] first - gradient solution obtained with the constitutive choice of the parameters given is tab . [table : par-2 ] . ]it is possible to notice from the first - gradient results shown in figure [ fig : first - gradient - solution ] that the obtained deformation pattern turns to be completely unphysical for the following reasons : * even if an asymmetry of the macroscopic shape can be obtained , the boundary of the specimen obtained by means of the first - gradient model is piecewise linear , so that the actual curvature of the specimen can not be precisely recovered ; * the pattern of the microstructure associated with the desired macroscopic shape is completely unrealistic : it is possible to see that the thick yarns bend while the thin ones stay straight .moreover the variation of direction associated with such unphysical bending is concentrated on a very thin region , which is sensible in a first - gradient theory where no bending stiffness is associated with the yarns . force - displacement curve obtained with the first - gradient model . 
]moreover , it is shown in figure [ fig : first - gradient - load ] that the limits of the considered first - gradient model can be unveiled also with reference to the load - displacement curve which significantly differs from the experimental one .the chosen first - gradient constitutive expression could be replaced with any other first - gradient one ( for example mooney - rivlin or ogden ) , but the internal trend of the fibers could never be recovered due to the fact that , by definition , no second order derivatives of displacement are present in first - gradient models .indeed , such higher order derivatives can be directly related to the curvature of the fibers and it is thus possible to conclude that only a second - gradient constitutive law allows to explicitly account for the ( differential ) bending of the yarns and , hence , for the description of the real deformation patterns of the fibers .the results presented in this last subsection show with no doubt that second - gradient theories are necessary to correctly model the differential bending stiffness of the two sets of yarns in a physically sound way while remaining in a continuum framework .the traditional continuum models based on the implementation of first - gradient energies neglect the description of some essential microstructure - related physical phenomena in woven fibrous composite reinforcements , such as the local bending of the yarns .the insertion of higher order terms in the energy is unavoidable if one wants to correctly describe such kind of materials in the framework of macroscopic continuum theories . in particular , in this paper the results obtained with the use of a suitable second - gradient energy that describe some particular experimental behaviors of woven fabrics were presented , namely : * the in - plane shear deformation * the asymmetry of the macroscopic shape due to the unbalanced bending stiffness of the warp and weft yarns * the slippage of warp yarns with respect to weft ones ( and vice - versa ) .the continuum second - gradient model introduced in this paper must be seen as a reasonable engineering compromise between the easy finite element implementation of the proposed equations and the detail at which the complex behavior of the underlying microstructures is described . 
despite its simplicity , the proposed model is able to * capture the main macroscopic s - shaped deformation mode of the considered unbalanced material * describe how this asymmetry in the macroscopic behavior is related to the differential local bending stiffness of the yarns at the mesoscopic level * include the presence of slipping of the yarns which has as a macroscopic counterpart an overall increment of the width within the specimen further studies should be mainly focused on * a more precise interpretation and computation of the generalized second - gradient internal actions which may be present in the considered generalized continua , * the development of suitable discrete models which are able to quantify the actual slippages of the yarns and relate it to the equivalent elongations proposed here , * the setting up of more comprehensive experimental campaigns aimed at i ) the precise estimation of experimental errors introduced during the tests , ii ) the conception and development of independent tests for the validation of the proposed second - gradient model and iii ) the study of eventual size effects in the considered unbalanced woven fabrics .the authors thank cnrs for the peps project which assured financial support to research presented in this paper , the mesr for the ph.d .scolarship of gabriele barbagallo and the anr for the one of ismael azehaf .dell isola f. , seppecher p. , 1995 .the relationship between edge contact forces , double force and interstitial working allowed by the principle of virtual power , c.r .ii , mec . phys .321 , 303 - 308 ferretti m. , madeo a. , dellisola f. , boisse p. 2014 . modeling the onset of shear boundary layers in fibrous composite reinforcements by second - gradient theory , zeitschrift fr angewandte mathematik und physik , volume 65 , issue 3 , 587 - 612 harrison p. , clifford m.j ., long a.c . , 2004 shear characterisation of viscous woven textile composites , a comparison between picture frame and bias extension experiments . composites science and technology 64:14531465 makradi a. , ahzi s. , garmestani h. , li d.s. , rmond y. , 2010 .statistical continuum theory for the effective conductivity of fiber filled polymer composites : effect of orientation distribution and aspect ratio a mikdam .composites science and technology 70 :3 , 510 - 517 mikdam a. , makradi a. , ahzi s. , garmestani h. , li d.s . , rmond y. , 2009 .effective conductivity in isotropic heterogeneous media using a strong - contrast statistical continuum theory .journal of the mechanics and physics of solids 57:1 , 76 - 86 nosrat - nezami f. , gereke t. , eberdt c. , cherif c. 2014 characterisation of the shear tension coupling of carbon - fibre fabric under controlled membrane tensions for precise simulative predictions of industrial preforming processes , composites : part a 67:131139 peng x.q ., cao j. , chen j. , xue p. , lussier d.s . , liu l. 2004 experimental and numerical analysis on normalization of picture frame tests for composite materials . composites science and technology 64:1121 placidi l. , 2014 , a variational approach for a nonlinear 1-dimensional second - gradient continuum damage model , continuum mechanics and thermodynamics volume 27 , issue 4 - 5 , 623 - 638 , doi : 10.1007/s00161 - 014 - 0338 - 9 rinaldi a. and placidi , l. 2014 , a microscale second - gradient approximation of the damage parameter of quasi - brittle heterogeneous lattices .mech . , 94 : 862877 .doi : 10.1002/zamm.201300028
the classical continuum models used for the woven fabrics do not fully describe the whole set of phenomena that occur during the testing of those materials . this incompleteness is partially due to the absence of energy terms related to some micro - structural properties of the fabric and , in particular , to the bending stiffness of the yarns . to account for the most fundamental microstructure - related deformation mechanisms occurring in unbalanced interlocks , a second - gradient , hyperelastic , initially orthotropic continuum model is proposed . a constitutive expression for the strain energy density is introduced to account for i ) in - plane shear deformations , ii ) highly different bending stiffnesses in the warp and weft directions and iii ) fictive elongations in the warp and weft directions which eventually describe the relative sliding of the yarns . numerical simulations which are able to reproduce the experimental behavior of unbalanced carbon interlocks subjected to a bias extension test are presented . in particular , the proposed model captures the macroscopic asymmetric s - shaped deformation of the specimen , as well as the main features of the associated deformation patterns of the yarns at the mesoscopic scale .
elucidating the collective dynamics of coupled genetic oscillators not only is important for the understanding of the rhythmic phenomena of living organisms , but also has many potential applications in bioengineering areas .so far , many researchers have studied the synchronization in genetic networks from the aspects of experiment , numerical simulation and theoretical analysis .for instance , in , the authors experimentally investigated the synchronization of cellular clock in the suprachiasmatic nucleus ( scn ) ; in , the synchronization are studied in biological networks of identical genetic oscillators ; and in , the synchronization for coupled nonidentical genetic oscillators is investigated .gene regulation is an intrinsically noisy process , which is subject to intracellular and extracellular noise perturbations and environment fluctuations .such cellular noises will undoubtedly affect the dynamics of the networks both quantitatively and qualitatively . in ,the authors numerically studied the cooperative behaviors of a multicell system with noise perturbations . but to our knowledge, the synchronization properties of stochastic genetic networks have not yet been theoretically studied .this paper aims to provide a theoretical result for the synchronization of coupled genetic oscillators with noise perturbations , based on control theory approach .we first provide a general theoretical result for the stochastic synchronization of coupled oscillators .after that , by taking the specific structure of many model genetic oscillators into account , we present a sufficient condition for the stochastic synchronization in terms of linear matrix inequalities ( lmis ) , which are very easy to be verified numerically . to our knowledge , the synchronization of complex oscillator networks with noise perturbations , even not in the biological context , has not yet been fully studied .recently , it was found that many biological networks are complex networks with small - world and scale - free properties .our method is also applicable to genetic oscillator networks with complex topology , directed and weighted couplings . to demonstrate the effectiveness of the theoretical results , we present a simulation example of coupled repressilators . throughout this paper , matrix is defined as an irreducible matrices with zero row sums , whose off - diagonal elements are all non - positive , and the other notations are defined in the appendix a.since we know very little about how the cellular noises act on the genetic networks , a simple way to incorporate random effects is to assume that certain noises randomly perturb the genetic networks in an additive manner .we consider the following networks of coupled genetic oscillators with random noise perturbations where defines the dynamics of individual oscillators , is called the noise intensity vector , belongs to .as we will see in the following analysis , the results hold no matter what is and no matter where it is introduced , so we do not explicitly express the form of here . can also be a function of the variables ( if so , some minor modifications are needed in the following ) . 
is a scalar zero mean gaussian white noise process .recall that the time derivative of a wiener process is a white noise process , hence we can define , where is a scalar wiener process .thus , the above equation can be rewritten as the following stochastic differential equation form : + v_i(t)dw_i(t),\,i=1,\cdots , n.\ ] ] the work can be easily extended to the case that and ^t ] with as a monotonic increasing function of the form ] with as a monotonic decreasing function of the form ] , is a matrix with all zero entries except for , ^t ] of ( a ) .( c ) the evolution of the synchronization error of for .,width=529 ] ) of all the genetic oscillators .( b ) zooming in the range ] of ( a ) .( c ) the evolution of the synchronization error of for .,width=529 ] in definition 1 , it requires that all the genetic oscillators have the same initial conditions , so that .if the genetic oscillators have different initial conditions , , and thus ( 12 ) in the appendix c is replaced by \}\\ & + \textbf{e}(v(x(0 ) ) .\end{array}\ ] ] since in genetic networks , the variables usually represent the concentrations of mrnas , proteins and chemical complexes , which are of ( not so large ) limited values , and so is . for a long time scale ,the last term of the above inequality is usually much smaller than the absolute value of the first term in the right - hand side , and thus the last term can be ignored roughly . in fig .3 , we show the same computations as those in fig .2 except that the oscillators are with different initial values ( randomly in the range ( 0 , 1 ) ) . after a period of evolution ,the network behaviors are similar to those in fig .2 , which verifies our above argument . in other words ,rigorously , according to definition 1 , we need that all the oscillators have the same initial conditions , but practically , for oscillators with different initial conditions , we can obtain almost the same results . for the purpose of comparison , in fig .4 we show the simulation results of a network without noise perturbations . as we can conclude from figs . 2 - 4, the networks with noise perturbation , though ca nt achieve perfect synchronization , can indeed achieve synchronization with small error fluctuation , and the network behaviors are similar to those of networks without noise perturbations .in addition to providing a sufficient condition for the stochastic synchronization , proposition 1 can also be used for designing genetic oscillator networks , which is a byproduct of the main results . from the synthetic biology viewpoint , to minimize the influence of the noises ( on the synchronization ) , we can design genetic oscillator networks according to the following rule : which is obviously from the above theoretical result .this is similar to the synthesis problem in control theory .in this paper , we presented a general theoretical method for analyzing the stochastic synchronization of coupled genetic oscillators based on systems biology approach . by taking the specific structure of genetic systems into account ,a sufficient condition for the stochastic synchronization was derived based on lmi formalism , which can be easily verified numerically .although the method and results are presented for genetic oscillator networks , it is also applicable to other dynamical systems . 
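the stochastic simulations shown in the figures are typically produced with the euler - maruyama scheme discussed later in the paper , i.e. the update \( x_{k+1}=x_{k}+h\,f(x_{k})+v\,\sqrt{h}\,\xi_{k} \) with step \( h \) and i.i.d. standard normal \( \xi_{k} \) . a minimal stand - in implementation is sketched below ; the node dynamics and coupling matrix in the example are toy placeholders and not the repressilator model used in the paper .

```python
import numpy as np

def euler_maruyama(f, x0, v, h, n_steps, seed=0):
    """euler-maruyama integration of dx = f(x) dt + v dW with additive noise.
    x0 : initial state, v : noise intensity (scalar or vector), h : time step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        xi = rng.standard_normal(x.shape)          # discrete-time white noise
        x = x + h * f(x) + v * np.sqrt(h) * xi     # x_{k+1} = x_k + h f(x_k) + v sqrt(h) xi_k
        traj.append(x.copy())
    return np.array(traj)

# toy stand-in: four diffusively coupled scalar nodes (not the repressilator)
G = np.array([[-3., 1., 1., 1.],
              [1., -3., 1., 1.],
              [1., 1., -3., 1.],
              [1., 1., 1., -3.]])                  # coupling matrix with zero row sums
f = lambda x: -x + np.tanh(x) + 0.5 * G @ x        # node dynamics plus diffusive coupling
traj = euler_maruyama(f, x0=[0.1, 0.5, -0.3, 0.8], v=0.02, h=1e-3, n_steps=5000)
print(traj[-1].std())                              # residual spread across the nodes
```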
in coupled genetic oscillator networks , since there is a maximal activity of fully active promoters , it is more realistic to consider a michaelis - menten form of the coupling terms .as argued in , our theoretical method is also applicable to this case . to make the theoretical method more understandable and to avoid unnecessarily complicated notation , we discussed only on some simplified forms of the genetic oscillators , but more general cases regarding this topic can be studied in a similar way . for example : ( i ) the genetic oscillator model ( 5 ) can be generalized to more general case such that , the component of , are functions of , not only of , and and can also be of non - hill form , provided that , where are real vectors .( ii ) biologically , the genetic oscillators are usually nonidentical. we can consider genetic networks with both parametric mismatches and stochastic perturbations in similar ways as those presented in this paper and .( iii ) there are significant time delays in the gene regulation , due to the slow processes of transcription , translation and translocation .our result can be easily extended to the case that there are delays both in the coupling and the individual genetic oscillators . as we know, noises can play both beneficial and harmful roles ( for synchronization ) in biological systems . for the latter case ,the noise is a kind of perturbation , and it is interesting to study the robustness of the synchronization with respect to noise . in this paper , we addressed this question . for the former case ,in , the authors studied the mechanisms of noise - induced synchronization .to simulate the stocahstic differential equaiton , the well - known euler - maruyama scheme is most frequently used , which is also used in this paper . in this scheme ,the numerical trajectory is generated by , where is the time step and is a discrete time gaussian white noise with and . for more details , see e.g. .cl gave the topic , developed theoretical results , designed the numerical experiments , analyzed the data , contributed materials/ analysis tools .cl , lc and ka wrote the paper .this research was supported by grant - in - aid for scientific research on priority areas 17022012 from mext of japan , the fok ying tung education foundation under grant 101064 , the program for new century excellent talents in university , the distinguished youth foundation of sichuan province , and the national natural science foundation of china ( nsfc ) under grant 60502009 .20 yamaguchi s et.al .: synchronization of cellular clocks in the suprachiasmatic nucleus ._ science _ 2003 , 302 : 1408 - 1412 .mcmillen d , kopell n , hasty j , collins jj : synchronizing genetic relaxation oscillators by intercell signalling . _usa _ 2002 , 99 : 679 - 684 .wang r , chen l : synchronizing genetic oscillators by signaling molecules ._ j. biol . rhythms _ 2005 , 20 : 257 - 269 .kuznetsov a , kaern m , kopell n : synchronization in a population of hysteresis - based genetic oscillators ._ siam j. appl ._ 2004 , 65 : 392 - 425 .garcia - ojalvo j , elowitz mb , strogatz sh : modelling a synthetic multicellular clock : repressilators coupled by quorum sensing .usa _ 2004 , 101 : 10955 - 10960 .gonze d , bernard s , waltermann c , kramer a , herzel h : spontaneuous synchronization of coupled circadian oscillators ._ biophys .j. 
_ 2005 , 89 : 120 - 129 .li c , chen l , aihara k : synchronization of coupled nonidentical genetic oscillators ._ 2006 , 3 : 37 - 44 .elowitz mb , levine aj , siggia ed , swain ps : stochastic gene expression in a single cell ._ science _ 2002 , 297 : 1183 - 1186 .paulsson j : summing up the noise in gene networks ._ nature _ 2004 , 427 : 415 - 418 .raser jm , oshea ek : noise in gene expression : origins , consequences , and control . _science _ 2005 , 309 : 2010 - 2013 .blake wj , kaern m , cantor cr and collins jj : noise in eukaryotic gene expression ._ nature _ 2003 , 422 : 633 - 637 .kaern m , elston tc , blake wj and collins jj : stochasticity in gene expression : from theories to phenotypes ._ nature reviews genetics _ 2005 , 6 : 451 - 464 .chen l , wang r , zhou t , aihara k : noise - induced cooperative behavior in a multicell system ._ bioinfor ._ 2005 , 22 : 2722 - 2729 .li c , chen l , aihara k : transient resetting : a novel mechanism for synchrony and its biological examples ._ plos comp ._ 2006 , 2 : e103 .boyd s , el ghaoui l , feron f , balakrishnan v. _ linear matrix inequalities in system and control theory ._ philadelphia : siam .tong ahy et .al . : global mapping of the yeast genetic interaction network . _science _ 2004 , 303 : 808 - 813 .barabsi , al , oltvai zn : network biology : understanding the cell s functional organization ._ nature reviews genetics _ 2004 , 5 : 101 - 114 .kloeden pe , platen e and schurz h : _ numerical solution of sde through computer experiments ._ springer - verlag : berlin ; 1994 .stochastic differential equations : theory and applications_. john wiley & sons : london ; 1974 .xu s , chen t : robust control for the uncertain stochastic systems with state delay ._ ieee trans . auto ._ 2002 , 47 : 2089 - 2094 .wu cw : _ synchronization in coupled chaotic circuits and systems._. world scientific : singapore , 2002 .vidyasagar m : _ nonlinear systems analysis_. englewood cliffs , nj : prentice - hall ; 1993 .li c , chen l , aihara k : stability of genetic networks with sum regulatory logic : lure system and lmi approach ._ ieee trans .circuits and systems - i _ 2006 , 53 : 2451 - 2458 .goodwin bc : oscillatory behavior in enzymatic control processes .enzyme regul ._ 1965 , 3 : 425 - 438 .elowitz mb , leibler s : a synthetic oscillatory network of transcriptional regulators ._ nature _ 2000 , 403 : 335 - 338 .gardner ts , cantor cr , collins jj : construction of a genetic toggle switch in _escherichia coli_. _ nature _ 2000 , 403 : 339 - 342 .goldbeter a : a model for circadian oscillations in the _ drosophila _ period protein ( per ) ._ 1995 , 261 : 319 - 324 .lin w , he y : complete synchronization of the noise - perturbed chua s circuits. _ chaos _ 2005 , 15 : 023705 ._ a. notations _: throughout this paper , denotes the transpose of a square matrix .the notation is used to define a real symmetric positive definite ( negative definite ) matrix . denotes the -dimensional euclidean space ; and denotes the set of all real matrices . in this paper ,if not explicitly stated , matrices are assumed to have compatible dimensions . 
denotes the expectation operator ; is the space of square - integrable vector functions over ; stands for the euclidean vector norm , and stands for the usual norm .the kronecker product of an matrix and a matrix is the matrix defined as .\ ] ] for a general stochastic systems the diffusion operator acting on is defined by .\ ] ] for network ( 1 ) , a natural attempt is to study the mean - square asymptotic synchronization .analogue to the definition of mean - square stability , we can define the mean - square synchronization as follows : _ definition a1 _ : the network ( 1 ) is said to be mean - square synchronous if for every , there is a , such that for .if in addition , for all initial conditions , the network is said to be mean square asymptotically synchronous . in analyzing the synchronization of the network ( 1 ) , we use the lyapunov function , where is the kronecker product , and ^t\in r^{nn\times 1} ] and .since is a matrix with zero row sums and is the same for all , it is easy to show that the last term of is zero .thus if the following conditions hold , we will have = \textbf{e}[lv(x(t))dt]<0 ] , and , and assuming , we have so , the conditions for the mean - square asymptotically synchronization of the network ( 1 ) in this case are <0,\\ \forall y_1,y_2 \in r^n\ , ( y_1\neq y_2)\\ ( u\otimes p)(g\otimes d+i\otimes t)+(g\otimes d+i\otimes t)^t(u\otimes p)+\rho(g\otimes d)^th(g\otimes d)\leq 0,\\ u\otimes p\leq \rho i. \end{array}\ ] ] cases , that is neither are the same for all , nor reduce to zero when , the network is hardly to achieve mean - square asymptotically synchronization . experimental results also show that usually the genetic oscillators can not achieve mean - square synchronization ( see for example ) .so , we argue that the study of mean - square synchronization is unrealistic ( and therefore meaningless ) in genetic networks . in ref . , the authors studied the mean - square asymptotic synchronization of two master - slave coupled chua s circuits .they assume that the noise intensity depends on the difference of the states of the two systems , which is also somewhat unrealistic . to obtain the general synchronization condition ( 3 ) of the network ( 1 ), we also use the lyapunov function . by it s formula , we obtain the stochastic differential . according to definition 1 , we assume that the oscillators have the same initial conditions, thus we can derive .for , we define \}\ ] ] then , it is easy to show that for , \}\\ \equiv & \textbf{e}\{\int_0^ts_1(s)ds\}. \end{array}\ ] ] assuming , and letting , we have \\ & + 2x^t(t)(u\otimes p)(g\otimes d+i\otimes t)x(t)\\ & + \mbox{trace}(v(t)v^t(t)(u\otimes p))\\ & + \frac{\rho}{\gamma}\sum_{i < j}(-u_{ij})(x_i(t)-x_j(t))^t ( x_i(t)-x_j(t))\\ & -\rho\sum_iv_i^t(t)v_i(t)\}\\ \leq & \frac{\gamma}{\rho}\{\sum_{i < j}(-u_{ij})[2(x_i(t)-x_j(t))^t\\ & \cdot p(f(x_i)-f(x_j)-t(x_i - x_j))\\ & + \frac{\rho}{\gamma}(x_i(t)-x_j(t))^t ( x_i(t)-x_j(t))]\\ & + 2x^t(t)(u\otimes p)(g\otimes d+i\otimes t)x(t)\}. 
\end{array}\ ] ] if , we will have , and thus , ( 2 ) follows immediately from ( 3 ) .proposition 1 can be proved by replacing in of ( 3 ) by the dynamics of ( 5 ) , and using the sector conditions ( 6 ) .we have \\ & + \frac{\rho}{\gamma}(y_1(t)-y_2(t))^t(y_1(t)-y_2(t))\\ \leq & 2(y_1(t)-y_2(t))^t p(a - t)(y_1(t)-y_2(t))\\ & + \frac{\rho}{\gamma}(y_1(t)-y_2(t))^t(y_1(t)-y_2(t))\\ & + 2(y_1(t)-y_2(t))^tpb_1(f(y_1(t))-f(y_2(t)))\\ & -2(y_1(t)-y_2(t))^tpb_2(g(y_1(t))-g(y_2(t)))\\ & -2\sum_{l=1}^n\lambda_{1l}(f_l(y_{1l}(t))-f_l(y_{2l}(t)))\\ & \cdot [ ( f_l(y_{1l}(t))-f_l(y_{2l}(t)))-k_1(y_{1l}(t)-y_{2l}(t))]\\ & -2\sum_{l=1}^n\lambda_{2l}(g_l(y_{1l}(t))-g_l(y_{2l}(t)))\\&\cdot [ ( g_l(y_{1l}(t))-g_l(y_{2l}(t)))-k_2(y_{1l}(t)-y_{2l}(t ) ) ] .\end{array}\ ] ] by letting and denoting the first matrix in ( 8) by , we have for all except for , where ^t\in r^{3n\times 1}$ ] .so , the first condition in ( 3 ) is satisfied . substituting , the second inequality in ( 3 ) is equivalent to the second inequality in ( 8) .thus , proposition 1 is proved .
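a version of the key algebraic step used in the proofs above is the identity between the kronecker - product bilinear form \( x^{T}(U\otimes P)\,y \) and the pairwise form \( \sum_{i<j}(-u_{ij})\,(x_{i}-x_{j})^{T}P\,(y_{i}-y_{j}) \) , valid for a symmetric \( U \) with zero row sums . the short numerical check below makes this concrete for an example \( U \) and \( P \) of our choosing .

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 4, 3                                   # 4 oscillators, 3 state variables each

# symmetric matrix with zero row sums and non-positive off-diagonal entries
U = N * np.eye(N) - np.ones((N, N))
P = np.diag([2.0, 1.0, 0.5])                  # an example positive definite P
x = rng.standard_normal(N * n)                # stacked states (x_1, ..., x_N)
y = rng.standard_normal(N * n)                # stacked "drift" terms, arbitrary here

lhs = x @ np.kron(U, P) @ y                   # Kronecker-product bilinear form

X, Y = x.reshape(N, n), y.reshape(N, n)
rhs = sum(-U[i, j] * (X[i] - X[j]) @ P @ (Y[i] - Y[j])
          for i in range(N) for j in range(i + 1, N))
print(np.isclose(lhs, rhs))                   # True: the two forms coincide
```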
* background * : the study of synchronization among genetic oscillators is essential for the understanding of the rhythmic phenomena of living organisms at both molecular and cellular levels . genetic networks are intrinsically noisy due to natural random intra- and inter - cellular fluctuations . therefore , it is important to study the effects of noise perturbations on the synchronous dynamics of genetic oscillators . from the synthetic biology viewpoint , it is also important to implement biological systems that minimize the negative influence of such perturbations . * results * : in this paper , based on a systems biology approach , we provide a general theoretical result on the synchronization of genetic oscillators with stochastic perturbations . by exploiting the specific properties of many genetic oscillator models , we provide an easily verified sufficient condition for the stochastic synchronization of coupled genetic oscillators , based on the lure system approach in control theory . a design principle for minimizing the influence of noise is also presented . to demonstrate the effectiveness of our theoretical results , a population of coupled repressilators is adopted as a numerical example . * conclusion * : in summary , we present an efficient theoretical method for analyzing the synchronization of genetic oscillator networks , which is helpful for understanding and testing the synchronization phenomena in biological organisms . besides , the results are also applicable to general oscillator networks . _ keywords : _ genetic networks , noise , lyapunov function , lure system , repressilator
quantum correlations have become a ubiquitous resource in short and long - range communication using photons as carriers of quantum information ( qubits ) .the most significant developments have been realised using polarisation as the degree of freedom ( dof ) of choice ; robust against atmospheric perturbations , and can easily be controlled with wave - plates and polarising elements .polarisation - based quantum communication is , however , limited to , and requires the sender and receiver to share a frame of reference .quantum protocols allows for more information to be packed onto single photons .the use of spatial modes of light to realise high dimensions has seen many notable advances , with orbital angular momentum ( oam ) being the preferred dof .oam forms a convenient basis , is easy to measure with phase only holograms , and is conserved down to the single photon level . however , it is worth noting that despite its potential , entanglement based on spatial modes poses challenges in its implementation . in both free - space and optical fibres , modal crosstalk and the concomitant decay of entanglementare the main challenges . in free - space quantum channels ,spatial modes are adversely affected by atmospheric turbulence , which reduces the probability of detecting photons , while the induced scattering among spatial modes leads to a loss of entanglement in the final state measured in a given subspace . to circumvent the deleterious effects of turbulence , as well as the need for a shared reference frame ,hybrid oam and polarisation qubit states have been put forward as possible carriers for more robust communication .these hybrid states are rotation invariant , and have been used to demonstrate alignment - free , robust quantum communication , where qubits are encoded the two dofs that are entangled . to date , channels with quantum states have been demonstrated over 144 km with polarisation , over 210 m in a controlled environment to minimise turbulence , recently over 3 km across vienna .fibre channels with two dimensional entangled spatial modes languish at the centimetre scale , and no study to date has managed to report on the transport of high dimensional entanglement in any practical sense , in either free - space or fibre . to advancefurther requires characterisation schemes that allows one to gain information on the channel , predict the effects of perturbations , and implement error - correction in real - time .process tomography is an essential tool to obtain knowledge about the action of a channel in general , and its effects on the propagation of entangled states in particular . at the single photon level , this characterisation is difficult to do , especially with entangled states : one needs the quantum link to work before it can be characterised , but having it characterised would be immensely helpful in getting it to work .thus the process tomography of quantum channels in which ( entangled ) spatial modes are used remains topical but challenging .here we demonstrate a simple approach to characterise a quantum channel using classical light .we exploit the property of vector beams , so - called classically entangled light , to show that .this proves that beyond the mathematical resemblance to , classical entanglement does hold physical significance . as an example, we demonstrate that the transport and decay are identical in a channel perturbed by atmospheric turbulence . 
a full characterisation of quantum channels can thus be obtained via state tomography of classically entangled light beams . this new technique determines the action of turbulence , and other channels , on pairs of spatial modes for quantum and classical states of light , and replaces the usual process tomography in both cases . finally , we demonstrate the applicability of the tools in a proof - of - principle communication experiment employing classically entangled states , showing that the characterisation of the channel allows for information recovery and robust data transfer . * concept * . consider a typical scenario where quantum information is shared between two parties ( alice and bob ) , as shown in fig . [ fig : concept](a ) . alice generates two photons entangled in their spatial dof , which we will take to be the oam dof . her bi - photon state can then be written in the oam basis ( eq . [ eq : vector beam_quantum ] ) . she sends one photon to bob , which passes through the channel ; in this study we will consider a free - space link through a turbulent atmosphere as our example , but the concept is not restricted to this particular case and can be adapted to different channels . it has been shown , theoretically and experimentally , that perturbations due to such a channel negatively affect the correlations between the photons , thereby decreasing the efficiency and security of the quantum communication link . in this scenario , photon a experiences the channel while photon b does not . the resulting state of the photon pair carries the complete information about the channel . this is due to the so - called choi - jamiolkowski isomorphism , and could be experimentally verified by teleporting states of single photons using the photon pair as an entanglement resource . the resulting teleportation channel would reproduce the same state changes as the turbulence channel . we claim that this quantum scenario has a classical equivalent , depicted in fig . [ fig : concept](b ) . here alice prepares a classical beam that is non - separable in oam and polarisation ( eq . [ eq : vector beam_classical ] ) , sending the entire beam to bob through the same channel . in this formalism , a and b now refer to the two degrees of freedom in the non - separable light field , and not to two photons in the entangled system . but polarisation is not affected by turbulence , so the degree of freedom that experiences the deleterious effects of the channel is that of the spatial mode ( a ) . in both cases only the oam states are affected . the equivalence of the quantum and classical scenarios , together with the fact that the outgoing state in the quantum case contains the full information about the channel , strongly suggests that such non - separable states of light may be used to characterise the effect of the channel on the quantum state , an idea which we later validate theoretically and experimentally .
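a compact numerical illustration of this argument can be given in a two - dimensional subspace . the python sketch below applies a channel operator to only one part of a maximally entangled two - mode state and reads the channel matrix back off the output coefficients ; the 2x2 matrix t is an arbitrary illustrative choice , not a measured channel .

import numpy as np

# illustrative one - sided channel acting only on the part sent through the medium ;
# t is an assumed 2x2 matrix , standing in for the turbulence operator
t = np.array([[0.9, 0.3], [0.1, 0.8]], dtype=complex)

# maximally entangled two - mode state written as a 2x2 coefficient matrix c ,
# |psi> = sum_ij c[i, j] |i>_a |j>_b , with c = identity / sqrt(2)
c_in = np.eye(2, dtype=complex) / np.sqrt(2)

# one - sided evolution : only subsystem a sees the perturbation , so the
# coefficient matrix transforms as c_out = t @ c_in
c_out = t @ c_in

# a state tomography of the output therefore returns the channel matrix itself
# ( up to the 1/sqrt(2) normalisation ) , which is the content of the
# choi - jamiolkowski argument used in the text
print(np.allclose(np.sqrt(2) * c_out, t))  # True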
+ * classically entangled states . * our hybrid encoding space , described by the higher - order poincaré sphere , is formed from the tensor product of the infinite dimensional oam and the two - dimensional polarisation hilbert spaces . we are interested in states of the form given in eqs . [ eq : vector beam_quantum ] and [ eq : vector beam_classical ] , with normalised superposition coefficients . equation [ eq : vector beam_quantum ] defines a two - photon entanglement system expressed in the oam basis , with each photon carrying oam quanta . equation [ eq : vector beam_classical ] defines a vector vortex mode , which here will be the classical equivalent to the system defined in eq . [ eq : vector beam_quantum ] . the basis states correspond to the left and right circular polarisation states , respectively . among the many tools used to evaluate the degree of entanglement we choose the concurrence as our measure , as it has been shown to be effective in quantifying the degree of quantum and classical entanglement . for qubit pairs defined as in eqs . [ eq : vector beam_quantum ] and [ eq : vector beam_classical ] , the concurrence takes a particularly simple form ( see methods ) . we consider an oam state passing through turbulence , which broadens the oam spectrum . here the state evolution is a unitary transformation that maps pure states onto pure states ( see supplementary information ) , expanding the state over oam modes with turbulence - dependent modal weightings . thus , a given input vector vortex mode propagating through a turbulent channel will be transformed as in eq . [ eq : outstateturb ] ( note that we omit the subscripts a and b for simplicity of notation ) . it follows that the matrix elements of the channel operator , m , typically determined by process tomography , can instead be obtained from a state tomography of the output state . + * decay of classical entanglement in turbulence . * the entanglement of the final state ( eq . [ eq : outstateturb ] ) is given by the concurrence , which can be expressed in terms of the concurrence of the input state ( eq . [ eq : concinturb ] ) . for example , in weak turbulence eq . [ eq : concinturb ] reduces to the statement that the initial and final entanglement are approximately the same , i.e. , the photons remain entangled to each other . conversely , in relatively strong turbulence the concurrence vanishes . the relation in eq . [ eq : concinturb ] has been derived previously for entangled photons . the broadening of the oam spectrum described by eq . [ eq : state expansion ] leads to inter - modal coupling among vector modes , resulting in a loss of entanglement with increasingly strong perturbations . within a given oam subspace , this coupling can be analysed using the four vector modes in this space . these modes are orthogonal and constitute a basis , analogous to the bell basis , that can be used to encode information . in optical waveguide theory , these modes are widely known as optical fibre modes . by way of example , consider a vector vortex mode propagating in a strong turbulence regime . in the special case of strong coupling , the final state reduces to a separable ( not entangled ) state , i.e. , the spatial and polarisation dofs can be factorised . equivalently in the quantum case , perturbations incurred by one of the two photons ( modal dispersion and projection onto a subspace ) will transform an initial entangled state into a final factorisable ( separable ) state . + * classical and quantum experiments . * here we demonstrate experimentally the equivalence between the evolution of classical and quantum entanglement in turbulence . our classical experimental setup is illustrated in fig . [ fig : setup ] and comprises creation , propagation and detection stages .
in the creation step , the vector vortex mode is prepared either directly from a `` spiral laser '' , or by using wave plates and q - plates to transform a linearly polarised gaussian beam into a vector vortex mode . we passed our vector mode through a turbulent channel ( turbulence phase screen ) that was made to vary with time , and analysed the output beam with two detection systems : a vector mode sorter to uniquely detect each of the maximally entangled modes , thus evaluating the amount of inter - modal coupling , and a tomography detector to evaluate the concurrence . in the quantum experiment , we used spontaneous parametric down conversion ( spdc ) to produce two photons entangled in oam , of which one was sent through a turbulent channel ( see methods ) . a state tomography of the two photons was performed to determine the evolution of the concurrence as a function of the turbulence strength . to account for fluctuations in the number of photons , we used an over - complete set of measurements to reconstruct the density matrix ( see methods ) . in fig . [ fig : evo entangl](a ) we show the measured dependence of the concurrence of our quantum and classical states as a function of the degree of turbulence in the channel , together with the theoretical prediction . we used a vector mode and a maximally entangled oam state as our equivalent classical and quantum systems , respectively . the experimental results for both the classical and quantum cases are in excellent agreement with the theory . hence , the agreement between the classical and quantum experiments validates the equivalence of the quantum and classical models depicted in fig . [ fig : concept ] . the inset in fig . [ fig : evo entangl](a ) shows the variation in the measured fidelity of a vector mode in turbulence , computed with respect to a maximally entangled state . furthermore , by varying the concurrence of the input vector mode in fig . [ fig : evo entangl](b ) , we experimentally confirmed , for the first time , the existence of the choi - jamiolkowski isomorphism for spatial modes as summarised in eq . [ eq : concinturb ] , i.e. , that there is a linear relationship between the input and output concurrences . the observed decay of entanglement with increasing turbulence , as predicted by eq . [ concsr ] , is explained by examining the effects of turbulence on the mode spectrum : for example , in the classical case , with increasing turbulence strength , the scattering among vector vortex modes is increased , as seen in figs . [ fig : evo crosstalk ] ( a)-(e ) . this is due to the spreading of the oam spectrum , as shown in figs . [ fig : evo crosstalk](f ) and [ fig : evo crosstalk](g ) , which leads to crosstalk among vector vortex modes , as described by eq . [ eq : outstateturb ] ; a similar mechanism applies in the quantum case . having characterised the impact of the channel on the states to be propagated , we used the setup to show a proof - of - principle data transmission over the channel . it has been recently shown that the non - separability of vector vortex modes can be used to encode two bits of information simultaneously on the entangled dofs . in our scheme , we ( de)multiplexed the four maximally entangled vector modes as shown in fig . [ fig : setup](c ) . this allowed us to perform a four - bit encoding scheme based on these states . the encoded image in fig . [ fig : maxwell](a ) was transmitted through a turbulent channel with a fixed average turbulence strength .
without any correction , the received image shows significant amounts of distortion , resulting in a 64.2% correlation coefficient with respect to the encoded image [ fig . [ fig : maxwell](c ) ] . this is due to the intermodal coupling corrupting the encoded bit sequences , resulting in state errors measured at the receiver s end . a practical advantage of studying the decoherence induced by a channel is the ability to mitigate perturbations through pre- or post - processing of the data . after propagation through a perturbing medium , the input and output states are related through the channel operator m of the system , which contains all the information about the crosstalk induced by the medium . this matrix is graphically represented in fig . [ fig : maxwell](b ) . thus the perturbation can be cancelled by correcting the final state with the inverse of m . using this correction technique , we obtained an image with an increased correlation coefficient of 98.9% , as shown in fig . [ fig : maxwell](d ) . the characterisation of quantum channels is a _ sine qua non _ for the implementation of practical quantum communication protocols . perturbations from the environment constitute a hindrance in realising quantum links , particularly when using entanglement as a resource . here we have described , as an example , the effects of atmospheric turbulence on entangled spatial modes . using a classically equivalent system , we showed that the quantum channel can be characterised with bright classical sources . using vector vortex modes , so - called classically entangled light , we have proved that the state evolution of two entangled dofs is identical to that of two photons entangled in one dof , when propagating through atmospheric turbulence . as a corollary , in both the quantum and classical pictures , our models show an identical decay in entanglement correlation with increasing perturbations . this provides new insights into the notion of entanglement at the classical level ; that is , beyond the mathematical non - separability of the dofs , nature can not distinguish between classical and quantum entanglement in so far as characterising the channel is concerned . furthermore , our work represents , to the best of our knowledge , the first side - by - side comparison of classical and quantum entanglement . we have shown this for atmospheric turbulence , but the approach can easily be generalised to other perturbations . the decay of entanglement observed in our results shows that vector vortex modes are not resilient to atmospheric turbulence . this is further supported by the decay in fidelity of the final state after turbulence , measured with respect to a maximally entangled state ( see methods ) , as shown in the inset of fig . [ fig : evo entangl](a ) . the decay of entanglement and fidelity we found is not in contradiction with a method to recover qubits encoded in two rotationally symmetric vector states that was tested against the influence of turbulence . this method uses a filter ( post - selection ) to eliminate all spatial crosstalk components generated by weak turbulence , resulting in the loss of photons . however , this post - selection approach does not provide a measure of resilience to turbulence as all modes can be recovered using this technique . we confirmed that the channel s impact on the quantum state can be determined from a single measurement of the maximally entangled state .
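to make the correction step concrete , the following python sketch undoes the crosstalk of a known channel matrix by applying its inverse to the received modal amplitudes . the 4x4 matrix m , the random symbol stream and the simple argmax detector are illustrative assumptions ; in the experiment m is reconstructed from the tomography described above .

import numpy as np

# assumed 4x4 channel ( crosstalk ) matrix over the four vector modes
rng = np.random.default_rng(0)
m = np.eye(4) + 0.5 * rng.standard_normal((4, 4))

# encoded symbols : each transmitted symbol is one of the four basis modes
sent = rng.integers(0, 4, size=20)
tx = np.eye(4)[sent].T                 # modal amplitude vectors , one column per symbol

rx = m @ tx                            # propagation through the perturbing channel
naive = np.argmax(np.abs(rx), axis=0)  # detection without any correction

corrected = np.linalg.inv(m) @ rx      # cancel the crosstalk with the inverse of m
decoded = np.argmax(np.abs(corrected), axis=0)

# fraction of symbols recovered before and after the correction
print((naive == sent).mean(), (decoded == sent).mean())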
unlike in quantum optics experiments with entangled bi - photons in their spatial modes , here the degree of entanglement ( non - separability ) of our classical light may easily be adjusted with simple polarisation optics . this allows a sender ( receiver ) at the input ( output ) to predict the loss in correlation as a result of perturbation induced by the channel on quantum states with an arbitrary degree of entanglement . this is a consequence of the choi - jamiolkowski isomorphism which , for the first time , has been demonstrated with spatial modes . this may pave the way for forward - error - correction and identification of an eavesdropper in quantum key distribution protocols through a noisy channel . using the tools we presented to characterise the channel , we demonstrated a simple prepare - and - measure protocol whereby data was encoded on classically entangled states , sent through the turbulent channel , decoded and corrected using the channel matrix . although this demonstration serves as a proof - of - principle , the techniques presented in this work can be applied to quantum error - correction . the characterisation of quantum channels through process tomography requires multiple measurements to be performed on the state over extended lengths of time ; this is what gives rise to mixed states despite the unitary behaviour of the state evolution . using classical light , the process tomography measurements can then be done simultaneously ( since there are many photons ) , allowing for real - time error - correction . in conclusion , we have proposed and demonstrated a classical approach to study the transport of a quantum entangled system in a perturbing channel . using free - space communication in a turbulent atmosphere as an example , we claimed and proved the equivalence of classical and quantum entanglement when characterising a channel . this equivalence was demonstrated in a direct comparison of the decay in the degree of entanglement for a bright classical vector beam and entangled photons . in this paradigm , we showed that the process tomography of quantum channels , requiring multiple measurements on the quantum state , can be replaced by a state tomography on the classical beam . this process tomography of a communication channel using classical light can be done in real - time , and implemented in quantum links for real - time error - correction . lastly , we have proved , again using classical light , the choi - jamiolkowski isomorphism : given a channel , the decay in entanglement of a quantum state can be predicted from that of a maximally entangled state , through a linear relationship . through our theoretical analysis and experimental investigations , we have proved that classical entanglement is more than a mathematical non - separability ; it has physical properties which , in some cases , nature itself can not differentiate from those of its quantum counterpart . * generation and detection of vector vortex beams using a q - plate * .
the generation of vector vortex beams has been made convenient with the invention of q - plates . these are phase plates with locally varying birefringence that gives rise to a coupling between sam and oam through the pancharatnam - berry geometric phase . the encoding of entangled qubits with a q - plate is summarised by the transformation rules of eqs . ( [ eq : qplate1 ] ) and ( [ eq : qplate2 ] ) , which depend on the topological charge of the q - plate . the four vector vortex modes of a given subspace are non - separable superpositions of qubit states generated as in ( [ eq : qplate1 ] ) and ( [ eq : qplate2 ] ) , with the two input components given a relative phase shift . by transforming a linearly polarised gaussian beam , the first pair of modes is generated with a q - plate of a given topological charge , while the second pair is generated with one having a different topological charge . + in addition to their encoding function , q - plates can also be used as decoders . this is achieved by simply reversing the generation process outlined in ( [ eq : qplate1 ] ) and ( [ eq : qplate2 ] ) . thus , one recovers the information encoded when the encoding and decoding q - plates have identical topological charges . this technique is in principle identical to the modal decomposition of scalar modes with slms ( see supplementary information for further details ) : a mode is directed onto the slm , where an inner product of the incident field with a matched filter hologram is performed , and the on - axis intensity is measured by a camera situated after a fourier lens . when the input mode matched the filter , a bright on - axis intensity was observed ; otherwise a zero on - axis intensity was measured . thus , the modal content of the state exiting the turbulence plate was efficiently measured . + * concurrence of entangled qubit pairs . * in general , the concurrence of an arbitrary qubit state ( pure or mixed ) can be computed from its density matrix , using the square roots of the eigenvalues , taken in decreasing order , of the matrix formed from the density matrix and its spin - flipped counterpart , where the spin flip is implemented with the pauli matrix σ_y .
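as a short sketch of this computation , the python routine below implements the spin - flip construction just described and checks it on an illustrative pure state for which the concurrence is known in closed form ; the test state and its parameter are assumptions made only for the check .

import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
yy = np.kron(sy, sy)                      # two - qubit spin - flip operator

def concurrence(rho):
    # wootters - type concurrence of a two - qubit density matrix rho
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sqrt(np.clip(np.linalg.eigvals(rho @ rho_tilde).real, 0.0, None))
    lam = np.sort(lam)[::-1]              # eigenvalue square roots , decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# test on the pure state cos(t)|00> + sin(t)|11> , for which c = |sin 2t|
t = 0.3
psi = np.zeros(4, dtype=complex)
psi[0], psi[3] = np.cos(t), np.sin(t)
rho = np.outer(psi, psi.conj())
print(concurrence(rho), abs(np.sin(2 * t)))   # both ~ 0.565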
consider an arbitrary vector mode generated by passing a linearly polarised gaussian field through a q - plate , where the position vector is expressed in standard polar coordinates , and the unit vector represents the polarisation direction . the gaussian field is transformed by the q - plate , which can be represented by a hermitian operator parametrised by the topological charge of the q - plate . subsequently , passing through a second q - plate and measuring the linear polarisation state results in the output of eq . [ eq : innerprod ] . by projecting eq . [ eq : innerprod ] into position space , using a two - dimensional normalised position vector , we obtain the corresponding field . using a lens , the field observed in the fourier plane is then obtained , and from the orthogonality of oam modes and polarisation the on - axis intensity in the fourier plane follows . this means that measuring the on - axis intensity will yield a non - zero value if and only if the two q - plates have the same topological charge , and the polarisation measured is that of the initial field . the strength of a turbulent medium can be characterised by the strehl ratio , defined as the ratio of the on - axis intensities of the aberrated and non - aberrated gaussian modes . this is applicable for both the weak and strong turbulence regimes , where a value of 1 represents no turbulence and 0 represents a highly turbulent medium . figure [ fig : sr setup ] illustrates the detrimental effects of different turbulence strengths on a vector vortex mode . the kolmogorov power spectrum is used , where the inner and outer scales of the turbulence define the limits within which the power spectrum describes an isotropic and homogeneous atmosphere . the turbulence phase screens are generated by fourier transforming the product of a random function with the power spectrum above . using a slm , we digitally generated turbulence phase screens and obtained the calibration curve illustrated in fig . [ fig : sr calibration ] . the effect of a turbulent medium on the strehl ratio , assuming weak irradiance fluctuations , is given by eq . ( [ eq : sr1 ] ) , in which sr is the strehl ratio and the remaining quantities are the diameter of the receiving aperture and the fried parameter . although eq . ( [ eq : sr1 ] ) is valid for weak irradiance fluctuations , it has not been derived for a single phase screen scenario , which is the case for the current experimental setup . one can compute the strehl ratio for a single phase screen using the quadratic structure function approximation . the resulting expression is similar to eq . ( [ eq : sr1 ] ) .
here the beam radius of the input beam enters as an additional parameter . we will assume that , without the quadratic structure function approximation , the relationship retains the same form ( eq . ( [ eq : sr3 ] ) ) . the concurrence of a photon pair that has an initial entangled state ( bell state ) , and where only one photon propagates through single phase screen turbulence , evolves according to eq . ( [ conc ] ) . using eq . ( [ eq : sr3 ] ) , the turbulence strength can be expressed in terms of the strehl ratio , so that eq . ( [ conc ] ) becomes a function of the strehl ratio alone . for a hybrid oam - polarisation qubit state , the concurrence is computed in the same way ; for a vector vortex mode defined as in eq . [ eq : vector beam_classical ] , it reduces to a simple expression in the superposition coefficients . recall the expression derived for the concurrence of the input and output states ( eq . [ eq : coutcin1 ] ) . we can extend our analysis by imposing conditions on the modal weightings . we want a symmetric distribution with a single central maximum . for the sake of the argument , we will assume a gaussian - like discrete function of the oam index , with a width that depends on turbulence . we can rewrite eq . [ eq : coutcin1 ] accordingly . if the two oam components of the mode coincide , the concurrence vanishes , which is explained by the fact that the input beam is then not a vector beam . in the case of widely separated oam components , the concurrence of the output state will be equal to that of the input state . this is because the oam modes are so far apart that the crosstalk resulting from the turbulence will not affect the measured oam modes , as illustrated in fig . [ fig : gaussianturb ] . if the width of the distribution tends to zero , the output concurrence again equals the input concurrence , as this implies that the initial state is not perturbed ( no turbulence ) .
* high - dimensional entanglement with spatial modes of light promises increased security and information capacity over quantum channels . unfortunately , entanglement decays due to perturbations , and can not be corrected without a tomography of the channel . paradoxically , the channel tomography itself is not possible without a working link . here we overcome this problem with a robust approach to characterising quantum channels by means of classical light . using free - space communication in a turbulent atmosphere as an example , we show that the state evolution of classically entangled degrees of freedom is equivalent to that of quantum entangled photons , thus providing new physical insights into the notion of classical entanglement . this paves the way for real - time quantum error - correction in short and long haul optical communication , in both free - space and fibre . *
the frequency dependency of the phase velocity and wave attenuation originating from the wave - medium interaction is mathematically expressed by the dispersion relation , where and are the wavenumber and angular frequency of the wave respectively . since it is possible to extract the viscoelastic parameters of a material from its dispersion curve , the investigation to obtain an accurate dispersion relation becomes pivotal for many applications in the field of acoustics , medical imaging , seismology and geophysics .one of the methods which is followed to model dispersion assumes that the material possesses memory , which can be classified into two types ; memory in time and memory in space . in materials with time nonlocality or simply memory , the effect at a given point in space at a given time is dependent on the causes at the same point for all preceding times . on the other hand , in materials with space nonlocality or spatial memory , the effect at a given point in space at a given time is dependent on the causes at all points in space . in this paper, we differentiate the dispersion arising due to time nonlocality and spatial nonlocality by referring to them as `` temporal dispersion '' and `` spatial dispersion '' respectively . for an elastic material with temporal memory ,physically it implies that the material remembers its past deformations .such materials do not obey the simple hookean elasticity , but rather they fall into the category of viscoelasticity .the classical viscoelastic models , e.g. the maxwell , kelvin - voigt and zener constitutive stress - strain models , lead to temporal dispersion and attenuation of waves in materials .the basic building blocks of the models are elastic springs and viscous dashpots , which are arranged in series and ( or ) parallel combinations to depict the interplay between the elastic and viscous properties of a material .the fractional order counterpart of the classical viscoelastic models which are studied using fractional calculus , have been found even more useful in investigating the dispersive properties of a wide range of complex materials ( e.g. biological tissues , polymers and earth sediments ) . for the last fifty years , it has been well established that under the action of a deforming stress , the constituent points of an elastic material display range - dependent nonlocal interaction between themselves .consequently , the applied stress is not confined to a local point , but rather distributed to all the interacting points of the material .a wave travelling in such a nonlocal material undergoes spatial dispersion , in addition to the regular temporal dispersion .it is essential to emphasize that temporal dispersion manifests itself as a result of the localized wave - material interaction which is primarily dependent on the frequency .but since space nonlocality in a material is dictated by the wavelength of the traversing wave , the resulting spatial dispersion is determined by the wavenumber . 
lately , spatial nonlocal operators of the form of fractional laplacians have been used to model dispersive properties of acoustic media .however , the study of spatial dispersion has not received as much attention as the temporal dispersion .the main factor behind this is the satisfactory explanation of dispersive properties of most materials by temporal dispersion , which is modelled under the framework of classical theory of elasticity .however since the classical theory is based on the principles of continuum mechanics , it assumes localized stress - strain in the material , and is therefore not applicable in problems where nonlocal interactions play a dominant role .the primary motivation for our work is that much of the current research ( see , e.g. ) has focused more on the mechanical properties of nonlocal elastic materials than its dispersive properties .second , the results from such studies may even influence how new materials can be artificially engineered .besides , we will also illustrate how the framework of tempered fractional calculus can be used to overcome the boundary value problems which are often encountered when nonlocal elasticity is modelled using fractional calculus .the purpose of this paper is therefore to modify the nonlocal elasticity in an infinitely long one - dimensional bar with tempering , and then investigate the spatial dispersion of elastic waves in the bar .further , we show how different numerical techniques can be used to solve the spatial dispersion relation which is often analytically difficult .the rest of the paper is organized as follows . in section [ sec:2 ] ,the suitability of fractional calculus to study nonlocal elasticity problems is analysed in the light of some recent studies . in section [ sec:3 ] ,we spatially temper the conventional power - law attenuation kernel of the nonlocal elastic bar and examine its significance . then in section [ sec:4 ], we utilize the mathematical framework of tempered fractional calculus to derive the spatial dispersion relation from the constitutive equation of the bar . in section [ sec:5 ] ,we employ numerical techniques to solve the dispersion equation for complex wavenumbers and subsequently obtain the dispersion plots .further , we compare temporal dispersion and spatial dispersion by pointing out the resemblance of the phase velocity dispersion curve obtained from the space - fractional tempered nonlocal elastic model with that from the time - fractional zener model. moreover , the spatial dispersion relation anticipates an unusual attenuation behaviour for the elastic wave propagation in tempered nonlocal elastic bar , which we justify by physical arguments . finally in section [ sec:6 ] , the potential implications of this work are used to draw some conclusions .nonlocal continuum field theories describe the physics of materials whose behaviour at a given local point is determined by the state of all points of the body ( see , ) .the theory is built upon two assumptions .first , the molecular interactions in a material are inherently nonlocal .second , the theory assumes the energy balance law to be valid globally for the entire body .the comprehensive and robust formulation of nonlocal theory is ascertained by its wide range of applications . lately, nonlocal theory has also found applications in nanomechanics which integrates solid mechanics with atomistic simulations to analyse the strength and dispersive properties of carbon nanotubes ( cnts ) , . 
according to the kröner - eringen model of nonlocal elasticity , the total strain response of a material to a deforming stress is the sum of the local strain and the nonlocal strain . the nonlocal strain is given by the convolution integral of the strain in the neighbourhood with the attenuation kernel , where the argument of the kernel is the euclidean distance from the local point . for an infinitely long , one - dimensional , isotropic , linear nonlocal elastic bar , the constitutive equation then expresses the stress as the elastic modulus at zero frequency multiplied by the sum of the local strain and the nonlocal convolution term , the latter scaled by a material constant that represents the strength of nonlocality in the material . as seen , in the limit of a vanishing nonlocality constant , the equation reduces to the case of hooke s law . it is reasonable to assume the bar to be one - dimensional provided the bar thickness on both sides is much smaller than the wavelength of the propagating wave . besides , an infinite length of the bar ensures complete attenuation of the wave by the time it reaches its free ends , which makes the wave propagation free from reflections and the resulting complexities of boundary value problems . an illustration of the equivalent mechanical model of nonlocal elasticity can be found in the literature , where points of a material are shown connected to each other by means of springs of different stiffnesses . although some suitable forms of the attenuation kernel have been suggested in the relevant literature , recently there has been a growing interest in the power - law kernel , which decays as the euclidean distance raised to the power -α , with the fractional order α lying between zero and one . this conventional kernel is , however , singular at the local point . we therefore temper it spatially , introducing a tempering parameter λ that suppresses the kernel exponentially at large distances . as illustrated in fig . [fig : attenuation ] , the tempered attenuation kernel overcomes the singularity and ensures finite stress - strain at the local point . besides , the behaviour of the tempered kernel in the far nonlocal region is almost identical to that of the conventional kernel for all values of α . in one limiting case of the kernel parameters , the applied stress is felt all over the bar with little attenuation ; in the opposite limit , the attenuation kernel approaches the dirac - delta form , implying that the stress is completely attenuated in the immediate neighbourhood of the local point . as seen , the tempered attenuation kernel does not decay monotonically from the local point and therefore seems contrary to the behaviour expected from an ideal attenuation kernel . however we stress that , since the two peaks arising due to tempering lie in the immediate vicinity of the local point , the behaviour of the material is not affected significantly . besides , this undesired tempering effect can easily be compensated by giving appropriate weighting to the material constant .
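to make the constitutive picture concrete , the following python sketch evaluates the stress of the one - dimensional bar as the local term plus the kernel - weighted convolution of the strain , with the kernel passed in as a function . the grid , the strain profile , the constants and the small offset used to step over the singular point are all illustrative assumptions , not values taken from the paper .

import numpy as np

def nonlocal_stress(x, strain, kernel, modulus=1.0, zeta=0.5):
    # stress(x) = modulus * [ strain(x) + zeta * integral kernel(|x - y|) strain(y) dy ] ,
    # evaluated by trapezoidal quadrature ; zeta plays the role of the material
    # constant measuring the strength of nonlocality
    dx = x[1] - x[0]
    sigma = np.empty_like(strain)
    for i, xi in enumerate(x):
        g = kernel(np.abs(xi - x))
        sigma[i] = modulus * (strain[i] + zeta * np.trapz(g * strain, dx=dx))
    return sigma

alpha, lam, eps = 0.5, 1.0, 1e-3          # fractional order , tempering parameter , offset
conventional = lambda y: (y + eps) ** (-alpha)                   # power - law kernel
tempered = lambda y: np.exp(-lam * y) * (y + eps) ** (-alpha)    # exponentially tempered variant

x = np.linspace(-5.0, 5.0, 801)
strain = np.exp(-x ** 2)                   # a localised strain field
print(nonlocal_stress(x, strain, conventional)[400],
      nonlocal_stress(x, strain, tempered)[400])   # stress at the local point x = 0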
replacing the conventional power - law attenuation kernel in the constitutive equation by its tempered version , we then obtain the constitutive relation of the tempered nonlocal elastic bar . [ figure : comparison of the conventional attenuation kernel ( top pane ) with the tempered attenuation kernel ( bottom pane ) ; the local point corresponds to the origin , and the four curves correspond to different values of the fractional order α ( magenta solid , blue dashed , black dotted , red thicker solid ) . ] since in this section we will transform the constitutive equation to the fourier domain , it is essential to first introduce the mathematical framework of dispersion modelling . for a unit amplitude , one - dimensional propagating plane wave , the displacement is given as u\left(x , t\right)=e^{i\left(kx-\omega t\right)} , where x is space and t is time . in a loss - less medium , the phase velocity is \omega / k , the group velocity is d\omega / dk , and the two are equal . besides , the wavenumber is a real quantity , which due to its directional attributes is also called the wave propagation vector . as long as the total energy is conserved during the wave - material interaction , the convention adopted to model wave attenuation is to assume the angular frequency as real and the wavenumber as complex . however , there are also some examples where the opposite convention , of a complex frequency and a real wavenumber , has been made ( see , e.g. , dissipation of thermoelastic waves in solids ) . in this work , we assume the wave propagation vector to be complex , written as the sum of a real part ( the wave velocity vector ) and an imaginary part ( the wave attenuation vector ) . here , instead of following the common practice for denoting the real and imaginary parts of the wavenumber , we choose to comply with notations from the literature and thereby avoid ambiguities with the fractional exponent α of the attenuation kernel . moreover , the imaginary part of the wavenumber , which we denote as `` wave attenuation '' , should not be confused with the attenuation kernel . substituting the complex wavenumber into the plane wave , the displacement of an attenuating wave in a dispersive medium acquires an exponentially decaying envelope , with the phase and group velocities now defined through the real part of the wavenumber . both the phase velocity and the wave attenuation are functions of frequency and are related to each other by the kramers - kronig relations due to causality . the fact that the kramers - kronig relations are fundamentally a hilbert transform pair has two interesting inferences . first , the angular frequency and the wavenumber can not have imaginary values simultaneously . second , the wave attenuation is always coupled with the wave velocity . now , we introduce the spatio - temporal fourier transform as \hat{u}\left(k,\omega\right)\triangleq\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty}u\left(x , t\right)\,e^{-i\left(kx-\omega t\right)}\,dx\,dt . as we are concerned with the dependency of wave - material interactions on the wave propagation vector alone , we neglect the time domain and only consider the fourier transform in the spatial domain .
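as a small numerical check of this convention , the python lines below ( a minimal sketch with purely illustrative numbers ) verify that a plane wave with a complex wave propagation vector has an envelope decaying as the exponential of minus the imaginary part of the wavenumber times distance , while the phase travels at the angular frequency divided by the real part of the wavenumber .

import numpy as np

omega = 2 * np.pi * 100.0            # angular frequency , illustrative
k = 2.0 + 0.05j                      # complex wavenumber : velocity part + attenuation part

x = np.linspace(0.0, 50.0, 2001)
u = np.exp(1j * (k * x - omega * 0.0))                # displacement snapshot at t = 0

print(np.allclose(np.abs(u), np.exp(-k.imag * x)))    # envelope decays as exp(-im(k) x) : True
print("phase velocity:", omega / k.real)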
taking the fourier transform of the constitutive equation , we obtain an expression in which the nonlocal term enters through two integrals over the nonlocal coordinate y , with integrands containing the factors y^{-\alpha} and e^{-\lambda y}y^{-\alpha} , respectively . according to the mass conservation principle , the strain is the spatial derivative of the displacement , and its fourier transform is \hat{\epsilon}\left(k\right)=ik\,\hat{u}\left(k\right) . next , we consider newton s second law of conservation of momentum and the equation of motion for the plane wave , which introduce the phase velocity of the wave at zero frequency and the mass density of the material . assuming zero initial conditions and integrating on both sides of the equation of motion , we obtain an intermediate relation whose fourier transform is then substituted into the transformed constitutive equation . the translational property of the fourier transform further gives \mathcal{f}\left[\epsilon\left(x - y\right)\right]=e^{-iky}\cdot\hat{\epsilon}\left(k\right) . substituting these results into the expressions for the two integrals , and making use of the relation between the transformed stress and strain , we readily obtain the transformed constitutive relation . next , we proceed to obtain a closed - form solution of the two integrals . it is straightforward to see that the two integrals are modified forms of the gamma function , whose convergence is established using the dominated convergence theorem ( see , chapter 3 in ) . substituting a change of variables involving the complex wavenumber , we obtain a useful expression , and the integrals are then evaluated in closed form . the calculation of similar integrals , but for a different exponent of the integrating variable , can also be found in a recent paper ( see , theorem 2.1 in ) . now , substituting the value of the integrals back into the transformed constitutive relation , we finally arrive at the required spatial dispersion relation , which governs the propagation of an elastic wave in a tempered nonlocal elastic bar . it is essential to note that the tempering parameter \lambda has the dimension of inverse length . in the absence of nonlocality , i.e. , if the nonlocality constant is set to zero , the spatial dispersion relation reduces to the classical case of loss - less dispersion , with the wavenumber equal to the angular frequency divided by the zero - frequency phase velocity . the equivalent form of the constitutive relation is obtained if the same substitution is made in the original constitutive equation . the dispersion equation is a nonlinear equation with complex coefficients and fractional - order exponents of the complex wavenumber . as it is difficult to obtain closed - form expressions for the real and imaginary parts of the wavenumber , we resort to root - finding algorithms . besides , the choice of numerical algorithms is limited since most of the available methods are usually applicable for equations containing real coefficients and integer - order exponents ( see , chapter 9 in ) . we even tried to formulate and numerically implement the classical root - finding algorithms ( such as the bisection method and the newton - raphson method ) in the complex domain . however , the only algorithm which ensured convergence and achieved the desired level of accuracy was müller s method . based on a generalization of the secant method , müller s method uses quadratic interpolation to approximate the given function , and the root of the function is then approximated by the root of the interpolating quadratic ( see , chapter 7 in ) . the algorithm requires three distinct guesses which are used for three functional evaluations to start with , but continues with one function evaluation afterwards . the method does not require evaluation of derivatives of the function and its rate of convergence is about 1.84 , i.e.
nearly quadratic , so that the number of correct decimal places almost doubles with each iteration . further , we have also eliminated the potential round - off errors due to subtractive cancellation by implementing double precision calculation together with an alternative formulation of the roots . for a quadratic equation with coefficients a , b and c , the root formula which we have implemented in the algorithm is x = -2c / ( b \pm \sqrt{b^{2}-4ac} ) , instead of its conventional form x = ( -b \pm \sqrt{b^{2}-4ac} ) / ( 2a ) ( see , chapter 3 in ) . we fix the model parameters and solve the dispersion equation for three different parameter values . for each value , the dispersion equation is solved for a wide range of frequency values . each frequency decade is uniformly sampled into a minimum of four points ; however , in situations where significant jumps in the values of the roots are observed , a finer sampling of six to eight points is followed . the values of the numerically obtained roots are accurate up to a minimum of six significant figures . however , there was an exception for one of the parameter values , where the algorithm could not converge properly in one frequency decade , which further illustrates the numerical difficulty associated with the extraction of complex roots . it should be noted that since the coefficients of the wavenumber in the dispersion equation are complex , the roots do not come in conjugate pairs . as illustrated in fig . [ fig : dispersion ] , the obtained roots for a given frequency value give the respective phase velocity and wave attenuation . the main observations from the plots can be summarized into four points . first , the pattern of an increased phase velocity with frequency corresponds to anomalous dispersion , which is also seen in the case of the time - fractional kelvin - voigt and zener models ( see , figures 1 and 2 in ) . second , in the high frequency regime , the phase velocity levels off , just like in the fractional zener model . third , in the low frequency regime , wave attenuation is relatively higher than in the intermediate - to - high frequency regime . fourth , at higher frequencies , the straight lines in the log - log attenuation plot clearly indicate the power - law dependency of wave attenuation on frequency . further , wave attenuation increases as the varied parameter is increased . [ figure : frequency - dependent phase velocity ( top pane ) and wave attenuation ( bottom pane ) in a tempered nonlocal elastic bar ; the markers ( squares on the blue dashed curve , crosses on the black dotted curve , circles on the red solid curve ) represent the numerically obtained roots , and the lack of markers in one frequency decade on one curve is due to the non - convergence of müller s algorithm . ] the obtained dispersion plots can be understood from the underlying physics . unlike the case of temporal dispersion , spatial dispersion is determined by the wave propagation vector of the propagating wave . a minimal sketch of the root - finding procedure used to obtain these plots is given below , before we return to the physical interpretation .
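the following python routine is a minimal sketch of that procedure : quadratic interpolation through three complex iterates , with the cancellation - safe root formula above and the sign in the denominator chosen to maximise its magnitude . the dispersion function g is only a placeholder , since the coefficients of the actual dispersion relation are not reproduced here .

import numpy as np

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    # mueller's method for a complex root of f , starting from three distinct guesses
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h0, h1 = x1 - x0, x2 - x1
        d0, d1 = (f1 - f0) / h0, (f2 - f1) / h1
        a = (d1 - d0) / (h1 + h0)             # quadratic through the three points
        b = a * h1 + d1
        c = f2
        disc = np.sqrt(b * b - 4 * a * c + 0j)
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * c / den                     # cancellation - safe root of the interpolant
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol * max(1.0, abs(x2)):
            return x2
    return x2

# placeholder dispersion function : replace with the left - hand side of the actual
# spatial dispersion relation evaluated at a given angular frequency
g = lambda k: k ** 2 - (1.0 + 0.3j)

root = muller(g, 0.5 + 0.5j, 1.0 + 0.2j, 1.2 + 0.1j)
print(root, g(root))                          # complex root , residual close to zero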
in the case of low frequency waves ,the wavelength of the wave is large , as a result the wave interacts with an equivalent larger region of the nonlocal elastic bar .the nonlocal attenuation mechanism which is spread over the length of the bar leads to greater wave attenuation and slows it down . as the frequency of the wave is increased , the size of the wave and hence , the nonlocal region of the bar with which the waves interacts becomes smaller .for very high frequencies , the wave `` feels '' the material only at a local point and since we have only considered the nonlocal attenuation mechanism , wave attenuation effectively disappears . as explained ,a high frequency wave suffers less opposition in the nonlocal elastic bar , and therefore traverses with maximum phase velocity and little attenuation .one of the goals which has been achieved in the present work is the understanding of how the phase velocity and wave attenuation are affected as it propagates in a nonlocal elastic material .even though the wave attenuation in a nonlocal bar appears unusual , it seems very physical .considering the fact that wave attenuation in a nonlocal bar is negligible in the high frequency regime , such a material if engineered could find applications as an effective channel to transfer energy .we have also established the importance of tempering and the framework of fractional calculus in investigating nonlocal problems where conventional power - law kernels give non - physical results .further work should include an investigation of the physical implications of the material constant and the fractional exponent of the attenuation kernel .agarwal , d.n .pattanayak , e. wolf , electromagnetic fields in spatially dispersive media .b _ * 10 * , no 4 ( 1974 ) , 14471475 .atanackovi , b. stankovi , generalized wave equation in nonlocal elasticity ._ acta mech . _ * 208 * , no 1 - 2 ( 2009 ) , 110 .b. banerjee , _ an introduction to metamaterials and waves in composites_. crc press , london ( 2011 ) .banerjee , y.h .pao , thermoelastic waves in anisotropic solids ._ j. acoust .* 56 * , no 5 ( 1974 ) , 14441454 .bhatia , _ ultrasonic absorption : an introduction to the theory of sound absorption and dispersion in gases , liquids and solids_. dover publications , new york ( 2012 ) .a. carpinteri , p. cornetti , a. sapora , a fractional calculus approach to nonlocal elasticity ._ * 193 * , no 1 ( 2011 ) , 193204 .cartea , d. del - castillo - negrete , fluid limit of the continuous - time random walk with general lvy jump distribution functions .e _ * 76 * , no 4 ( 2007 ) , 041105 .g. casula , j.m .carcione , generalized mechanical model analogies of linear viscoelastic behaviour ._ b. geofis .appl . _ * 34 * , no 136 ( 1992 ) , 235256 .n. challamel , d. zorica , t.m .atanackovi , d.t .spasi , on the fractional generalization of eringen s nonlocal elasticity for wave propagation . _ cr .mecanique _ * 341 * , no 3 ( 2013 ) , 298303 .chapra , r.p .canale , _ numerical methods for engineers_. mcgraw - hill , new york ( 2009 ) .eringen , d.g.b .edelen , on nonlocal elasticity .sci . _ * 10 * , no 3 ( 1972 ) , 233248 .eringen , linear theory of nonlocal elasticity and dispersion of plane waves .sci . _ * 10 * , no 5 ( 1972 ) , 425435 .eringen , vistas of nonlocal continuum physics .sci . _ * 30 * , no 10 ( 1992 ) , 15511565 .eringen , _ nonlocal continuum field theories_. springer , ( 2002 ) .a. hanyga , m. 
seredynska , spatially fractional - order viscoelasticity , non - locality , and a new kind of anisotropy ._ j. math .phys . _ * 53 * , no 5 ( 2012 ) , 052902 - 1052902 - 21 .s. holm , s.p .nsholm , f. prieur , r. sinkus , deriving fractional acoustic wave equations from mechanical and thermal constitutive equations .appl . _ * 66 * , no 5 ( 2013 ) , 621629 .s. holm , s.p .nsholm , comparison of fractional wave equations for power law attenuation in ultrasound and elastography ._ ultrasound med .biol . _ * 40 * , no 4 ( 2014 ) , 695703 .jongen , j.m .thijssen , m. van den aarssen , w.a .verhoef , a general model for the absorption of ultrasound by biological tissues and experimental verification ._ j. acoust_ * 79 * , no 2 ( 1986 ) , 535540 .d. klatt , u. hamhaber , p. asbach , j. braun , i. sack , noninvasive assessment of the rheological behavior of human organs using multifrequency mr elastography : a study of brain and liver viscoelasticity .biol . _ * 52 * , no 24 ( 2007 ) , 72817294 .e. krner , elasticity theory of materials with long range cohesive forces .j. solids struct ._ * 3 * , no 5 ( 1967 ) , 731742 .f. mainardi , _ fractional calculus and waves in linear viscoelasticity : an introduction to mathematical models_. world scientific , singapore ( 2010 ) .mller , b. gurevich , m. lebedev , seismic wave attenuation and dispersion resulting from wave - induced flow in porous rocks - a review ._ geophysics _ * 75 * , no 5 ( 2010 ) , 75a14775a164 . s.p .nsholm , s. holm , linking multiple relaxation , power - law attenuation , and fractional wave equations ._ j. acoust .* 130 * , no 5 ( 2011 ) , 30383045 .nsholm , s. holm , on a fractional zener elastic wave equation .* 16 * , no 1 ( 2013 ) , 2650 ; doi : 10.2478/s13540 - 013 - 0003 - 1 ; http://www.degruyter.com/view/j/fca.2013.16.issue-1/issue-files/fca.2013.16.issue-1.xml[http://www.degruyter.com/view/j/ ] .nsholm , model - based discrete relaxation process representation of band - limited power - law attenuation ._ j. acoust .soc . am . _* 133 * , no 3 ( 2013 ) , 17421750 .m. di paola , m. zingales , long - range cohesive interactions of non - local continuum faced by fractional calculus .j. solids struct ._ * 45 * , no 21 ( 2008 ) , 56425659 .m. di paola , g. failla , a. pirrotta , a. sofi , m. zingales , the mechanically based non - local elasticity : an overview of main results and future challenges .t. r. soc .a _ * 371 * , no 1993 ( 2013 ) , 20120433 . c. polizzotto , nonlocal elasticity and related variational principles . _ int .j. solids struct . _* 38 * , no 42 ( 2001 ) , 73597380 .w.h . press ,teukolsky , w.t .vetterling , b.p .flannery , _ numerical recipes 3rd edition : the art of scientific computing_. cambridge university press , new york ( 2007 ) .a. sapora , p. cornetti , a. carpinteri , wave propagation in nonlocal elastic continua modelled by a fractional calculus approach .nonlinear sci . _ * 18 * , no 1 ( 2013 ) , 6374 .p. straka , m.m .meerschaert , r.j .mcgough , y. zhou , fractional wave equations with attenuation .* 16 * , no 1 ( 2013 ) , 262272. v. sundararaghavan , a. waas , non - local continuum modeling of carbon nanotubes : physical interpretation of non - local kernels using atomistic simulations ._ j. mech .solids _ * 59 * , no 6 ( 2011 ) , 11911203 .szabo , causal theories and data for acoustic attenuation obeying a frequency power law ._ j. acoust .* 97 * , no 1 ( 1995 ) , 1424 .tarasov , lattice model with power - law spatial dispersion for fractional elasticity .j. phys . 
_ * 11 * , no 11 ( 2013 ) , 15801588 .q. wang , wave propagation in carbon nanotubes via nonlocal continuum mechanics ._ j. appl .phys . _ * 98 * , no 12 ( 2005 ) , 124301 .m. zhang , p. nigwekar , b. castaneda , k. hoyt , j.v .joseph , a. di s. agnese , e.m . messing ,strang , d.j .rubens , k.j .parker , quantitative characterization of viscoelastic properties of human prostate correlated with histology ._ ultrasound med .* 34 * , no 7 ( 2008 ) , 10331042 . dept .of informatics , university of oslo + p.o .box 1080 , blindern + no-0316 oslo , norway received : july 15 , 2015 + e - mail : vikashp.uio.no revised : january 28 , 2016 + e - mail : sverre.uio.no + + p.o .box 53 , n-2027 kjeller , norway + e - mail : peter.no
we apply the framework of tempered fractional calculus to investigate the spatial dispersion of elastic waves in a one - dimensional elastic bar characterized by range - dependent nonlocal interactions . the measure of the interaction is given by the attenuation kernel present in the constitutive stress - strain relation of the bar , which follows from the krner - eringen s model of nonlocal elasticity . we employ a fractional power - law attenuation kernel and spatially temper it , to make the model physically valid and mathematically consistent . the spatial dispersion relation is derived , but it turns out to be difficult to solve , both analytically and numerically . consequently , we use numerical techniques to extract the real and imaginary parts of the complex wavenumber for a wide range of frequency values . from the dispersion plots , it is found that the phase velocity dispersion of elastic waves in the tempered nonlocal elastic bar is similar to that from the time - fractional zener model . further , we also examine the unusual attenuation pattern obtained for the elastic wave propagation in the bar . _ msc 2010 _ : primary 26a33 , 74b20 , 74d10 ; secondary 26a30 , 42a38 , 65h04 _ key words and phrases _ : fractional calculus , acoustic wave equations , eringen model , nonlocal elasticity , tempered fractional calculus , fractional zener model , spatial dispersion , anomalous attenuation ` the peer - reviewed version of this paper is published in fract . calc . appl . anal . vol . 19 , no 2 ( 2016 ) , pp . 498 - 515 , doi : 10.1515/fca-2016 - 0026 , and is available online at http://www.degruyter.com/view/j/fca the current document is an e - print which differs in e.g. pagination , reference numbering , and typographic detail . `
in recent years , convex programs have become increasingly popular for solving a wide range of problems in machine learning and other fields , ranging from theoretical modeling , e.g. , latent variable graphical model selection , low - rank feature extraction ( e.g. , matrix decomposition and matrix completion ) , subspace clustering , and kernel discriminant analysis , to real - world applications , e.g. , face recognition , saliency detection , and video denoising .most of the problems can be ( re)formulated as the following linearly constrained separable convex program , , where s are convex sets , the program can be transformed into ( [ eq : model_problem_multivar ] ) by introducing auxiliary variables , c.f .( [ eq : model_problem_multivar_convex_sets])-([eq : redefine_equiv ] ) . ] : where and could be either vectors or matrices a block " of variables because it may consist of multiple scalar variables .we will use bold capital letters if a block is known to be a matrix .] , is a closed proper convex function , and is a linear mapping . without loss of generality , we may assume that none of the s is a zero mapping , the solution to is non - unique , and the mapping is onto is not full column rank but full row rank , where is the matrix representation of . ] . in this subsection, we present some examples of machine learning problems that can be formulated as the model problem .low - rank representation ( lrr ) is a recently proposed technique for robust subspace clustering and has been applied to many machine learning and computer vision problems . however , lrr works well only when the number of samples is more than the dimension of the samples , which may not be satisfied when the data dimension is high .so liu et al . proposed latent lrr to overcome this difficulty .the mathematical model of latent lrr is as follows : where is the data matrix , each column being a sample vector , is the nuclear norm , i.e. , the sum of singular values , and is the norm , i.e. , the sum of absolute values of all entries .latent lrr is to decompose data into principal feature and salient feature , up to sparse noise .nonnegative matrix completion ( nmc ) is a novel technique for dimensionality reduction , text mining , collaborative filtering , and clustering , etc .it can be formulated as : where is the observed data in the matrix contaminated by noise , is an index set , is a linear mapping that selects those elements whose indices are in , and is the frobenius norm .nmc is to recover the nonnegative low - rank matrix from the observed noisy data . to see that the nmc problem can be reformulated as ( [ eq : model_problem_multivar ] ) , we introduce an auxiliary variable and rewrite as where is the characteristic function of the set of nonegative matrices . besides unsupervised learning models shown above , many supervised machine learning problems can also be written in the form of .for example , using logistic function as the loss function in the group lasso with overlap , one obtains the following model : where and , , are the training data and labels , respectively , and and parameterize the linear classifier . , , are the selection matrices , with only one 1 at each row and the rest entries are all zeros .the groups of entries , , , may overlap each other .this model can also be considered as an extension of the group sparse logistic regression problem to the case of overlapped groups . 
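as a concrete illustration of the overlapping - group model above , the following python sketch evaluates the regularised logistic objective . the data , the three overlapping groups and the weight mu are illustrative assumptions , and the selection matrices are represented implicitly by index arrays rather than explicit 0 - 1 matrices .

import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
x = rng.standard_normal((n, p))               # training data , one sample per row
y = rng.choice([-1.0, 1.0], size=n)           # labels

# overlapping groups of coefficients ; each index array plays the role of one
# selection matrix ( a 0 - 1 matrix with a single 1 per row )
groups = [np.array([0, 1, 2]), np.array([2, 3, 4, 5]), np.array([5, 6, 7, 8, 9])]
mu = 0.1                                      # regularisation weight , illustrative

def objective(w, b):
    # logistic loss plus the overlapping group - lasso penalty sum_i ||s_i w||_2
    margins = y * (x @ w + b)
    loss = np.mean(np.log1p(np.exp(-margins)))
    penalty = sum(np.linalg.norm(w[g]) for g in groups)
    return loss + mu * penalty

w0, b0 = np.zeros(p), 0.0
print(objective(w0, b0))                      # equals log(2) at the all - zero classifier

with this notation in place , the reformulation that follows collects the classifier parameters w and b into a single block of variables .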
introducing , , , and , where , can be rewritten as which is a special case of .although general theories on convex programs are fairly complete nowadays , e.g. , most of them can be solved by the interior point method , when faced with large scale problems , which are typical in machine learning , the general theory may not lead to efficient algorithms .for example , when using cvx , an interior point based toolbox , to solve nuclear norm minimization problems ( i.e. , one of the s is the nuclear norm of a matrix , e.g. , and ) , such as matrix completion , robust principal component analysis , and low - rank representation , the complexity of each iteration is , where is the matrix size .such a complexity is unbearable for large scale computing . to address the scalability issue ,first order methods are often preferred . the accelerated proximal gradient ( apg )algorithm is popular due to its guaranteed convergence rate , where is the iteration number .however , apg is basically for unconstrained optimization . for constrained optimization, the constraints have to be added to the objective function as penalties , resulting in approximated solutions only . the alternating direction method ( adm ) has regained a lot of attention recently andis also widely used .it is especially suitable for separable convex programs like ( [ eq : model_problem_multivar ] ) because it fully utilizes the separable structure of the objective function . unlike apg, adm can solve ( [ eq : model_problem_multivar ] ) exactly .another first order method is the split bregman method , which is closely related to adm and is influential in image processing .an important reason that first order methods are popular for solving large scale convex programs in machine learning is that the convex functions s are often matrix or vector norms or characteristic functions of convex sets , which enables the following subproblems ( called the proximal operation of ) to have closed form solutions .for example , when is the norm , , where is the soft - thresholding operator ; when is the nuclear norm , the optimal solution is : , where is the singular value decomposition ( svd ) of ; and when is the characteristic function of the nonnegative cone , the optimal solution is .since subproblems like ( [ eq : proxy ] ) have to be solved in each iteration when using first order methods to solve separable convex programs , that they have closed form solutions greatly facilitates the optimization .however , when applying adm to solve ( [ eq : model_problem_multivar ] ) with non - unitary linear mappings ( i.e. , is not the identity mapping , where is the adjoint operator of ) , the resulting subproblems may not have closed form solutions in ( [ eq : proxy ] ) becomes , which can not be reduced to .] , hence need to be solved iteratively , making the optimization process awkward .some work has considered this issue by linearizing the quadratic term in the subproblems , hence such a variant of adm is called the linearized adm ( ladm ) . propose the generalized adm that makes both adm and ladm as its special cases and prove its globally linear convergence by imposing strong convexity on the objective function or full - rankness on some linear operators .nonetheless , most of the existing theories on adm and ladm are for the _ two - block _ case , i.e. 
, in ( [ eq : model_problem_multivar ] ) .the number of blocks is restricted to two because the proofs of convergence for the two - block case are not applicable for the multi - block case , i.e. , in ( [ eq : model_problem_multivar ] ) .actually , a naive generalization of adm or ladm to the multi - block case may diverge ( see ( [ eq : parallel_bp ] ) and ) . unfortunately , in practice multi - block convex programs often occur , e.g. , robust principal component analysis with dense noise , latent low - rank representation ( see ) , and when there are extra convex set constraints ( see ( [ eq : nmc ] ) and ( [ eq : model_problem_multivar_convex_sets])-([eq : model_problem_multivar_equiv ] ) ) .so it is desirable to design practical algorithms for the multi - block case . recently and considered the multi - block ladm and adm , respectively . to safeguard convergence , proposed ladm with gaussian back substitution ( ladmgb ) , which destroys the sparsity or low - rankness of the iterates during iterations when dealing with sparse representation and low - rank recovery problems , while proposed adm with parallel splitting , whose subproblems may not be easily solvable .moreover , they all developed their theories with the penalty parameter being fixed , resulting in difficulty of tuning an optimal penalty parameter that fits for different data and data sizes .this has been identified as an important issue . to propose an algorithm that is more suitable for convex programs in machine learning , in this paper we aim at combining the advantages of , , and , i.e. , combining ladm , parallel splitting , and adaptive penalty .hence we call our method ladm with parallel splitting and adaptive penalty ( ladmpsap ) .with ladm , the subproblems will have forms like ( [ eq : proxy ] ) and hence can be easily solved . with parallel splitting ,the sparsity and low - rankness of iterates can be preserved during iterations when dealing with sparse representation and low - rank recovery problems , saving both the storage and the computation load . with adaptive penalty ,the convergence can be faster and it is unnecessary to tune an optimal penalty parameter .parallel splitting also makes the algorithm highly parallelizable , making ladmpsap suitable for parallel or distributed computing , which is important for large scale machine learning . when all the component objective functions have bounded subgradients , we prove convergence results that are stronger than the existing theories on adm and ladm .for example , the penalty parameter can be _ unbounded _ and the _ sufficient and necessary _ conditions of the global convergence of ladmpsap can be obtained as well .we also propose a simple optimality measure and prove the convergence rate of ladmpsap in an ergodic sense under this measure . our proof is simpler than those in and which relied on a complex optimality measure .when a convex program has extra convex set constraints , we further devise a practical version of ladmpsap that converges faster thanks to better parameter analysis .finally , we generalize ladmpsap to cope with more difficult s , whose proximal operation is not easily solvable , by further linearizing the smooth components of s .experiments testify to the advantage of ladmpsap in speed and numerical accuracy .note that also proposed a multiple splitting algorithm for convex optimization .however , they only considered a special case of our model problem ( [ eq : model_problem_multivar ] ) , i.e. 
, all the linear mappings s are identity mappings . with their simpler model problem ,linearization is unnecessary and a faster convergence rate , , can be achieved .in contrast , in this paper we aim at proposing a practical algorithm for efficiently solving more general problems like ( [ eq : model_problem_multivar ] ) .we also note that used the same linearization technique for the smooth components of s as well , but they only considered a special class of s .namely , the non - smooth component of is a sum of and norms or its epigraph is polyhedral .moreover , for parallel splitting ( jacobi update ) has to incorporate a postprocessing to guarantee convergence , by interpolating between an intermediate iterate and the previous iterate .third , still focused on a fixed penalty parameter .again , our method can handle more general s , does not require postprocessing , and allows for an adaptive penalty parameter .a more general splitting / linearization technique can be founded in . however , the authors only proved that any accumulation point of the iteration is a kuhn - karush - tucker ( kkt ) point and did not investigate the convergence rate .there was no evidence that the iteration could converge to a unique point .moreover , the authors only studied the case of fixed penalty parameter .although dual ascent with dual decomposition can also solve ( [ eq : model_problem_multivar ] ) in a parallel way , it may break down when some s are not strictly convex , which typically happens in sparse or low - rank recovery problems where norm or nuclear norm are used .even if it works , since is not strictly convex , dual ascent becomes dual _ subgradient _ ascent , which is known to converge at a rate of slower than our rate .moreover , dual ascent requires choosing a good step size for each iteration , which is less convenient than adm based methods .the remainder of this paper is organized as follows .we first review ladm with adaptive penalty ( ladmap ) for the two - block case in section [ sec : ladmap-2var ] .then we present ladmpsap for the multi - block case in section [ sec : ladmpsap ] .next , we propose a practical version of ladmpsap for separable convex programs with convex set constraints in section [ sec : pplamdap ] .we further extend ladmpsap to proximal ladmpsap for programs with more difficult objective functions in section [ sec : g - ladmpsap ] .we compare the advantage of ladmpsap in speed and numerical accuracy with other first order methods in section [ sec : exp ] .finally , we conclude the paper in section [ sec : con ] .we first review ladmap for the two - block case of ( [ eq : model_problem_multivar ] ) .it consists of four steps : 1 .update : 2 .update : 3 .update : 4 .update : where is the lagrange multiplier , is the penalty parameter , with ( is the operator norm of ) , and is an adaptively updated parameter ( see ) .please refer to for details .note that the latest is immediately used to compute ( see ) .so and have to be updated alternately , hence the name alternating direction method .in this section , we extend ladmap for multi - block separable convex programs ( [ eq : model_problem_multivar ] ) .we also provide the _ sufficient and necessary conditions _ for global convergence when subgradients of the objective functions are all bounded .we further prove the convergence rate in an ergodic sense .contrary to our intuition , the multi - block case is actually fundamentally different from the two - block one . 
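for reference , the two - block ladmap iteration recalled above can be sketched as follows on a toy problem with two l1 - norm blocks ; soft - thresholding plays the role of the closed - form proximal operation ( [ eq : proxy ] ) , and the thresholds used in the adaptive penalty rule are illustrative rather than the exact ones of the cited work .

```python
# Minimal two-block LADMAP sketch for   min ||x1||_1 + ||x2||_1
#   s.t.  A1 x1 + A2 x2 = b,
# following the four update steps recalled above; the constants in the
# adaptive penalty rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n1, n2 = 30, 60, 60
A1, A2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))
x1_t = np.zeros(n1); x1_t[rng.choice(n1, 5, replace=False)] = 1.0
x2_t = np.zeros(n2); x2_t[rng.choice(n2, 5, replace=False)] = 1.0
b = A1 @ x1_t + A2 @ x2_t

def soft(v, t):                              # prox of t*||.||_1 (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

eta1 = 1.02 * np.linalg.norm(A1, 2) ** 2     # eta_i slightly above ||A_i||^2
eta2 = 1.02 * np.linalg.norm(A2, 2) ** 2
x1, x2, lam = np.zeros(n1), np.zeros(n2), np.zeros(m)
beta, beta_max, rho0 = 1e-2, 1e6, 1.9

for k in range(1000):
    x1_old, x2_old = x1.copy(), x2.copy()
    # step 1: update x1 (quadratic penalty term linearized at the current iterate)
    r = A1 @ x1 + A2 @ x2 - b
    x1 = soft(x1 - A1.T @ (lam + beta * r) / (eta1 * beta), 1.0 / (eta1 * beta))
    # step 2: update x2 using the *latest* x1 (hence "alternating direction")
    r = A1 @ x1 + A2 @ x2 - b
    x2 = soft(x2 - A2.T @ (lam + beta * r) / (eta2 * beta), 1.0 / (eta2 * beta))
    # step 3: multiplier update
    r = A1 @ x1 + A2 @ x2 - b
    lam = lam + beta * r
    # step 4: increase beta only when the iterates change little
    change = beta * max(np.sqrt(eta1) * np.linalg.norm(x1 - x1_old),
                        np.sqrt(eta2) * np.linalg.norm(x2 - x2_old)) / np.linalg.norm(b)
    beta = min(beta_max, (rho0 if change < 1e-3 else 1.0) * beta)
    if np.linalg.norm(r) / np.linalg.norm(b) < 1e-6 and change < 1e-3:
        break

print("relative feasibility error:", np.linalg.norm(r) / np.linalg.norm(b))
```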
for the multi - block case , it is very natural to generalize ladmap for the two - block case in a straightforward way , with unfortunately , we were unable to prove the convergence of such a naive ladmap using the same proof for the two - block case .this is because their fejr monotone inequalities ( see remark [ rem : different ] ) can not be the same .that is why he et al .has to introduce an extra gaussian back substitution for correcting the iterates .actually , the above naive generalization of ladmap may be divergent ( which is even worse than converging to a wrong solution ) , e.g. , when applied to the following problem : where and and are gaussian random matrix and vector , respectively , whose entries fulfil the standard gaussian distribution independently . also analyzed the naively generalized adm for the multi - block case and showed that even for three blocks the iteration could still be divergent .they also provided sufficient conditions , which basically require that the linear mappings should be orthogonal to each other ( , ) , to ensure the convergence of naive adm .fortunately , by modifying slightly we are able to prove the convergence of the corresponding algorithm . more specifically , our algorithm for solving ( [ eq : model_problem_multivar ] ) consists of the following steps : 1 .update s in parallel : + 2 .update : + 3 .update : + where , and with being a constant and being a threshold .indeed , we replace with as ( [ eq : hat_lambda ] ) , which is independent of , and the rest procedures of the algorithm , including the scheme and to update the penalty parameter , are all inherited from , except that s have to be made larger ( see theorem [ thm : converge_multivar ] ) . as now s are updated in parallel and changes adaptively, we call the new algorithm ladm with _parallel splitting _ and _ adaptive penalty _ ( ladmpsap ) .some existing work ( e.g. , ) proposed stopping criteria out of intuition only , which may not guarantee that the correct solution is approached .recently , and suggested that the stopping criteria can be derived from the kkt conditions of a problem .here we also adopt such a strategy .specifically , the iteration terminates when the following two conditions are met : the first condition measures the feasibility error .the second condition is derived by comparing the kkt conditions of problem and the optimality condition of subproblem .the rules ( [ eq : update_beta_multivar ] ) and ( [ eq : update_rho ] ) for updating are actually hinted by the above stopping criteria such that the two errors are well balanced . for better reference , we summarize the proposed ladmpsap algorithm in algorithm [ alg : ladmpsap - multivar ] . for fast convergence , we suggest that and and should be chosen such that increases steadily along with iterations .set , , , , , , , .compute as ( [ eq : hat_lambda ] ) .update s in parallel by solving update by ( [ eq : update_lambda_multivar ] ) and by ( [ eq : update_beta_multivar ] ) and ( [ eq : update_rho ] ) . in the following, we always use to denote the kkt point of problem ( [ eq : model_problem_multivar ] ) . for the global convergence of ladmpsap, we have the following theorem , where we denote for simplicity . 
*( convergence of ladmpsap ) * if is non - decreasing and upper bounded , , , then generated by ladmpsap converge to a kkt point of problem ( [ eq : model_problem_multivar]).[thm : converge_multivar ] theorem [ thm : converge_multivar ] is a convergence result for general convex programs ( [ eq : model_problem_multivar ] ) , where s are general convex functions and hence needs to be bounded . actually , almost all the existing theories on adm and ladm even assumed a fixed .for adaptive , it will be more convenient if a user needs not to specify an upper bound on because imposing a large upper bound essentially equals to allowing to be unbounded . since many machine learning problems choose s as matrix / vector norms , which result in bounded subgradients , we find that the boundedness assumption can be removed .moreover , we can further prove the _ sufficient and necessary _ condition for global convergence .[ thm : convergence_unbounded]*(sufficient condition for global convergence ) * if is non - decreasing and , , is bounded , , then the sequence generated by ladmpsap converges to an optimal solution to ( [ eq : model_problem_multivar ] ) .[ rem : convergence_of_lambda ] theorem [ thm : convergence_unbounded ] does not claim that converges to a point .however , as we are more interested in , such a weakening is harmless .we also have the following result on the necessity of .[ thm : convergence_unbounded_necessary]*(necessary condition for global convergence ) * if is non - decreasing , , is bounded , , then is also a necessary condition for the global convergence of generated by ladmpsap to an optimal solution to ( [ eq : model_problem_multivar ] ) . with the above analysis , when all the subgradients of the component objective functions are bounded we can remove in algorithm [ alg : ladmpsap - multivar ] .the convergence rate of adm and ladm in the traditional sense is an open problem . although claimed that they proved the linear convergence rate of adm , their assumptions are actually quite strong .they assumed that the non - smooth part of is a sum of and norms or its epigraph is polyhedral .moreover , the convex constraint sets should all be polyhedral and bounded .so although their results are encouraging , for general convex programs the convergence rate is still a mystery .recently , and proved an convergence rate of adm and adm with parallel splitting in an ergodic sense , respectively .namely violates an optimality measure in .their proof is lengthy and is for fixed penalty parameter only . 
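to make the parallel - splitting updates and the adaptive penalty concrete , the following is a schematic implementation of algorithm [ alg : ladmpsap - multivar ] with user - supplied proximal operators and matrix - represented linear maps ; the constants ( in particular the requirement that each step constant exceed the number of blocks times the squared operator norm ) , the rule for increasing the penalty parameter , and the stopping thresholds paraphrase the conditions of the theorems above and should be taken as illustrative .

```python
# Schematic LADMPSAP iteration for
#   min  sum_i f_i(x_i)   s.t.  sum_i A_i x_i = b,
# with user-supplied proximal operators and (dense-matrix) linear maps.
# The eta_i, the adaptive rule for beta, and the stopping thresholds are
# illustrative paraphrases of the conditions stated in the theorems above.
import numpy as np

def ladmpsap(prox_fs, A_list, b, x0_list, beta0=1e-2, beta_max=1e6,
             rho0=1.9, eps1=1e-6, eps2=1e-4, max_iter=1000):
    """prox_fs[i](v, t) must return argmin_x f_i(x) + (1/(2t))*||x - v||^2."""
    n = len(A_list)
    xs = [x.copy() for x in x0_list]
    lam = np.zeros_like(b)
    beta = beta0
    etas = [1.02 * n * np.linalg.norm(A, 2) ** 2 for A in A_list]  # eta_i > n*||A_i||^2
    nrm_b = np.linalg.norm(b)
    for k in range(max_iter):
        resid = sum(A @ x for A, x in zip(A_list, xs)) - b
        lam_hat = lam + beta * resid                      # the "hat" multiplier
        xs_new = []
        for A, x, prox, eta in zip(A_list, xs, prox_fs, etas):
            # every block uses the same lam_hat, so the updates can run in parallel
            v = x - A.T @ lam_hat / (eta * beta)
            xs_new.append(prox(v, 1.0 / (eta * beta)))
        resid_new = sum(A @ x for A, x in zip(A_list, xs_new)) - b
        lam = lam + beta * resid_new                      # multiplier update
        # adaptive penalty: increase beta only when the iterates move little
        change = beta * max(np.sqrt(eta) * np.linalg.norm(xn - x)
                            for xn, x, eta in zip(xs_new, xs, etas)) / nrm_b
        beta = min(beta_max, (rho0 if change < eps2 else 1.0) * beta)
        xs = xs_new
        # KKT-inspired stopping: feasibility error and relative change both small
        if np.linalg.norm(resid_new) / nrm_b < eps1 and change < eps2:
            break
    return xs, lam
```

for instance , when a block objective is the l1 norm the corresponding proximal callable is the soft - thresholding operator of the previous sketch , and when it is the nuclear norm it is singular value thresholding applied to the singular values of the argument .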
in this subsection ,based on a simple optimality measure we give a simple proof for the convergence rate of ladmpsap .for simplicity , we denote , , and .we first have the following proposition .[ prop : optimality ] is an optimal solution to ( [ eq : model_problem_multivar ] ) if and only if there exists , such that since the left hand side of ( [ eq : constrained_optimality ] ) is always nonnegative and it becomes zero only when is an optimal solution , we may use its magnitude to measure how far a point is from an optimal solution .note that in the unconstrained case , as in apg , one may simply use to measure the optimality .but here we have to deal with the constraints .our criterion is simpler than that in , which has to compare with all .then we have the following convergence rate theorem for ladmpsap in an ergodic sense .[ thm : convergence_rate_2var]*(convergence rate of ladmpsap ) * define , where .then the following inequality holds for : where and .theorem [ thm : convergence_rate_2var ] means that is by from being an optimal solution .this theorem holds for both bounded and unbounded . in the bounded case , is simply .theorem [ thm : convergence_rate_2var ] also hints that should approach infinity to guarantee the convergence of ladmpsap , which is consistent with theorem [ thm : convergence_unbounded_necessary ] .in real applications , we are often faced with convex programs with convex set constraints : where is a closed convex set . in this section, we consider to extend ladmpsap to solve the more complex convex set constraint model .we assume that the projections onto s are all easily computable . for many convex sets used in machine learning ,such an assumption is valid , e.g. , when s are nonnegative cones or positive semi - definite cones . in the following ,we discuss how to solve ( [ eq : model_problem_multivar_convex_sets ] ) efficiently . for simplicity ,we assume , .finally , we assume that is an interior point of .we introduce auxiliary variables to convert into and , .then ( [ eq : model_problem_multivar_convex_sets ] ) can be reformulated as : where is the characteristic function of , where .the adjoint operator is where is the -th sub - vector of , partitioned according to the sizes of and , .then ladmpsap can be applied to solve problem ( [ eq : model_problem_multivar_equiv ] ) .the lagrange multiplier and the auxiliary multiplier are respectively updated as and is updated as ( see ( [ eq : update_xi ] ) ) /(\eta_i\beta_k)\right\|^2,\ \label{eq : ladmpsap_update_xi1}\\ { \mathbf{x}}_{n+i}^{k+1}&=&\operatorname*{argmin}\limits_{{\mathbf{x}}\in x_i } { \frac{\displaystyle \eta_{n+i}\beta_k}{\displaystyle 2}}\left\|{\mathbf{x}}-{\mathbf{x}}_{n+i}^k-\hat{{\mathbf{\lambda}}}^k_{i+1}/(\eta_{n+i}\beta_k)\right\|^2\nonumber\\ & = & \pi_{x_i } \left({\mathbf{x}}_{n+i}^k+\hat{{\mathbf{\lambda}}}^k_{i+1}/(\eta_{n+i}\beta_k)\right),\label{eq : ladmpsap_update_xi2}\end{aligned}\ ] ] where is the projection onto and . as for the choice of s , although we can simply apply theorem [ thm : converge_multivar ] to assign their values as and , , such choices are too pessimistic . 
as s are related to the magnitudes of the differences in from , we had better provide tighter estimate on s in order to achieve faster convergence .actually , we have the following better result .[ thm : better_eta ] for problem ( [ eq : model_problem_multivar_equiv ] ) , if is non - decreasing and upper bounded and s are chosen as and , , then the sequence generated by ladmpsap converge to a kkt point of problem ( [ eq : model_problem_multivar_equiv ] ) . finally , we summarize ladmpsap for problem ( [ eq : model_problem_multivar_equiv ] ) in algorithm [ alg : ladmpsap - multivar_equiv ] , which is a practical algorithm for solving ( [ eq : model_problem_multivar_convex_sets ] ) .set , , , , , , , , , .compute as ( [ eq : update_hatlambda_multivar - equiv ] ) .update , , in parallel as ( [ eq : ladmpsap_update_xi1])-([eq : ladmpsap_update_xi2 ] ) .update by ( [ eq : update_lambda_multivar - equiv ] ) and by ( [ eq : update_beta_multivar ] ) and ( [ eq : update_rho ] ) .( note that in ( [ eq : update_rho ] ) , ( [ eq : stopping1 ] ) , and ( [ eq : stopping2 ] ) , and should be replaced by and , respectively . )analogs of theorems [ thm : convergence_unbounded ] and [ thm : convergence_unbounded_necessary ] are also true for algorithm [ alg : ladmpsap - multivar_equiv ] although s are unbounded , thanks to our assumptions that all , , are bounded and is an interior point of , which result in an analog of proposition [ prop : unbounded_beta ] .consequently , can also be removed if all , , are bounded .since algorithm [ alg : ladmpsap - multivar_equiv ] is an application of algorithm [ alg : ladmpsap - multivar ] to problem ( [ eq : model_problem_multivar_equiv ] ) , only with refined parameter estimation , its convergence rate in an ergodic sense is also , where is the number of iterations .in ladmpsap we have assumed that the subproblems ( [ eq : update_xi ] ) are easily solvable . in many machine learning problems , the functions s are often matrix or vector norms or characteristic functions of convex sets .so this assumption often holds .nonetheless , this assumption is not always true , e.g. , when is the logistic loss function ( see ) .so in this section we aim at generalizing ladmpsap to solve even more general convex programs .we are interested in the case that can be decomposed into two components : where both and are convex , is : and may not be differentiable but its proximal operation is easily solvable . for brevity , we call the lipschitz constant of . recall that in each iteration of ladmpsap , we have to solve subproblem .since now we do not assume that the proximal operation of is easily solvable , we may have difficulty in solving subproblem . by, we write down as since is , we may also linearize it at and add a proximal term .such an idea leads to the following updating scheme of : { \right\|}^2 , \end{array}\label{eq : update_xi_proximal}\end{aligned}\ ] ] where .the choice of is presented in theorem [ thm : general_convergence_unbounded ] , i.e. , where and are both positive constants . 
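as an illustration of this linearized update , the sketch below spells out one block update when the smooth component is the logistic loss and the non - smooth component is an l1 penalty ; the step constant tau combines the lipschitz constant of the smooth part with a term proportional to the penalty parameter , and its exact value here is only indicative of the choice prescribed in theorem [ thm : general_convergence_unbounded ] .

```python
# Sketch of the linearized ("proximal LADMPSAP") block update when
# f_i(x) = g_i(x) + h_i(x), with g_i smooth (here: logistic loss, Lipschitz
# gradient) and h_i having an easy proximal map (here: an l1 penalty).
# Both the smooth part and the augmented quadratic term are linearized at
# the current iterate; the step constant tau below is illustrative.
import numpy as np
from scipy.special import expit

def soft(v, t):                                        # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_grad(w, X, y):
    s = expit(-y * (X @ w))                            # sigma(-y * Xw)
    return -(X.T @ (y * s)) / len(y)

def proximal_block_update(x_k, X, y, A, lam_hat, beta, mu=0.1):
    L = 0.25 * np.linalg.norm(X, 2) ** 2 / len(y)      # Lipschitz constant of the logistic gradient
    eta = 1.02 * np.linalg.norm(A, 2) ** 2
    tau = L + eta * beta                               # illustrative step constant tau_i^(k)
    grad = logistic_grad(x_k, X, y) + A.T @ lam_hat    # linearize g_i and the augmented term
    return soft(x_k - grad / tau, mu / tau)            # prox of (mu/tau)*||.||_1
```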
by our assumption on , the above subproblems are easily solvable .the update of lagrange multiplier and are still respectively goes as and but with the iteration terminates when the following two conditions are met : these two conditions are also deduced from the kkt conditions .we call the above algorithm as proximal ladmpsap and summarize it in algorithm [ alg : general - ladmpsap - multivar ] .set , , , , , , .compute as ( [ eq : hat_lambda ] ) .update s in parallel by solving { \right\|}^2,\ i=1,\cdots , n,\label{eq : general_ladmpsap_update_xi}\ ] ] where .update by ( [ eq : update_lambda_multivar ] ) and by with defined in . as for the convergence of proximal ladmpsap, we have the following theorem .[ thm : general_convergence_unbounded]*(convergence of proximal ladmpsap ) * if is non - decreasing and upper bounded , , where and are both positive constants , , then generated by proximal ladmpsap converge to a kkt point of problem ( [ eq : model_problem_multivar ] ) .we further have the following convergence rate theorem for proximal ladmpsap in an ergodic sense .[ thm : general_convergence_unbounded_rate]*(convergence rate of proximal ladmpsap ) * define , where .then the following inequality holds for : where and . when there are extra convex set constraints , , , we can also introduce auxiliary variables as in section [ sec : pplamdap ] and have an analogy of theorems [ thm : better_eta ] and [ thm : convergence_rate_2var ] .[ thm : better_eta_proximal ] for problem ( [ eq : model_problem_multivar_equiv ] ) , where is described at the beginning of section [ sec : g - ladmpsap ] , if is non - decreasing and upper bounded and , where , , , and , , then generated by proximal ladmpsap converge to a kkt point of problem ( [ eq : model_problem_multivar_equiv ] ) . the convergence rate in an ergodic sense is also , where is the number of iterations .in this section , we test the performance of ladmpsap on three specific examples of problem ( [ eq : model_problem_multivar ] ) , i.e. , latent low - rank representation ( see ) , nonnegative matrix completion ( see ) , and group sparse logistic regression with overlap ( see ) .we first solve the latent lrr problem . in order to test ladmpsap and related algorithms with data whose characteristics are controllable, we follow to generate synthetic data , which are parameterized as ( , , , ) , where , , , and are the number of independent subspaces , points in each subspace , and ambient and intrinsic dimensions , respectively .the number of scale variables and constraints is . as first order methods are popular for solving convex programs in machine learning , here we compare ladmpsap with several conceivable first order algorithms , including apg , naive adm , naive ladm , ladmgb , and ladmps .naive adm and naive ladm are generalizations of adm and ladm , respectively , which are straightforwardly generalized from two variables to multiple variables , as discussed in section [ sec : ladm+psap ] .naive adm is applied to solve ( [ eq : llrr ] ) after rewriting the constraint of ( [ eq : llrr ] ) as . for ladmps , is fixed in order to show the effectiveness of adaptive penalty .the parameters of apg and adm are the same as those in and , respectively . for ladm , we follow the suggestions in to fix its penalty parameter at , where is the size of . 
for ladmgb ,as there is no suggestion in on how to choose a fixed , we simply set it the same as that in ladm .the rest of the parameters are the same as those suggested in .we fix in ladmps and set and in ladmpsap . for ladmpsap , we also set , where and are the parameters s in algorithm [ alg : ladmpsap - multivar ] for and , respectively . for the stopping criteria , and , with and used for all the algorithms . for the parameter in ( [ eq : llrr ] ) , we empirically set it as . to measure the relative errors in the solutions we run ladmpsap 2000 iterations with to obtain the estimated ground truth solution ( ) .the experiments are run and timed on a notebook computer with an intel core i7 2.00 ghz cpu and 6 gb memory , running windows 7 and matlab 7.13 .table [ tab : latlrr ] shows the results of related algorithms .we can see that ladmps and ladmpsap are faster and more numerically accurate than ladmgb , and ladmpsap is even faster than ladmps thanks to the adaptive penalty .moreover , naive adm and naive ladm have relatively poorer numerical accuracy , possibly due to converging to wrong solutions .the numerical accuracy of apg is also worse than those of ladmps and ladmpsap because it only solves an approximate problem by adding the constraint to the objective function as penalty .note that although we do not require to be bounded , this does not imply that will grow infinitely . as a matter of fact ,when ladmpsap terminates the final values of are , , and for the three data settings , respectively .we then test the performance of the above six algorithms on the hopkins155 database , which consists of 156 sequences , each having 39 to 550 data vectors drawn from two or three motions . for computational efficiency , we preprocess the data by projecting them to be 5-dimensional using pca .we test all algorithms with , which is the best parameter for lrr on this database .table [ tab : latlrr_rel ] shows the results on the hopkins155 database .we can also see that ladmpsap is faster than other methods in comparison .in particular , ladmpsap is faster than ladmps , which uses a fixed .this testify to the advantage of using an adaptive penalty . [ cols="^,^,^,^,^,^,^,^",options="header " , ] then we consider the pathway analysis problem using the breast cancer gene expression data set , which consists of 8141 genes in 295 breast cancer tumors ( 78 metastatic and 217 non - metastatic ) .we follow and use the canonical pathways from msigdb to generate the overlapping gene sets , which contains 639 groups of genes , 637 of which involve genes from our study .the statistics of the 637 gene groups are summarized as follows : the average number of genes in each group is 23.7 , the largest gene group has 213 genes , and 3510 genes appear in these 637 groups with an average appearance frequency of about four .we follow to restrict the analysis to the 3510 genes and balance the data set by using three replicates of each metastasis patient in the training set .we use model ( [ eq : logit ] ) to select genes , where .we want to predict whether a tumor is metastatic ( ) or non - metastatic ( ) .we compare proximal ladmpsap with the active set method , which was adopted in , ladm , and ladmpsap . in ladmpsap and proximal ladmpsap , we both set and . for ladm, we try multiple choices of and choose the one that results in the fastest convergence . 
in ladm and ladmpsap, we terminate the inner loop by apg when the norm of gradient of the objective function of the subproblem is less than .the thresholds for terminating the outer loop are all chosen as and .for the three ladm based methods , we first solve to select genes .then we use the selected genes to re - train a traditional logistic regression model and use the model to predict the test samples . as in we partition the whole data set into three subsets to do the experiment three times .each time we select one subset as the test set and the other two as the training set ( i.e. , there are samples for training ) .it is worth mentioning that only kept the 300 genes that are the most correlated with the output in the pre - processing step .in contrast , we use all the 3510 genes in the training phase .table [ table - gene ] shows that proximal ladmpsap is more than ten times faster than the active set method used in , although it computes with a more than ten times larger training set .proximal ladmpsap is also much faster than ladm and ladmpsap due to the lack of inner loop to solve subproblems .the prediction error and the sparseness at the pathway level by proximal ladmpsap is also competitive with those of other methods in comparison .in this paper , we propose linearized alternating direction method with parallel splitting and adaptive penalty ( ladmpsap ) for efficiently solving linearly constrained multi - block separable convex programs , which are abundant in machine learning .ladmpsap fully utilizes the properties that the proximal operations of the component objective functions and the projections onto convex sets are easily solvable , which are usually satisfied by machine learning problems , making each of its iterations cheap .it is also highly parallel , making it appealing for parallel or distributed computing .numerical experiments testify to the advantages of ladmpsap over other possible first order methods .although ladmpsap is inherently parallel , when solving the proximal operations of component objective functions we will still face basic numerical algebraic computations . so for particular large scale machine learning problems , it will be interesting to integrate the existing distributed computing techniques ( e.g. , parallel incomplete cholesky factorization and caching factorization techniques ) with our ladmpsap in order to effectively address the scalability issues .z. lin is supported by nsfc ( nos .61272341 , 61231002 , 61121002 ) .r. liu is supported by nsfc ( no .61300086 ) , the china postdoctoral science foundation ( no .2013m530917 ) , the fundamental research funds for the central universities ( no .dut12rc(3)67 ) and the open project program of the state key lab of cad&cg ( no.a1404 ) , zhejiang university .z. lin also thanks xiaoming yuan , wotao yin , and edward chang for valuable discussions and htc for financial support .to prove this theorem , we first have the following lemmas and propositions . 
[lem : kkt]*(kkt condition ) * the kuhn - karush - tucker ( kkt ) condition of problem ( [ eq : model_problem_multivar ] ) is that there exists , such that where is the subgradient of .the first is the feasibility condition and the second is the duality condition .such is called a kkt point of problem ( [ eq : model_problem_multivar ] ) .[ lem : subgradients ] for generated by algorithm [ alg : ladmpsap - multivar ] , we have that where .this can be easily proved by checking the optimality conditions of ( [ eq : update_xi ] ) .[ lem : weak - monotonicity ] for generated by algorithm [ alg : ladmpsap - multivar ] and a kkt point of problem ( [ eq : model_problem_multivar ] ) , the following inequality holds : this can be deduced by the monotonicity of subgradient mapping .[ lem : basic - identity ] for generated by algorithm [ alg : ladmpsap - multivar ] and a kkt point of problem ( [ eq : model_problem_multivar ] ) , we have that this can be easily checked .first , we add ( [ eq : basic_id_line3 ] ) and ( [ eq : basic_id_line5 ] ) to have where we have used ( [ eq : kkt2 ] ) in ( [ eq : basic_id_proof_line3 ] ) .then we apply the identity to see that ( [ eq : basic_id_line1])-([eq : basic - identity ] ) holds .[ prop : basic - identity - multivar ] for generated by algorithm [ alg : ladmpsap - multivar ] and a kkt point of problem ( [ eq : model_problem_multivar ] ) , the following inequality holds : we continue from ( [ eq : basic_id_line5])-([eq : basic - identity ] ) . as ,we have plugging the above into ( [ eq : basic_id_line5])-([eq : basic - identity ] ) , we have ( [ eq : basic - identity - multivar - line1])-([eq : basic - identity - multivar ] ) .[ rem : different ] proposition [ prop : basic - identity - multivar ] shows that the sequence is fejr monotone . proposition [ prop : basic - identity - multivar ] is different from lemma 1 in supplementary material of because for we can not obtain an ( in)equality that is similar to lemma 1 in supplementary material of such that each term with minus sign could be made non - positive . such fejr monotone ( in)equalities are the corner stones for proving the convergence of lagrange multiplier based optimization algorithms .as a result , we can not prove the convergence of the naively generalized ladm for the multi - block case .then we have the following proposition .[ prop : converge_multivar ] let , .if is non - decreasing , , , is generated by algorithm [ alg : ladmpsap - multivar ] , and is any kkt point of problem ( [ eq : model_problem_multivar ] ) , then 1 . is nonnegative and non - increasing .2 . , , and .3 . , .we divide both sides of ( [ eq : basic - identity - multivar - line1])-([eq : basic - identity - multivar ] ) by to have then by ( [ eq : weak - monotonicity ] ) , and the non - decrement of , we can easily obtain 1 ) .second , we sum both sides of ( [ eq : basic - identity - multivar - line1])-([eq : basic - identity - multivar ] ) over to have then 2 ) and 3 ) can be easily deduced .now we are ready to prove theorem [ thm : converge_multivar ] .the proof resembles that in . *( of theorem [ thm : converge_multivar ] ) * by proposition [ prop : converge_multivar]-1 ) and the boundedness of , is bounded , hence has an accumulation point , say .we accomplish the proof in two steps .we first prove that is a kkt point of problem ( [ eq : model_problem_multivar ] ) .by proposition [ prop : converge_multivar]-2 ) , so any accumulation point of is a feasible solution .since , we have let . 
by observing proposition [ prop : converge_multivar]-2 ) andthe boundedness of , we have so we conclude that is an optimal solution to ( [ eq : model_problem_multivar ] ) .again by we have fixing and letting , we see that so , .thus is a kkt point of problem ( [ eq : model_problem_multivar ] ) .we next prove that the whole sequence converges to . by choosing in proposition [ prop :converge_multivar ] , we have by proposition [ prop : converge_multivar]-1 ) , we readily have so .as can be an arbitrary accumulation point of , we conclude that converge to a kkt point of problem ( [ eq : model_problem_multivar ] ) .we first have the following proposition .[ prop : unbounded_beta ] if is non - decreasing and unbounded , and is bounded for , then proposition [ prop : converge_multivar ] holds and as the conditions here are stricter than those in proposition [ prop : converge_multivar ] , proposition [ prop : converge_multivar ] holds .then we have that is bounded due to proposition [ prop : converge_multivar]-1 ) .so is bounded due to . is also bounded thanks to proposition [ prop : converge_multivar]-2 ) .we rewrite lemma [ lem : subgradients ] as then by the boundedness of , the unboundedness of and proposition [ prop : converge_multivar]-2 ) , letting , we have that where is any accumulation point of , which is the same as that of due to proposition [ prop : converge_multivar]-2 ) .recall that we have assumed that the mapping is onto .so .therefore by ( [ eq : subproblem_optimality2 ] ) , . based on proposition [ prop :unbounded_beta ] , we can prove theorem [ thm : convergence_unbounded ] as follows . * ( of theorem [ thm : convergence_unbounded ] ) * when is bounded , the convergence has been proven in theorem 1 . in the following , we only focus on the case that is unbounded . by proposition [ prop : converge_multivar]-1 ), is bounded , hence has at least one accumulation point . by proposition [ prop : converge_multivar]-2 ) , is a feasible solution .since and proposition [ prop : converge_multivar]-3 ) , there exists a subsequence such that as and is bounded , we may assume that it can be easily proven that then letting in ( [ eq : subsequence_to_zero ] ) , we have then by , letting and making use of ( [ eq : equal_zero ] ) , we have so together with the feasibility of we have that converges to an optimal solution to ( [ eq : model_problem_multivar ] ) . finally , we set and be the corresponding lagrange multiplier in proposition [ prop : converge_multivar ] . by proposition[ prop : unbounded_beta ] , we have that by proposition [ prop : converge_multivar]-1 ) , we readily have so .* ( of theorem [ thm : convergence_unbounded_necessary ] ) * we first prove that there exist linear mappings , , such that s are not all zeros and .indeed , is equivalent to where and are the matrix representations of and , respectively .( [ eq : linear_mappings_equiv ] ) can be further written as recall that we have assumed that the solution to is non - unique .so is not full column rank hence ( [ eq : linear_mappings_equiv ] ) has nonzero solutions .thus there exist s such that they are not all zeros and . 
by lemma [ lem : subgradients ] , as is bounded , , so is where and in ( [ eq : cancel_lambda ] ) we have utilized to cancel , whose boundedness is uncertain .then we have that there exists a constant such that if , then is a cauchy sequence , hence has a limit .define where is any optimal solution .then so if is initialized badly such that then , which implies that can not converge to .note that ( [ eq : bad_init ] ) is possible because is not a zero mapping given the conditions on .* ( of proposition [ prop : optimality ] ) * if is optimal , it is easy to check that holds . since , we have so if ( [ eq : constrained_optimality ] ) holds , we have with ( [ eq : constrained_optimality2 ] ) , we have so ( [ eq : constrained_optimality1 ] ) reduces to . as satisfies the feasibility condition , it is an optimal solution to ( [ eq : model_problem_multivar ] ) . *( of theorem [ thm : convergence_rate_2var ] ) * we first deduce by proposition [ prop : basic - identity - multivar ] , we have so by lemma [ lem : subgradients ] and combining the above inequalities , we have here we use the fact that , which is guaranteed by and .summing the above inequalities from to , and dividing both sides with , we have next , by the convexity of and the squared frobenius norm , we have combining ( [ eq : recursive4-line1])-([eq : recursive4 ] ) and ( [ eq : recursive5-line1])-([eq : recursive5 ] ) , we have only need to prove the following proposition . then by the same technique for proving theorem [ thm : converge_multivar ] , we can prove theorem [ thm : better_eta ] . [prop : basic - identity - multivar - equiv ] for generated by algorithm [ alg : ladmpsap - multivar_equiv ] and a kkt point of problem ( [ eq : model_problem_multivar_equiv ] ) , we have that we continue from ( [ eq : basic - identity - multivar - line7 ] ) : then we can have ( [ eq : basic - identity - multivar - equiv - line1])-([eq : basic - identity - multivar - equiv ] ) .to prove theorem [ thm : general_convergence_unbounded ] , we need the following proposition : [ prop : inequ_para ] for generated by algorithm [ alg : general - ladmpsap - multivar ] and a kkt point of problem ( [ eq : model_problem_multivar ] ) with described in section [ sec : g - ladmpsap ] , we have that it can be observed that so we have and on the one hand , \notag\\ & & -\frac{1}{2\beta_k}\left(\|\lambda^{k+1}-\lambda\|^2-\|\lambda^{k}-\lambda\|^2+\|\hat{\lambda}^{k}-\lambda^k\|^2-\|\lambda^{k+1}-\hat{\lambda}^k\|^2\right)\notag\\ & = & \sum\limits_{i=1}^n \left[\frac{\tau_i^{(k)}}{2}\left(\|{\mathbf{x}}_i^k-{\mathbf{x}}_i\|^2-\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i\|^2-\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k\|^2\right)+\frac{l_i}{2}\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k\|^2\right]\notag\\ & & -\frac{1}{2\beta_k}\left(\|\lambda^{k+1}-\lambda\|^2-\|\lambda^{k}-\lambda\|^2+\|\hat{\lambda}^{k}-\lambda^k\|^2-\beta_k^2{\left\|}\sum\limits_{i=1}^n{\mathcal{a}}_i({\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k){\right\|}^2\right)\notag\\ & \leq&\frac{1}{2}\sum\limits_{i=1}^n \tau_i^{(k)}\left(\|{\mathbf{x}}_i^{k}-{\mathbf{x}}_i\|^2-\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i\|^2\right)-\frac{1}{2}\sum\limits_{i=1}^n\left(\tau_i^{(k ) } -l_i - n\beta_k\|{\mathcal{a}}_i\|^2\right)\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k\|^2\notag\\ & & + \frac{1}{2\beta_k}\left(\|\lambda^{k}-\lambda\|^2-\|\lambda^{k+1}-\lambda\|^2-\|\hat\lambda^{k}-\lambda^k\|^2\right ). 
\label{pop1_line_end}\end{aligned}\ ] ] on the other hand , so we have let and , we have +\frac{1}{2\beta_k}\left(\|\lambda^{k}-\lambda^*\|^2-\|\lambda^{k+1}-\lambda^*\|^2\right)\notag\\ & & -\frac{1}{2}\sum\limits_{i=1}^n\left(\tau_i^{(k)}-l_i - n\beta_k\|{\mathcal{a}}_i\|^2\right)\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k\|^2-\frac{1}{2\beta_k}\|\hat\lambda^{k}-\lambda^k\|^2.\notag\end{aligned}\ ] ] * ( of theorem [ thm : general_convergence_unbounded ] ) * as minimizes , we have by proposition [ prop : inequ_para ] , we have dividing both sides by and using , the non - decrement of and the non - increment of , we have it can be easily seen that is bounded , hence has an accumulation point , say . summing - over , we have so and as .hence , which means that is a feasible solution . from ( [ pop1_line1])-([pop1_line_end ] ), we have let . by the boundedness of we have together with the feasibility of , we can see that is a kkt point . by choosing have using ( [ the_line1])-([the_line2 ] ) , we have so .* ( of theorem [ thm : general_convergence_unbounded_rate ] ) * by the definition of and , \hspace*{2cm}\\ & \geq&{\frac{\displaystyle \beta_k}{\displaystyle 2}}\left[\sum\limits_{i=1}^n\left(\eta_i - n\|{\mathcal{a}}_i\|^2\right)\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k\|^2+\frac{1}{\beta_k^2}\|\hat\lambda^{k}-\lambda^k\|^2\right]\notag\\ & \geq&{\frac{\displaystyle \alpha\beta_k}{\displaystyle 2}}(n+1)\left(\sum\limits_{i=1}^n\|{\mathcal{a}}_i\|^2\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k\|^2+\frac{1}{\beta_k^2}\|\hat\lambda^{k}-\lambda^k\|^2\right)\notag\\ & \geq&{\frac{\displaystyle \alpha\beta_k}{\displaystyle 2}}(n+1)\left(\sum\limits_{i=1}^n\|{\mathcal{a}}_i({\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k)\|^2+\frac{1}{\beta_k^2}\|\hat\lambda^{k}-\lambda^k\|^2\right)\notag\\ & = & { \frac{\displaystyle \alpha\beta_k}{\displaystyle 2}}(n+1)\left(\sum\limits_{i=1}^n\|{\mathcal{a}}_i({\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^k)\|^2+{\left\|}\sum\limits_{i=1}^n{\mathcal{a}}_i({\mathbf{x}}_i^k)-{\mathbf{b}}{\right\|}^2\right)\notag\\ & \geq&{\frac{\displaystyle \alpha\beta_k}{\displaystyle 2}}{\left\|}\sum\limits_{i=1}^n{\mathcal{a}}_i({\mathbf{x}}_i^{k+1})-{\mathbf{b}}{\right\|}^2.\end{aligned}\ ] ] so by ( [ line_1])-([line_end ] ) and the non - decrement of , we have dividing both sides by and using the non - decrement of and the non - increment of , we have \hspace*{1cm}\\ & \leq & \frac{1}{2}\sum\limits_{i=1}^n \beta_k^{-1}\tau_i^{(k)}\left(\|{\mathbf{x}}_i^{k}-{\mathbf{x}}_i^*\|^2-\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^*\|^2\right)+\frac{1}{2\beta_k^2}\left(\|\lambda^{k}-\lambda^*\|^2-\|\lambda^{k+1}-\lambda^*\|^2\right)\notag\\ & \leq&\frac{1}{2}\sum\limits_{i=1}^n \left(\beta_k^{-1}\tau_i^{(k)}\|{\mathbf{x}}_i^{k}-{\mathbf{x}}_i^*\|^2-\beta_{k+1}^{-1}\tau_i^{(k+1)}\|{\mathbf{x}}_i^{k+1}-{\mathbf{x}}_i^*\|^2\right)\notag\\ & & + \left(\frac{1}{2\beta_k^2}\|\lambda^{k}-\lambda^*\|^2-\frac{1}{2\beta_{k+1}^2}\|\lambda^{k+1}-\lambda^*\|^2\right).\end{aligned}\ ] ] summing over and dividing both sides by , we have using the convexity of and , we have so we have boyd s , parikh n , chu e , peleato b , eckstein j ( 2011 ) distributed optimization and statistical learning via the alternating direction method of multipliers . 
in : jordan m ( ed ) foundations and trends in machine learning .

subramanian a , tamayo p , mootha v , mukherjee s , et al ( 2005 ) gene set enrichment analysis : a knowledge - based approach for interpreting genome - wide expression profiles . proceedings of the national academy of sciences 102(43):267288
many problems in machine learning and other fields can be (re)formulated as linearly constrained separable convex programs . in most of the cases , there are multiple blocks of variables . however , the traditional alternating direction method ( adm ) and its linearized version ( ladm , obtained by linearizing the quadratic penalty term ) are for the two - block case and can not be naively generalized to solve the multi - block case . so there is great demand for extending the adm based methods to the multi - block case . in this paper , we propose ladm with parallel splitting and adaptive penalty ( ladmpsap ) to solve multi - block separable convex programs efficiently . when all the component objective functions have bounded subgradients , we obtain convergence results that are stronger than those of adm and ladm , e.g. , allowing the penalty parameter to be _ unbounded _ and proving the _ sufficient and necessary conditions _ for global convergence . we further propose a simple optimality measure and reveal the convergence rate of ladmpsap in an ergodic sense . for programs with extra convex set constraints , with refined parameter estimation we devise a practical version of ladmpsap for faster convergence . finally , we generalize ladmpsap to handle programs with more difficult objective functions by linearizing part of the objective function as well . ladmpsap is particularly suitable for sparse representation and low - rank recovery problems because its subproblems have closed form solutions and the sparsity and low - rankness of the iterates can be preserved during the iteration . it is also highly parallelizable and hence well suited for parallel or distributed computing . numerical experiments testify to the advantages of ladmpsap in speed and numerical accuracy .
ebola is a lethal virus for humans that is currently under strong research due to the recent outbreak in west africa and its socioeconomic impact ( see , e.g. , and references therein ) .world health organization ( who ) has declared ebola virus disease epidemic as a public health emergency of international concern with severe global economic burden . at fatal ebola infection stage ,patients usually die before the antibody response . mainly after the 2014 ebola outbreak in west africa, some attempts to obtain a vaccine for ebola disease have been realized .according to the who , results in july 2015 from an interim analysis of the guinea phase iii efficacy vaccine trial show that vsv - ebov ( merck , sharp & dohme ) is highly effective against ebola .since 2014 , different mathematical models to analyze the spread of the 2014 ebola outbreak have been presented ( see , e.g. , and references therein ) . in these modelsthe populations under study are divided into compartments , and the rates of transfer between compartments are expressed mathematically as derivatives with respect to time of the size of the compartments . in a recent work , a system of eight nonlinear ( fractional ) differential equations for a population divided into eight mutually exclusive groups was considered : susceptible , exposed , infected , hospitalized , asymptomatic but still infectious , dead but not buried , died , and completely recovered . by comparing the numerical results of this mathematical model and the real data provided by who , the difference in the period of 438 days analyzed is about 7 cases per day . note that in the day 438 after the beginning of the outbreak , the number of confirmed cases is 15018 .there exist different models for the spreading of ebola , beginning with the simplest sir and seir models and later more complex but also more realistic models have been considered . in , a stochastic discrete - time susceptible - exposed - infectious - recovered ( seir ) model for infectious diseases is developed with the aim of estimating parameters from daily incidence and mortality time series for an outbreak of ebola in the democratic republic of congo in 1995 . in ,the authors use data from two epidemics ( in democratic republic of congo in 1995 and in uganda in 2000 ) and built a seihfr ( susceptible - exposed - infectious - hospitalized - f(dead but not yet buried)-removed ) mathematical model for the spread of ebola haemorrhagic fever epidemics taking into account transmission in different epidemiological settings ( in the community , in the hospital , during burial ceremonies ) . in ,the authors propose a sird ( susceptible - infectious - recovered - dead ) mathematical model using classical and beta derivatives . in this model, the class of susceptible individuals does not consider new born or immigration .the study shows that , for small portion of infected individuals , the whole country could die out in a very short period of time in case there is no good prevention . in ,a fractional order seir ebola epidemic model is proposed and the authors show that the model gives a good approximation to real data published by who , starting from march 27th , 2014 .optimal control is a mathematical theory that emerged after the second world war with the formulation of the celebrated pontryagin maximum principle , responding to practical needs of engineering , particularly in the field of aeronautics and flight dynamics . 
in the last decade ,optimal control has been largely applied to biomedicine , namely to models of cancer chemotherapy ( see , e.g. , ) , and recently to epidemiological models . in , the authors present a comparison between sir and seir mathematical models used in the description of the ebola virus propagation .they applied optimal control techniques in order to understand how the spread of the virus may be controlled , e.g. , through education campaigns , immunization or isolation . in ,the authors introduce a deterministic seir type model with additional hospitalization , quarantine and vaccination components in order to understand the disease dynamics .optimal control strategies , both in the case of hospitalization ( with and without quarantine ) and vaccination , are used to predict the possible future outcome in terms of resource utilization for disease control and the effectiveness of vaccination on sick populations . both in and , the authors study optimal control problems with cost functionals without any state or control constraints . here ,we modify the model analyzed in in order to consider optimal control problems with vaccination constraints .more precisely , we introduce an extra variable for the number of vaccines used , and we compare the hypothetical results if the vaccine were available at the beginning of the outbreak with the results of the model without vaccines .firstly , we consider an optimal control problem with an end - point state constraint , that is , the total number of available vaccines , in a fixed period of time , is limited .secondly , we analyze an optimal control problem with a mixed state constraint , in which there is a limited supply of vaccines at each instant of time for a fixed interval of time .both optimal control problems have been analytically solved .moreover , we have performed a number of numerical simulations in three different scenarios : unlimited supply of vaccines ; limited total number of vaccines to be used ; and limited supply of vaccines at each instant of time . from the results obtained in the first two cases ,when there is no limit in the supply of vaccines or when the total number of vaccines used is limited , the optimal vaccination strategy implies a vaccination of 100% of the susceptible population in a very short period of time ( smaller than one day ) . in practice , this is a very difficult task because limitations in the number of vaccines and also in the number of humanitarian and medical teams in the affected regions are common . in this direction ,the third analyzed case is extremely important since we consider a limited supply of vaccines at each instant of time .the paper is organized as follows . in section [ sec:2 ], we recall a mathematical model for ebola virus . in section [ sec:3 ] ,the introduction of effective vaccination for ebola virus is modeled .an optimal control problem with an end - point state constraint is formulated and solved analytically in section [ sec:4 ] , which models the case where the total number of available vaccines in a fixed period of time is limited . in section [ sec:5 ] , the limited supply of vaccines at each instant of time for a fixed interval of time is mathematically translated into an optimal control problem with a mixed state control constraint . a closed form of the unique optimal control is given . 
in section [ sec:6 ] , we solve numerically the optimal control problems proposed in sections [ sec:4 ] and [ sec:5 ] .finally , we end with section [ sec:7 ] of discussion of the results .the total population under study is subdivided into eight mutually exclusive groups : susceptible ( ) , exposed ( ) , infected ( ) , hospitalized ( ) , asymptomatic but still infectious ( ) , dead but not buried ( ) , buried ( ) , and completely recovered ( ) .this model is adapted from and analyzed in , where the birth and death rate are assumed to be equal and are denoted by , and the contact rate of susceptible individuals with infective , dead , hospitalized and asymptomatic individuals are denoted by , , and , respectively . exposed individuals become infectious at a rate .the per capita rate of progression of individuals from the infectious class to the asymptomatic and hospitalized classes are denoted by and , respectively .individuals in the dead class progress to the buried class at a rate .hospitalized individuals progress to the buried class and to the asymptomatic class at rates and , respectively .asymptomatic individuals become completely recovered at a rate .infectious individuals progress to the dead class at a fatality rate .dead and buried bodies are incinerated at a rate .we assume that the total population , , is constant , that is , the birth and death rates , both denoted by , are equal to the incineration rate .the model is mathematically described by the following system of eight nonlinear ordinary differential equations : \displaystyle \frac{de}{dt } = \frac{\beta_i}{n } s i + \frac{\beta_h}{n } s h + \frac{\beta_d}{n } s d + \frac{\beta_r}{n } s r - \sigma e - \mu e,\\[0.2 cm ] \displaystyle \frac{di}{dt } = \sigma e - ( \gamma_1 + \epsilon + \tau + \mu)i,\\[0.2 cm ] \displaystyle \frac{dr}{dt } = \gamma_1 i + \gamma_2 h - ( \gamma_3 + \mu ) r,\\[0.2 cm ] \displaystyle \frac{dd}{dt } = \epsilon i - \delta_1 d - \xi d,\\[0.2 cm ] \displaystyle \frac{dh}{dt } = \tau i - ( \gamma_2 + \delta_2 + \mu ) h,\\[0.2 cm ] \displaystyle \frac{db}{dt } = \delta_1 d + \delta_2 h - \xi b \\[0.2 cm ] \displaystyle \frac{dc}{dt } = \gamma_3 r - \mu c. \end{cases}\ ] ] in fig . [ figure:1 ] , we give a flowchart presentation of model . in this flowchart, we identify the compartmental classes as well as the parameters appearing in the model .moreover , the values of the parameters are given in table [ table : parameters ] . the basic reproduction number( that is , the number of cases one case generates on average over the course of its infectious period , in an otherwise uninfected population ) of model can be computed using the associated next - generation matrix method .it is obtained as the spectral radius of the following matrix , known as the next - generation - matrix : where with therefore , the basic reproduction number is given by .\end{gathered}\ ] ] as it is well - known , if the basic reproduction number , then the infection will stop in the long run ; but if , then the infection will spread in population .-0.5in0 in .parameter values for model , corresponding to a basic reproduction number .the values of the parameters come from . [ cols= " <, < , < " , ] in this section , we have recalled a model for describing the ebola virus transmission . 
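a direct numerical simulation of the eight - compartment system above can be sketched as follows . note that the susceptible equation is not legible in the extracted text , so it is written here in the standard form ( recruitment minus the four infection terms minus natural death ) , and all parameter values and initial conditions are illustrative placeholders rather than the calibrated values of table [ table : parameters ] .

```python
# Simulation sketch of the eight-compartment Ebola model recalled above.
# The susceptible equation is assumed to take the standard form (recruitment
# minus the four infection terms minus natural death), and every parameter
# value and initial condition below is an illustrative placeholder, not the
# calibrated value reported in the paper's parameter table.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(mu=1/(60*365), beta_i=0.14, beta_h=0.29, beta_d=0.40, beta_r=0.10,
         sigma=1/11.4, gamma1=1/10, gamma2=1/5, gamma3=1/30,
         eps=1/9.6, tau=1/5, delta1=1/2, delta2=1/4.6,
         xi=1/(60*365))                    # xi = mu so the total population stays constant

def ebola_rhs(t, x, p):
    # R is the asymptomatic-but-still-infectious class; C the completely recovered one
    S, E, I, R, D, H, B, C = x
    N = x.sum()
    infection = (p['beta_i']*I + p['beta_h']*H + p['beta_d']*D + p['beta_r']*R) * S / N
    dS = p['mu']*N - infection - p['mu']*S              # assumed standard form
    dE = infection - (p['sigma'] + p['mu'])*E
    dI = p['sigma']*E - (p['gamma1'] + p['eps'] + p['tau'] + p['mu'])*I
    dR = p['gamma1']*I + p['gamma2']*H - (p['gamma3'] + p['mu'])*R
    dD = p['eps']*I - (p['delta1'] + p['xi'])*D
    dH = p['tau']*I - (p['gamma2'] + p['delta2'] + p['mu'])*H
    dB = p['delta1']*D + p['delta2']*H - p['xi']*B
    dC = p['gamma3']*R - p['mu']*C
    return [dS, dE, dI, dR, dD, dH, dB, dC]

N0 = 1e6
x0 = [N0 - 15, 0, 15, 0, 0, 0, 0, 0]       # a few initial infectious cases
sol = solve_ivp(ebola_rhs, (0, 438), x0, args=(p,), dense_output=True)
print("infectious at final time:", sol.y[2, -1])
```

with the birth / death rate equal to the incineration rate , the right - hand side above conserves the total population , as required by the model .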
now we want to address the question about how to introduce vaccination as a prevention measure .this is analyzed in the next section .we now introduce vaccination of the susceptible population with the aim of controlling the spread of the disease .we assume that the vaccine is effective so that all vaccinated susceptible individuals become completely recovered ( see , e.g. , for vaccination in a seir model that corresponds to a system of four nonlinear ordinary differential equations ) .let us introduce in model a control function , which represents the percentage of susceptible individuals being vaccinated at each instant of time with ] and on the number of available vaccines at each instant of time with ] that minimizes the cost functional \ , dt,\ ] ] where the constants and represent the weights associated with the number of infected individuals and on the cost associated with the vaccination program , respectively .we assume that the control function takes values between 0 and 1 . when , no susceptible individual is vaccinated at time ; if , then all susceptible individuals are vaccinated at .let denote the total amount of available vaccines in a fixed period of time ] , , called the _ adjoint vector _ , such that where the hamiltonian is defined by with with , and where and with the null matrix .the minimization condition holds almost everywhere on ] , the optimal control must satisfy the optimal control given by is unique due to the boundedness of the state and adjoint functions and the lipschitz property of systems and .we would like to note that if we consider the optimal control problem without any restriction on the number of available vaccines , that is , to find the optimal solution , with , which minimizes the cost functional subject to the control system , initial conditions , and free final conditions , then the adjoint functions must satisfy transversality conditions , , and , since , the optimal control is given by in a concrete situation , the number of available vaccines is always limited .therefore , it is also important to study the optimal control problem with such kind of constraints. this is done in section [ sec:5 ] . both problems , with and without constraints ,are numerically solved in section [ sec:6 ] .a particularly challenging situation in vaccination programs happens when there is a limited supply of vaccines at each instant of time for a fixed interval of time ] , and ; { \mathbb{r}}) ] ( nontriviality condition ) ; * ( adjoint system ) ; * }(\tilde{u}(t)) ] stands for the normal cone from convex analysis to ] . at the end of the 90 days period ,the total number of individuals who got active infection is approximately and individuals in the case and , respectively . 
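before turning to the simulations, the sketch below illustrates one simple numerical treatment of the end-point constraint: the control is taken piecewise constant, the cumulative number of vaccines W is carried as an extra state with dW/dt = u S, and the terminal condition W(T) <= W_max is enforced through a penalty. to keep it short, a simplified SIR-type stand-in replaces the full model, and all numbers (weights, bound, horizon) are hypothetical; this only illustrates the role of the extra state, and is not the analytical solution via the maximum principle described above.

```python
# sketch of the end-point-constrained problem on a simplified SIR stand-in:
# u is the vaccination rate, W the cumulative number of vaccines used, and
# the constraint W(T) <= W_max is enforced with a penalty term. all numbers
# below are hypothetical.
import numpy as np
from scipy.optimize import minimize

beta, gamma, A, W_max, N = 0.3, 0.1, 20.0, 300.0, 1000.0
T, n = 90.0, 180                      # horizon and number of control intervals
dt = T / n

def simulate(u):
    S, I, W = N - 1.0, 1.0, 0.0
    cost = 0.0
    for k in range(n):
        uk = u[k]
        cost += (A * I + uk ** 2) * dt
        dS = -beta * S * I / N - uk * S
        dI = beta * S * I / N - gamma * I
        dW = uk * S
        S, I, W = S + dt * dS, I + dt * dI, W + dt * dW
    return cost, W

def objective(u):
    cost, W = simulate(u)
    return cost + 1e4 * max(0.0, W - W_max) ** 2   # penalize exceeding W_max

u0 = np.full(n, 0.05)
res = minimize(objective, u0, bounds=[(0.0, 1.0)] * n, method="L-BFGS-B",
               options={"maxiter": 200})
cost_opt, W_used = simulate(res.x)
print("approximate cost:", cost_opt, "vaccines used:", W_used)
```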
, , and .the dashed line represents the case where and the continuous line represents the case where .,scaledwidth=90.0% ] , , and .the dashed line represents the case where and the continuous line represents the case where .,scaledwidth=90.0% ] in the case , the optimal control takes the maximum value for less than one day ( approximately 0.72 days ) with a cost equal to , and in the case , the optimal control takes the maximum value for approximately 2.2 days with a cost equal to .the cost associated to the case is lower than the one in the case , although more individuals are vaccinated , since the number of individuals in the class is lower .namely , in the case , the number of individuals with active infection at the end of 90 days is equal to and in the case the respective number is equal to .this means that in the case , in a epidemiological scenario corresponding to a basic reproduction number greater than one , 10000 vaccines will not be enough to eradicate the disease .additionally , if we consider the maximum value for the total number of vaccines used during the period of 90 days to be equal to , , , and , then we observe that the optimal control remains more time at the maximum value when the supply of vaccines is bigger , which means that when the total number of available vaccines is increased there will be resources to vaccinate all susceptible individuals for a longer period of time , which implies a bigger reduction of the number of individuals who get infected by the virus ( see fig .[ control : vac : variar ] and [ control : vac : variar : zoom ] for the optimal control strategy and respective zoom in the period of vaccination ) .consider now the case where the weight constant associated with the cost of implementation of the vaccination strategy , designated by the optimal control , is bigger than one , for example , consider and , and and .to simplify , consider in both cases and .when we increase the weight constant , the maximum value attained by the optimal control becomes lower than one ( see fig . [fig : control : vacc : limfinal : a50a500 ] ) . in the case for , the optimal control starts with the value and is a decreasing function with a cost function . at the end of approximately 3.7 days ,the control remains equal to zero . for ,the optimal control starts with the value and is also a decreasing function , with a cost . at the end of 13.5 days, it remains equal to zero .the behavior of the optimal state variables , , , , , , and are similar . from previous results , we observe that when there is no limit on the supply of vaccines , or when the total number of used vaccines is limited , the optimal vaccination strategy implies a vaccination of 100 per cent of the susceptible population in a very short period of time , sometimes smaller than one day .but we know that in practice this is a very difficult task , since there are limitations in the number of vaccines available and also in the number of health care workers or humanitarian teams in the regions affected by ebola virus with capacity to vaccinate such a big number of individuals almost simultaneously . from this point of view , it is important to study the case where there is a limited supply of vaccines at each instant of time . 
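reading the constraint of a limited supply of vaccines at each instant of time as u(t) S(t) <= v_max, feasibility amounts to a state-dependent upper bound on the control, u(t) <= min(1, v_max / S(t)). a small sketch of this projection, with hypothetical numbers, is given below.

```python
# small sketch of how the pointwise constraint u(t)*S(t) <= v_max (limited
# vaccine supply at each instant of time) can be imposed in a simulation:
# the vaccination rate is clipped so that the number of vaccines used per
# unit time never exceeds the supply. the numbers below are hypothetical.
def feasible_rate(u_candidate: float, S: float, v_max: float) -> float:
    """project a candidate vaccination rate onto [0, min(1, v_max / S)]."""
    if S <= 0.0:
        return 0.0
    upper = min(1.0, v_max / S)
    return max(0.0, min(u_candidate, upper))

# example: with 900 vaccines per day and 20000 susceptibles, at most
# 4.5% of the susceptible class can be vaccinated on that day
print(feasible_rate(1.0, 20000.0, 900.0))   # -> 0.045
```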
in this section ,we consider , a shorter interval of time ] , is equal to .such solution is the less costly of the three considered , followed by the constraint for ] , with a cost of .the strategy associated with the constraint is the one where a lowest number of susceptible individuals completely recover through vaccination , with individuals in the class at the end of 10 days .if we consider that at each instant of time there are 1200 vaccines available during a period of 15 days , then completely recover .this is the strategy with more individuals in the class .if we consider 16 days , but only 900 vaccines available for each instant of time , then only individuals completely recover ( see fig . [ cmixed ] ) .for all three mixed constraint situations , the number of individuals in the classes , , , , and does not change significantly ( therefore , the figures with these classes are omitted ) .as the number of available vaccines represent a small percentage of the susceptible population , in the three cases the optimal vaccination strategies for the constraints and suggest that the percentage of the susceptible population that is vaccinated is always inferior than 18 percent . in the case of the constraint ,this percentage is always inferior to 8 percent ( see fig .[ controlmixed ] ) .we assume that , in a near future , an effective vaccine against the ebola virus will be available . under this assumption ,three different scenarios have been studied : unlimited supply of vaccines ; limited total number of vaccines to be used ; and limited supply of vaccines at each instant of time .we have solved the optimal control problems analytically and we have performed a number of numerical simulations in the three aforementioned vaccination scenarios . some authors have already considered the optimal control problem with vaccination for ebola disease , but always with unlimited supply of vaccines .it turns out that the solution to this mathematical problem is obvious : the solution consists to vaccinate all susceptible individuals in the beginning of the outbreak .this is a very particular case of our work , investigated in section [ sec:6.1 ] ( see figure [ fig : control : vacc : nolim ] ) .if vaccines are available without any restriction , then one could completely eradicate ebola in a very short period of time .these results show the importance of an effective vaccine for ebola virus and the very good results that can be attained if the number of available vaccines satisfy the needs of the population .unfortunately , such situation is not realistic : in case an effective vaccine for ebola virus will appear , there always will be restrictions on the number of available vaccines as well as constraints on how to inoculate them in a proper way and in a short period of time ; economic problems might also exist . in our work , for first time in the literature of ebola , an optimal control problem with state and control constraints has been considered .mathematically , it represents a health public problem of limited total number of vaccines .the results obtained in section [ sec:6.2 ] provide useful information on the number of vaccines to be bought , in order to reduce the number of new infections with minimum cost .for example , the results between 10000 and 20000 vaccines ( in 90 days ) are completely different . 
with 10000 vaccines, the number of cumulative infected cases continues to increase , while with 20000 vaccines it is already possible to decrease the new infections .the optimal solution , in this case , is similar to the case of unlimited supply of vaccines , that is , it implies a vaccination of 100 per cent of the susceptible population in a very short period of time . in practice, this is an unrealistic task , due to the necessary number of vaccines and humanitarian teams in the regions affected by ebola .therefore , we conclude that it is important to study the case where there is a limited supply of vaccines at each instant of time .this was investigated in section [ sec:6.3 ] .this situation is much richer and the optimal control solution is not obvious . for a given number of available vaccines at each instant of time, we have a different solution , which is the optimal rate of susceptible individuals that should be vaccinated . in this case, the optimal control implies the vaccination of a small subset of the susceptible population .it remains the ethical problem of how to choose the individuals to be vaccinated .three reviewers deserve special thanks for helpful and constructive comments .the work of area was partially supported by the ministerio de economa y competitividad of spain , under grants mtm201238794c0201 and mtm201675140p , co - financed by the european community fund feder .ndarou acknowledges the aims - cameroon 20142015 fellowship , nieto the partial financial support by the ministerio de economa y competitividad of spain under grants mtm201015314 , mtm201343014p and mtm201675140p , and xunta de galicia under grants r2014/002 and grc 2015/004 , co - financed by the european community fund feder .silva was supported through the portuguese foundation for science and technology ( fct ) post - doc fellowship sfrh / bpd/72061/2010 .the work of silva and torres was partially supported by fct through cidma and project uid / mat/04106/2013 , and by project toccata , reference ptdc / eei - aut/2933/2014 , funded by project 3599 promover a produo cientfica e desenvolvimento tecnolgico e a constituio de redes temticas ( 3599-ppcdt ) and feder funds through compete 2020 , programa operacional competitividade e internacionalizao ( poci ) , and by national funds through fct .( mr3394468 ) [ 10.1186/s13662 - 015 - 0613 - 5 ] i. area , h. batarfi , j. losada , j. j. nieto , w. shammakh and a. torres , on a fractional order ebola epidemic model , _ adv .difference equ . _* 2015 * ( 2015 ) , art .i d 278 , 12 pp .[ 10.1155/2014/261383 ] a. atangana and e. f. doungmo goufo , on the mathematical analysis of ebola hemorrhagic fever : deathly infection disease in west african countries , _ biomed research international _ * 2014 * ( 2014 ) , art .i d 261383 , 7 pp .( mr3181992 ) [ 10.3934/mbe.2014.11.761 ] m. h. a. biswas , l. t. paiva and m. r. de pinho , a seir model for control of infectious diseases with constraints , _ math .* 11 * ( 2014 ) , no . 4 , 761784 .( mr2082775 ) [ 10.1016/j.jtbi.2004.03.006 ] g. chowell , n. w. hengartner , c. castillo - chavez , p. w. fenimore and j. m. hyman , the basic reproductive number of ebola and the effects of public health measures : the cases of congo and uganda , * 229 * ( 2004 ) , no . 1 , 119126 .( mr1057044 ) [ 10.1007/bf00178324 ] o. diekmann , j. a. p. heesterbeek and j. a. j. metz , on the definition and the computation of the basic reproduction ratio in models for infectious diseases in heterogeneous populations , _ j. 
math ._ * 28 * ( 1990 ) , no . 4 , 365382 .[ 10.1371/journal.pcbi.1004200 ] s. duwal , s. winkelmann , c. schtte and m. von kleist , optimal treatment strategies in the context of treatment for prevention against hiv-1 in resource - poor settings , _ plos comput . biol . _* 11 * ( 2015 ) , no . 4 , art .i d e1004200 , 30 pp .[ 10.1126/science.1259657 ] s. k. gire , a. goba , k. g. andersen , r. s. g. sealfon , d. j. park , l. kanneh , et al ., genomic surveillance elucidates ebola virus origin and transmission during the 2014 outbreak , _ science _ ,* 345 * ( 2014 ) , no . 6202 , 13691372 .( mr3578107 ) [ 10.1080/15502287.2016.1231236 ] d. hincapie - palacio , j. ospina and d. f. m. torres , approximated analytical solution to an ebola optimal control problem , _ int . j. comput .methods eng .* 17 * ( 2016 ) , no . 5 - 6 , 382390 .[ 10.1016/j.bios.2015.08.040 ] a. kaushik , s. tiwari , r. d. jayant , a. marty and m. nair , towards detection and diagnosis of ebola virus disease at point - of - care , _ biosensors and bioelectronics _ * 75 * ( 2016 ) , 254272 .[ 10.1111/j.1541 - 0420.2006.00609.x ] p. e. lekone and b. f. finkenstdt , statistical inference in a stochastic epidemic seir model with control intervention : ebola as a case study , _ biometrics _ * 62 * ( 2006 ) , no . 4 , 11701177 . [ 10.1371/journal.pone.0022309 ] n. k. martin , a. b. pitcher , p. vickerman , a. vassall and m. hickman , optimal control of hepatitis c antiviral treatment programme delivery for prevention amongst a population of injecting drug users , _ plos one _ * 6 * ( 2011 ) , no . 8 , e22309 , 17 pp .( mr2744727 ) r. miller neilan and s. lenhart , _ an introduction to optimal control with an application in disease modeling_. in : modeling paradigms and analysis of disease transmission models , vol . 75 of dimacs ser. discrete math .soc . , providence , ri , 2010 , 6781 .( mr3538903 ) [ 10.1155/2016/9352725 ] g. a. ngwa and m. i. teboh - ewungkem , a mathematical model with quarantine states for the dynamics of ebola virus disease in human populations , ( 2016 ) , art .i d 9352725 , 29 pp .( mr0166037 ) l. s. pontryagin , v. g. boltyanskii , r. v. gamkrelidze and e. f. mishchenko , _ the mathematical theory of optimal processes _ , interscience publishers john wiley & sons , inc .new york - london , 1962 .( mr3349757 ) [ 10.1155/2015/842792 ] a. rachah and d. f. m. torres , mathematical modelling , simulation , and optimal control of the 2014 ebola outbreak in west africa , _ discrete dyn .* 2015 * ( 2015 ) , art .i d 842792 , 9 pp .[ 10.1371/currents.outbreaks.fd38dd85078565450b0be3fcd78f5ccf ] c. m. rivers , e. t. lofgren , m. marathe , s. eubank and b. l. lewis , modeling the impact of interventions on an epidemic of ebola in sierra leone and liberia , technical report , _ plos currents outbreaks _ , 2014 .[ 10.1086/514318 ] a. k. rowe et al ., clinical , virologic , and immunologic follow - up of convalescent ebola hemorrhagic fever patients and their household contacts , kikwit , democratic republic of the congo , ( 1999 ) , suppl . 1 ,
the ebola virus disease is a severe viral haemorrhagic fever syndrome caused by ebola virus. this disease is transmitted by direct contact with the body fluids of an infected person and objects contaminated with the virus or infected animals, with a death rate close to 90% in humans. recently, some mathematical models have been presented to analyse the spread of the 2014 ebola outbreak in west africa. in this paper, we introduce vaccination of the susceptible population with the aim of controlling the spread of the disease and analyse two optimal control problems related to the transmission of ebola disease with vaccination. firstly, we consider the case where the total number of available vaccines in a fixed period of time is limited. secondly, we analyse the situation where there is a limited supply of vaccines at each instant of time for a fixed interval of time. the optimal control problems have been solved analytically. finally, we have performed a number of numerical simulations in order to compare the models with vaccination and the model without vaccination, which has recently been shown to fit the real data. three vaccination scenarios have been considered for our numerical simulations, namely: unlimited supply of vaccines; limited total number of vaccines; and limited supply of vaccines at each instant of time. iván area, faïçal ndaïrou, juan j. nieto, cristiana j. silva and delfim f. m. torres
* games on graphs .* games played on graphs are central in several important problems in computer science , such as reactive synthesis , verification of open systems , and many others .the game is played by several players on a finite - state graph , with a set of angelic ( existential ) players and a set of demonic ( universal ) players as follows : the game starts at an initial state , and given the current state , the successor state is determined by the choice of moves of the players .the outcome of the game is a_ play _ , which is an infinite sequence of states in the graph .a _ strategy _ is a transducer to resolve choices in a game for a player that given a finite prefix of the play specifies the next move .given an objective ( the desired set of behaviors or plays ) , the goal of the existential players is to ensure the play belongs to the objective irrespective of the strategies of the universal players . in verification and control of reactive systemsan objective is typically an -regular set of paths .the class of -regular languages , that extends classical regular languages to infinite strings , provides a robust specification language to express all commonly used specifications , and parity objectives are a canonical way to define such -regular specifications .thus games on graphs with parity objectives provide a general framework for analysis of reactive systems. * perfect vs partial observation . *many results about games on graphs make the hypothesis of _ perfect observation _( i.e. , players have perfect or complete observation about the state of the game ) . in thissetting , due to determinacy ( or switching of the strategy quantifiers for existential and universal players ) , the questions expressed by an arbitrary alternation of quantifiers reduce to a single alternation , and thus are equivalent to solving two - player games ( all the existential players against all the universal players ) .however , the assumption of perfect observation is often not realistic in practice .for example in the control of physical systems , digital sensors with finite precision provide partial information to the controller about the system state .similarly , in a concurrent system the modules expose partial interfaces and have access to the public variables of the other processes , but not to their private variables .such situations are better modeled in the more general framework of _ partial - observation _ games . * partial - observation games . * since partial - observation games are not determined , unlike the perfect - observation setting , the multi - player games problems do not reduce to the case of two - player games . typically , multi - player partial - observation games are studied in the following setting : a set of partial - observation existential players , against a perfect - observation universal player , such as for distributed synthesis .the problem of deciding if the existential players can ensure a reachability ( or a safety ) objective is undecidable in general , even for two existential players .however , if the information of the existential players form a chain ( i.e. 
, existential player 1 more informed than existential player 2 , existential player 2 more informed than existential player 3 , and so on ) , then the problem is decidable .* games with a weak adversary .* one aspect of multi - player games that has been largely ignored is the presence of weaker universal players that do not have perfect observation .however , it is natural in the analysis of composite reactive systems that some universal players represent components that do not have access to all variables of the system . in this workwe consider games where adversarial players can have partial observation . if there are two existential ( resp ., two universal ) players with incomparable partial observation , then the undecidability results follows from ; and if the information of the existential ( resp . ,universal ) players form a chain , then they can be reduced to one partial - observation existential ( resp . ,universal ) player .we consider the following case of partial - observation games : one partial - observation existential player ( player 1 ) , one partial - observation universal player ( player 2 ) , one perfect - observation existential player ( player 3 ) , and one perfect - observation universal player ( player 4 ) .roughly , having more partial - observation players in general leads to undecidability , and having more perfect - observation players reduces to two perfect - observation players .we first present our results and then discuss two applications of our model .* results . *our main results are as follows : _ player 1 less informed ._ we first consider the case when player 1 is less informed than player 2 .we establish the following results : a 2-exptime upper bound for parity objectives and a 2-exptime lower bound for reachability objectives ( i.e. , we establish 2-exptime - completeness ) ; an expspace upper bound for parity objectives when player 1 is blind ( has only one observation ) , and expspace lower bound for reachability objectives even when both player 1 and player 2 are blind . in all these cases , if the objective can be ensured then the upper bound on memory requirement of winning strategies is at most doubly exponential ._ player 1 is more informed ._ we consider the case when player 1 can be more informed as compared to player 2 , and show that even when player 1 has perfect observation there is a non - elementary lower bound on the memory required by winning strategies .this result is also in sharp contrast to distributed games , where if only one player has partial observation then the upper bound on memory of winning strategies is exponential .* applications . *we discuss two applications of our results : the sequential synthesis problem , and new complexity results for partial - observation _stochastic _ games .the sequential synthesis problem consists of a set of partially implemented modules , where first a set of modules needs to be refined , followed by a refinement of some modules by an external source , and then the remaining modules are refined so that the composite open reactive system satisfies a specification .given the first two refinements can not access all private variables , we have a four - player game where the first refinement corresponds to player 1 , the second refinement to player 2 , the third refinement to player 3 , and player 4 is the environment . 
in partial - observation stochastic games ,there are two partial - observation players ( one existential and one universal ) playing in the presence of uncertainty in the transition function ( i.e. , stochastic transition function ) .the qualitative analysis question is to decide the existence of a strategy for the existential player to ensure the parity objective with probability 1 ( or with positive probability ) against all strategies of the universal player .the witness strategy can be randomized or deterministic ( pure ) .while the qualitative problem is undecidable , the practically relevant restriction to finite - memory pure strategies reduces to the four - player game problem .moreover , for finite - memory strategies , the decision problem for randomized strategies reduces to the pure strategy question . by the results we establish in this paper , new decidability and complexity resultsare obtained for the qualitative analysis of partial - observation stochastic games with player partially informed but more informed than player .the complexity results for almost - sure winning are summarized in table [ tab : complexity ] .surprisingly for reachability objectives , whether player 2 is perfectly informed or more informed than player 1 does not change the complexity for randomized strategies , but it results in an exponential increase in the complexity for pure strategies . * organization of the paper . * in section[ sec : definitions ] we present the definitions of three - player games , and other related models ( such as partial - observation stochastic games ) . in section [ sec : player - one - less ] we establish the results for three - player games with player 1 less informed , and in section [ sec : player - one - perfect ] we show hardness of three - player games with perfect observation for player 1 ( which is a special case of player 1 more informed ) .finally , in section [ sec : more - than - three ] we show how our upper bounds for three - player games from section [ sec : player - one - less ] extend to four - player games , and we discuss multi - player games .we conclude with the applications in section [ sec : applications ] .we first consider three - player games with parity objectives and we establish new complexity results in section [ sec : player - one - less ] that we later extend to four - player games in section [ sec : more - than - three ] . in this section , we also present the related models of alternating tree automata that provide useful technical results , and two - player stochastic games for which our contribution implies new complexity results . [ [ games ] ] games + + + + + given alphabets of actions for player ( ) , a _ three - player game _ is a tuple where : is a finite set of states with the initial state ; and is a deterministic transition function that , given a current state , and actions , , of the players , gives the successor state .the games we consider are sometimes called _ concurrent _ because all three players need to choose simultaneously an action to determine a successor state . the special class of _turn - based _games corresponds to the case where in every state , one player has the turn and his sole action determines the successor state . 
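for an operational view of the definitions above, the sketch below encodes a three-player concurrent game as a finite state set, one action alphabet per player, and a deterministic transition function, together with a test of the turn-based condition for player 1 that is made precise just below (the successor depends only on player 1's action). the two-state game at the end is purely illustrative.

```python
# minimal encoding of a three-player concurrent game: a finite state set,
# one action alphabet per player, a deterministic transition function
# delta[(s, a1, a2, a3)] -> successor, and a test of the turn-based
# condition for player 1. the two-state game below is purely illustrative.
from itertools import product

class ThreePlayerGame:
    def __init__(self, states, actions, delta, s0):
        self.states = states      # finite set of states
        self.actions = actions    # triple (A1, A2, A3)
        self.delta = delta        # dict: (state, a1, a2, a3) -> state
        self.s0 = s0              # initial state

    def is_turn_based_for_player1(self, s):
        A1, A2, A3 = self.actions
        for a1 in A1:
            successors = {self.delta[(s, a1, a2, a3)]
                          for a2, a3 in product(A2, A3)}
            if len(successors) != 1:   # successor must not depend on a2, a3
                return False
        return True

A1, A2, A3 = {"l", "r"}, {"x", "y"}, {"z"}
delta = {("q0", "l", "x", "z"): "q0", ("q0", "l", "y", "z"): "q0",
         ("q0", "r", "x", "z"): "q1", ("q0", "r", "y", "z"): "q1",
         ("q1", "l", "x", "z"): "q0", ("q1", "l", "y", "z"): "q1",
         ("q1", "r", "x", "z"): "q1", ("q1", "r", "y", "z"): "q1"}
game = ThreePlayerGame({"q0", "q1"}, (A1, A2, A3), delta, "q0")
print(game.is_turn_based_for_player1("q0"))   # True
print(game.is_turn_based_for_player1("q1"))   # False
```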
in our framework , a turn - based state for player is a state such that for all , , and .we define analogously turn - based states for player and player .a game is turn - based if every state of is turn - based ( for some player ) .the class of two - player games is obtained when is a singleton . in a game ,given , , , let .[ [ observations ] ] observations + + + + + + + + + + + + for , a set of _ observations _ ( for player ) is a partition of ( i.e. , is a set of non - empty and non - overlapping subsets of , and their union covers ) .let be the function that assigns to each state the ( unique ) observation for player that contains , i.e. such that .the functions are extended to sequences of states in the natural way , namely .we say that player is _ blind _ if , that is player has only one observation ; player has _ perfect information _if , that is player can distinguish each state ; and player is _ less informed _ than player ( we also say player 2 is more informed ) if for all , there exists such that .[ [ strategies ] ] strategies + + + + + + + + + + for , let be the set of _ strategies _ of player that , given a sequence of past observations , give an action for player .equivalently , we sometimes view a strategy of player as a function satisfying for all such that , and say that is _ observation - based_. [ [ outcome ] ] outcome + + + + + + + given strategies ( ) in , the _ outcome play _ from a state is the infinite sequence such that for all , we have where ( for ) . [ [ objectives ] ] objectives + + + + + + + + + + an _ objective _ is a set of infinite sequences of states .a play _ satisfies _ the objective if .an objective is _ visible _ for player if for all , if and , then . we consider the following objectives : _ reachability_. given a set of target states , the _ reachability _ objective requires that a state in be visited at least once , that is , . _safety_. given a set of target states , the _ safety _ objective requires that only states in be visited , that is , . _parity_. for a play we denote by the set of states that occur infinitely often in , that is , . for , let be a priority function , which maps each state to a nonnegative integer priority .the parity objective requires that the minimum priority occurring infinitely often be even .formally , .parity objectives are a canonical way to express -regular objectives . if the priority function is constant over observations of player , that is for all observations we have for all , then the parity objective is visible for player . [ [ decision - problem ] ] decision problem + ++ + + + + + + + + + + + + + given a game and an objective , the _ three - player decision problem _ is to decide if .the results for the three - player decision problem have tight connections and implications for decision problems on alternating tree automata and partial - observation stochastic games that we formally define below . [[ trees ] ] trees + + + + + an -labeled tree consists of a prefix - closed set ( i.e. , if with and , then ) , and a mapping that assigns to each node of a letter in . given and such that , we call the _ successor _ in direction of .the node is the _ root _ of the treeinfinite path _ in is an infinite sequence of directions such that every finite prefix of is a node in .[ [ alternating - tree - automata ] ] alternating tree automata + + + + + + + + + + + + + + + + + + + + + + + + + given a parameter , we consider input trees of rank , i.e. 
trees in which every node has at most successors .let = \{0,\dots , k-1\} ] is a transition function .intuitively , the automaton is executed from the initial state and reads the input tree in a top - down fashion starting from the root . in state ,if is the letter that labels the current node of the input tree , the behavior of the automaton is given by the formulas .the automaton chooses a _ satisfying assignment _ of , i.e. a set ] are replaced by .then , for each a copy of the automaton is spawned in state , and proceeds to the node of the input tree .in particular , it requires that belongs to the input tree .for example , if , then the automaton should either spawn two copies that process the successor of in direction ( i.e. , the node ) and that enter the states and respectively , or spawn three copies of which one processes and enters state , and the other two process and enter the states and respectively .[ [ language - and - emptiness - problem ] ] language and emptiness problem + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a run of over a -labeled input tree is a tree labeled by elements of , where a node of labeled by corresponds to a copy of the automaton processing the node of the input tree in state . formally , a _ run _ of over an input tree is a -labeled tree such that and for all , if , then the set is a satisfying assignment for .hence we require that , given a node in labeled by , there is a satisfying assignment ] on , such that . given a current state and actions for the players , the transition probability to a successor state is .observation - based strategies are defined as for three - player games .outcome play _ from a state under strategies is an infinite sequence such that , , and for all .[ [ qualitative - analysis ] ] qualitative analysis + + + + + + + + + + + + + + + + + + + + given an objective that is borel measurable ( all borel sets in the cantor topology and all objectives considered in this paper are measurable ) , a strategy for player is _ almost - sure winning _ ( resp . , _ positive winning _ ) for the objective from if for all observation - based strategies for player , we have ( resp ., ) where is the unique probability measure induced by the natural probability measure on finite prefixes of plays ( i.e. , the product of the transition probabilities in the prefix ) .we consider the three - player ( non - stochastic ) games defined in section [ sec : three - players ] .we show that for reachability and parity objectives the three - player decision problem is decidable when player is less informed than player .the problem is expspace - complete when player is blind , and 2-exptime - complete in general .[ rem : final - player ] observe that for three - player ( non - stochastic ) games , once the strategies of the first two players are fixed we obtain a graph , and in graphs perfect - information coincides with blind for construction of a path ( see ( * ? ? 
?* lemma 2 ) that counting strategies that count the number of steps are sufficient which can be ensured by a player with no information ) .hence without loss of generality we consider that player 3 has perfect observation , and drop the observation for player 3 .our results for upper bounds are obtained by a reduction of the three - player game problem to an equivalent exponential - size partial - observation game with only two players , which is known to be solvable in exptime for parity objectives , and in psapce when player is blind .our reduction preserves the number of observations of player ( thus if player is blind in the three - player game , then player is also blind in the constructed two - player game ) .hence , the 2-exptime and expspace bounds follow from this reduction .[ theo : one - less - informed - upper - bound ] given a three - player game with player less informed than player and a parity objective , the problem of deciding whether can be solved in 2-exptime .if player is blind , then the problem can be solved in expspace .the proof is by a reduction of the decision problem for three - player games to a decision problem for partial - observation two - player games with the same objective .we present the reduction for parity objectives that are visible for player ( defined by priority functions that are constant over observations of player ) . the general case of not necessarily visible parity objectivescan be solved using a reduction to visible objectives , as in ( * ? ? ?* section 3 ) . given a three - player game over alphabet of actions ( ) , and observations for player and player , with player less informed than player , we construct a two - player game over alphabet of actions ( ) , and observations and perfect observation for player 2 , where ( intuitive explanations follow ) : ; , and ; , and let be the corresponding observation function ; . intuitively , the state space is the set of knowledges of player about the current state in , i.e. , the sets of states compatible with an observation of player . along a play in ,the knowledge of player is updated to represent the set of possible current states in which the game can be . in player perfect observation and the role of player in the game is to simulate the actions of both player and player in . since player fixes his strategy before player in , the simulation should not let player know action , but only the observation that player will actually see while playing the game .the actions of player in are pairs where is a simple action of player in , and gives the observation received by player after the response of player to the action of player when the knowledge of player is . in ,player has partial observation , as he can not distinguish knowledges of player that belong to the same observation of player in .the transition relation updates the knowledges of player as expected .note that , and therefore if player is blind in then he is blind in as well . 
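the core of the construction above is a knowledge update, i.e. the usual subset construction restricted by the next observation. the sketch below isolates this update; which choices are visible and which are hidden (and hence absorbed into the successor map `post`) is fixed by the construction above, so `post` here is a hypothetical abstraction of it.

```python
# generic sketch of the subset construction behind the reduction above:
# a partial-observation player's knowledge K (set of states compatible
# with her observations) is updated by collecting the possible successors
# of states in K and keeping those compatible with the next observation o.
# the successor map `post` is a hypothetical abstraction: which actions
# are visible and which are hidden is fixed by the construction above.
def update_knowledge(K, o, post):
    """K: current knowledge, o: next observation (a set of states),
    post: maps a state to the set of its possible successors."""
    return frozenset(t for s in K for t in post(s) if t in o)

# tiny example: from {q0}, both q1 and q2 are possible successors,
# but only q1 is compatible with the observation {q1, q3}
successors = {"q0": {"q1", "q2"}, "q1": {"q1"}, "q2": {"q2"}}
print(update_knowledge({"q0"}, {"q1", "q3"}, lambda s: successors[s]))
# -> frozenset({'q1'})
```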
given a visible parity objective where is constant over observations of player , let where for all and .note that the function is well defined since is a subset of an observation of player and thus for all .however , the parity objective may not be visible to player in .we show that given a witness strategy in we can construct a witness strategy in and vice - versa .let be the set of observation - based strategies of player ( ) in , and let be the set of observation - based strategies of player ( ) in .we claim that the following statements are equivalent : * in , . * in , .the 2-exptime result of the theorem follows from this equivalence because the game is at most exponentially larger than the game , and two - player partial - observation games with a parity objective can be solved in exptime , and when player is blind they can be solved in pspace .observe that when player has perfect information , his observations are singletons and is no bigger than , and an exptime bound follows in that case . to show that implies ,let be a strategy for player such that for all strategies , there is a strategy such that . from , we construct an ( infinite ) dag over state space with edges labeled by elements of defined as follows .the root is .there is an edge labeled by from to if where , and where is the ( unique ) observation of player such that .note that for every node in the dag , for all states , for all , there is a successor of such that where .consider a perfect - information turn - based game played over this dag , between player choosing actions and player choosing observations , resulting in an infinite path in the dag as expected , and that is defined to be winning for player if the sequence satisfies . we show that in this game , for all strategies of player ( which naturally define functions ) , there exists a strategy of player ( a function ) to ensure that the resulting play satisfies .the argument is based on saying that given the strategy is fixed , for all strategies , there is a strategy such that .given a strategy for player in the game over the dag , we use to choose observations as follows .we define a labelling function over the dag in a top - down fashion such that .first , let , and given with an edge labeled by to , let where and .note that indeed .now we define a strategy for player that , in a node of the dag , chooses the observation where , , is the action chosen by player at that node ( remember we fixed a strategy for player ) , and . since , it follows that the resulting play satisfies since satisfies . by determinacy of perfect - information turn - based games , in the game over the dag thereexists a strategy for player such that for all player- strategies , the outcome play satisfies .using , we construct a strategy for player in as follows .first , by a slight abuse of notation , we identify the observations with the observation such that for all . forall , let where and is defined by . by construction of the dag and of the strategy , for all strategies of player in the outcome play satisfies the parity objective , and thus is a winning observation - based strategy in . 
to show that implies , let be a winning observation - based strategy for the objective in .consider the dag over state space with edges labeled by elements of defined as follows .the root is .for all nodes , for all , there is an edge labeled by from to if and where and , and is the ( unique ) observation of player such that .we say that is the -successor of .note that for all , there exists and such that .this dag mimics the unraveling of under , and since is a winning strategy , for all infinite paths of the dag , the sequence satisfies .define the strategy such that if ( again identifying the observations in and ) . to show that holds , fix an arbitrary observation - based strategy for player .the outcome play of and in is the sequence where is the root , and such that for all , the node is the -successor of where ( where is naturally defined as the unique observation such that ) . from this path in the dag , we construct an infinite path in using knig s lemma as follows .first , it is easy to show by induction ( on ) that for every finite prefix and for every there exists a path in such that for all .note that since and that by definition of the dag , for each ( ) , there exist , , and such that .hence , given , there exist and such that . arranging all these finite paths in a tree, we obtain an infinite finitely - branching tree which by knig s lemma contains an infinite branch that is a path in and such that for all .now we can construct the strategy such that .since satisfies , it follows that satisfies , which completes the proof .[ theo : one - less - informed - lower - bound ] given a three - player game with player less informed than player and a reachability objective , the problem of deciding whether is 2-exptime - hard . if player is blind ( and even when player 2 is also blind ) , then the problem is expspace - hard .the proof of 2-exptime - hardness is obtained by a polynomial - time reduction of the membership problem for exponential - space _ alternating _ turing machines to the three - player problem . 
the same reduction for the special case of exponential - space _ nondeterministic _turing machines shows expspace - hardness when player is blind ( because our reduction yields a game in which player is blind when we start from a nondeterministic turing machine ) .the membership problem for turing machines is to decide , given a turing machine and a finite word , whether accepts .the membership problem is 2-exptime - complete for exponential - space alternating turing machines , and expspace - complete for exponential - space nondeterministic turing machines .an alternating turing machine is a tuple where the state space consists of the set of or - states , and the set of and - states .the input alphabet is , the tape alphabet is where is the blank symbol .the initial state is , the accepting state is , and the rejecting state is .the transition relation is , where a transition intuitively means that , given the machine is in state , and the symbol under the tape head is , the machine can move to state , replace the symbol under the tape head by , and move the tape head to the neighbor cell in direction .a configuration of is a sequence with exactly one symbol in , which indicates the current state of the machine and the position of the tape head .the initial configuration of on is .given the initial configuration of on , it is routine to define the execution trees of where at least one successor of each configuration in an or - state , and all successors of the configurations in an and - state are present ( and we assume that all branches reach either or ) , and to say that accepts if all branches of some execution tree reach . note that for nondeterministic turing machines , and in that case the execution tree reduces to a single path .a turing machine uses exponential space if for all words , all configurations in the execution of on contain at most non - blank symbols .we present the key steps of our reduction from alternating turing machines .given a turing machine and a word , we construct a three - player game with reachability objective in which player and player have to simulate the execution of on , and player has to announce the successive configurations and transitions of the machine along the execution .player announces configurations one symbol at a time , thus the alphabet of player is . in an initialization phase , the transition relation of the game forces player to announce the initial configuration ( this can be done with states in the game , where ) .then , the game proceeds to a loop where player keeps announcing symbols of configurations . at all times along the execution, some finite information is stored in the finite state space of the game : a window of the last three symbols announced by player , as well as the last symbol announced by player ( that indicates the current machine state and the position of the tape head ) .after the initialization phase , we should have and .when player has announced a full configuration , he moves to a state of the game where either player or player has to announce a transition of the machine : for , if , then player chooses the next transition , and if , then player chooses .note that the transitions chosen by player are visible to player and this is the only information that player observes .hence player is less informed than player , and both player and player are blind when the machine is nondeterministic . if a transition is chosen by player , and either or , then player loses ( i.e. 
, a sink state is reached to let player lose , and the target state of the reachability objective is reached to let player lose ) .if at some point player announces a symbol with , then player wins the game .the role of player is to check that player faithfully simulates the execution of the turing machine , and correctly announces the configurations . after every announcement of a symbol by player , the game offers the possibility to player to compare this symbol with the symbol at the same position in the next configuration .we say that player _ checks _ ( and whether player checks or not is not visible to player ) , and the checked symbol is stored as .note that player can be blind to check because player fixes his strategy after player .the window stored in the state space of the game provides enough information to update the middle cell in the next configuration , and it allows the game to verify the check of player . however , the distance ( in number of steps ) between the same position in two consecutive configurations is exponential ( say for simplicity ) , and the state space of the game is not large enough to check that such a distance exists between the two symbols compared by player .we use player to check that player makes a comparison at the correct position . when player decides to check , he has to count from to by announcing after every symbol of player a sequence of bits , initially all zeros ( again , this can be enforced by the structure of the game with states ) .it is then the responsibility of player to check that player counts correctly . to check this, player can at any time choose a bit position and store the bit value announced by player at position .the value of and is not visible to player .while player announces the bits at position , the finite state of the game is used to flip the value of if all bits are equal to , hence updating to the value of the -th bit in what should be the next announcement of player . in the next bit sequence announced by player , the -th bit is compared with . 
if they match , then the game goes to a sink state ( as player has faithfully counted ) , and if they differ then the game goes to the target state ( as player is caught cheating ) .it can be shown that this can be enforced by the structure of the game with states , that is states for each value of .as before , whether player checks or not is not visible to player .note that the checks of player and player are one - shot : the game will be over ( either in a sink or target state ) when the check is finished .this is enough to ensure a faithful simulation by player , and a faithful counting by player , because partial observation allows to hide to a player the time when a check occurs , and player fixes his strategy after player ( and player after player ) , thus they can decide to run a check exactly when player ( or player ) is not faithful .this ensures that player does not win if he does not simulate the execution of on , and that player does not win if he does not count correctly .hence this reduction ensures that accepts if and only if the answer to the three - player game problem is yes , where the reachability objective is satisfied if player eventually announces that the machine has reached ( that is if accepts ) , or if player cheats in counting , which can be detected by player .when player 2 is less informed than player 1 , we show that three - player games get much more complicated ( even in the special case where player has perfect information ) .we note that for reachability objectives , the three - player decision problem is equivalent to the qualitative analysis of positive winning in two - player stochastic games , and we show that the techniques developed in the analysis of two - player stochastic games can be extended to solve the three - player decision problem with safety objectives as well . for reachability objectives , the three - player decision problem is equivalent to the problem of positive winning in two - player stochastic games where the third player is replaced by a probabilistic choice over the action set with uniform probability .intuitively , after player and player fixes their strategy , the fact that player can construct a ( finite ) path to the target set is equivalent to the fact that such a path has positive probability when the choices of player are replaced by uniform probabilistic transitions . given a three - player game , let be the two - player partial - observation _stochastic _ game ( with same state space , action sets , and observations for player and player ) where for all , , and .formally , the equivalence result is presented in lemma [ lem : uniform ] , and the equivalence holds for all three - player games ( not restricted to three - player games where player 1 has perfect information ) .however , we will use lemma [ lem : uniform ] to establish results for three - player games where player 1 has perfect information .[ lem : uniform ] given a three - player game and a reachability objective , the answer to the three - player decision problem for is yes if and only if player is positive winning for in the two - player partial - observation stochastic game . [ [ reachability - objectives . ] ] reachability objectives .+ + + + + + + + + + + + + + + + + + + + + + + + even in the special case where player has perfect information , and for reachability objectives , non - elementary memory is necessary in general for player to win in three - player games .this result follows from lemma [ lem : uniform ] and from the result of ( * ? ? 
?* example 4.2 journal version ) showing that non - elementary memory is necessary to win with positive probability in two - player stochastic games .it also follows from lemma [ lem : uniform ] and the result of ( * ? ? ?* corollary 4.9 journal version ) that the three - player decision problem for reachability games is decidable in non - elementary time .[ [ safety - objectives . ] ] safety objectives .+ + + + + + + + + + + + + + + + + + we show that the three - player decision problem can be solved for games with a safety objective when player has perfect information .the proof is using the _ counting abstraction _ of ( * ? ? ?* section 4.2 journal version ) and shows that the answer to the three - player decision problem for safety objective is yes if and only if there exists a winning strategy in the two - player counting - abstraction game with the safety objective to visit only counting functions ( i.e. , essentially tuples of natural numbers ) with support contained in the target states . intuitively , the counting abstraction is as follows : with every knowledge of player 2 we store a tuple of counters , one for each state in the knowledge .the counters denote the number of possible distinct paths to the states of the knowledge , and the abstraction treats large enough values as infinite ( value ) .the counting - abstraction game is monotone with regards to the natural partial order over counting functions , and therefore it is well - structured and can be solved by constructing a self - covering unraveling tree , i.e. a tree in which the successors of a node are constructed only if this node has no greater ancestor .the properties of well - structured systems ( well - quasi - ordering and knig s lemma ) ensure that this tree is finite , and that there exists a strategy to ensure only supports contained in the target states are visited if and only if there exists a winning strategy in the counting - abstraction game ( in a leaf of the tree , one can copy the strategy played in a greater ancestor ) .it follows that the three - player decision problem for safety games is equivalent the problem of solving a safety game over this finite tree .[ theo : player - one - perfect ] when player 1 has perfect information , the three - player decision problem is decidable for both reachability and safety games , and for reachability games memory of size non - elementary is necessary in general for player .we show that the results presented for three - player games extend to games with four players ( the fourth player is universal and perfectly informed ) .the definition of four - player games and related notions is a straightforward extension of section [ sec : three - players ] . in a four - player game with player less informed than player , and perfect information for both player and player , consider the _ four - player decision problem _ which is to decide if for a parity objective . since player and player have perfect information , we assume without loss of generality that the game is turn - based for them , that is there is a partition of the state space into two sets and ( where ) such that the transition function is the union of and .strategies and outcomes are defined analogously to three - player games .a strategy of player is of the form . 
by determinacy of perfect - information turn - based games with countable state space , the negation of the four - player decision problem is equivalent to .once the strategies and are fixed , the condition can be viewed as the membership problem for a tree in the language of an alternating parity tree automaton with state space where is the -labeled tree where and for all . by the results of , if there exists an accepting -labeled run tree for an input tree in an alternating parity tree automaton, then there exists a _memoryless _ accepting run tree , that is such that for all nodes such that and , the subtrees of rooted at and are isomorphic . since the membership problem is equivalent to a two - player parity game played on the structure of the alternating automaton , a memoryless accepting run tree can be viewed as a winning strategy , or equivalently such that for all strategies , the resulting infinite branch in the tree satisfies the parity objective .it follows from this that the ( negation of the ) original question is equivalent to where is the set of strategies of a player ( call it player 24 ) with observations and action set , and the outcome is defined as expected in a three - player game ( played by player , player , and player ) with transition function defined by .hence the original question ( and its negation ) for four - player games reduces in polynomial time to solving a three - player game with the first player less informed than the second player .hardness follows from the special case of three - player games .[ theo : player - four ] the four - player decision problem with player less informed than player , and perfect information for both player and player is 2-exptime - complete for parity objectives .we now discuss the various possibilities of strategy quantifiers and information of the players in multi - player games .first , if there are two existential ( resp . ,universal ) players with incomparable information , then the decision question is undecidable ; and if there is a sequence of existential ( resp . , universal ) quantification over strategies players such that the information of the players form a chain ( i.e. , in the sequence of quantification over the players , let the players be such that is more informed than , more informed than and so on ) , then with repeated subset construction , the sequence can be reduced to one quantification .note however that if there is a quantifier alternation between existential and universal , then even if the information may form a chain , subset construction might not be sufficient : for example , if player 1 is perfect and player 2 has partial - information , non - elementary memory might be necessary ( as shown in section [ sec : player - one - perfect ] ) .we now discuss the various possibilities of strategy quantification in four - player games . 
without loss of generality we consider that the first strategy quantifier is existential. the above argument for sequences of quantifiers ( either undecidability with incomparable information, or the sequence reduces to a single quantifier ) shows that we only need to consider the alternating strategy quantification: existential for player 1, universal for player 2, existential for player 3, and universal for player 4. first, note that once the strategies of the first three players are fixed we obtain a graph, and, similarly to remark [ rem : final - player ], without loss of generality we consider that player 4 has perfect observation. we now consider the possible cases for player 3 in the presence of player 4.

_ perfect observation. _ the case when player 3 has perfect observation has been solved in the main paper ( results of section [ sec : more - than - three ] ).

_ partial observation. _ we now consider the case when player 3 has partial observation. if player 2 is less informed than player 1, then the problem is at least as hard as the problem considered in section [ sec : player - one - perfect ]. if player 3 is less informed than player 2, then even in the absence of player 1 the problem is as hard as the negation of the question considered in section [ sec : player - one - perfect ] ( where first a more informed player plays, followed by a less informed player, just with the strategy quantifiers dualized compared to those considered in section [ sec : player - one - perfect ] ). finally, if player 1 is less informed than player 2, and player 2 is less informed than player 3, then we apply our construction of section [ sec : player - one - less ] twice and obtain a double-exponential-size two-player partial-observation game, which can be solved in 3-exptime. recall that in the absence of player 4, by remark [ rem : final - player ], whether player 3 has partial or perfect information does not matter and we obtain a 2-exptime upper bound; whereas in the presence of player 4 we obtain a 3-exptime upper bound if player 3 has partial information ( but is more informed than player 2 ), and a 2-exptime upper bound if player 3 has perfect information ( theorem [ theo : player - four ] ).

we now discuss applications of our results in the context of synthesis and qualitative analysis of two-player partial-observation stochastic games.

* sequential synthesis. * the _ sequential synthesis _ problem consists of an open system of partially implemented modules ( with possible non-determinism or choices ) that need to be refined ( i.e., the choices determined by strategies ) such that the composite system after refinement satisfies a specification. the system is open in the sense that, after the refinement, the composite system is reactive and interacts with an environment. consider the problem where first a set of modules is refined, then a second set is refined by an external implementor, and finally the remaining modules are refined. in other words, the modules are refined sequentially: first a set of modules whose refinement can be controlled, then a set of modules whose refinement cannot be controlled as they are implemented externally, and finally the remaining set of modules.
if the refinements of the modules do not have access to private variables of the other modules, we obtain a partial-observation game with four players: the first ( existential ) player corresponds to the refinement of the first set of modules, the second ( universal ) player corresponds to the refinement of the externally implemented modules, the third ( existential ) player corresponds to the refinement of the remaining modules, and the fourth ( adversarial ) player is the environment. if the second player has access to all the variables visible to the first player, then player 1 is less informed.

* two-player partial-observation stochastic games. * our results for four-player games imply new complexity results for two-player stochastic games. for qualitative analysis ( positive and almost-sure winning ) under finite-memory strategies for the players, the following reduction has been established previously ( lemma 1; see lemma 2.1 of the arxiv version ): the probabilistic transition function can be replaced by a turn-based gadget consisting of two perfect-observation players, one angelic ( existential ) and one demonic ( universal ). the turn-based gadget is the same as the one used for perfect-observation stochastic games. in that work, only the special case of perfect observation for player 2 was considered, and hence the problem reduced to three-player games where only player 1 has partial observation and the other two players have perfect observation. in the case where player 2 has partial observation, the reduction requires two perfect-observation players and gives the problem of four-player games ( with perfect observation for player 3 and player 4 ). hence, when player 1 is less informed, we obtain a 2-exptime upper bound from theorem [ theo : player - four ], and a 2-exptime lower bound from theorem [ theo : one - less - informed - lower - bound ], since the three-player games problem with player 1 less informed for reachability objectives coincides with _ positive _ winning for two-player partial-observation stochastic games ( lemma [ lem : uniform ] ).

for _ almost-sure _ winning, a 2-exptime lower bound can also be obtained by an adaptation of the proof of theorem [ theo : one - less - informed - lower - bound ]. we use the same reduction from exponential-space alternating turing machines, with the following changes: the third player is replaced by a uniform probability distribution over its moves, so that the reduction is now to two-player partial-observation stochastic games; instead of reaching a sink state when player 2 detects a mistake in the sequence of configurations announced by player 1, the game restarts in the initial state; thus the target state of the reachability objective is not reached, but player 1 gets another chance to faithfully simulate the turing machine. it follows that if the turing machine accepts, then player 1 has an almost-sure winning strategy by faithfully simulating the execution. indeed, either player 2 never checks, or checks and counts correctly, and then player 1 wins since no mistake is detected; or player 2 checks and cheats in the counting, and then player 2 is caught with positive probability ( player 1 wins ), while with the remaining probability the counting cheat is not detected and thus possibly a ( fake ) mismatch in the symbol announced by player 1 is detected.
then the game restarts. hence, in all cases, after finitely many steps either player 1 wins with ( fixed ) positive probability, or the game restarts. it follows that player 1 wins the game with probability 1. if the turing machine rejects, then player 1 cannot win by a faithful simulation of the execution, and thus he should cheat. the strategy of player 2 is then to check and to count correctly, ensuring that the target state of the reachability objective is not reached and that the game restarts. hence, for all strategies of player 1, there is a strategy of player 2 to always avoid the target state ( with probability 1 ), and thus player 1 cannot win almost-surely ( he wins with probability strictly smaller than 1 ). this completes the proof of the reduction for almost-sure winning.

[ theo : stochastic - games ] the qualitative analysis problems ( almost-sure and positive winning ) for two-player partial-observation stochastic parity games where player 1 is less informed than player 2, under finite-memory strategies for both players, are 2-exptime-complete.

note that the lower bounds for theorem [ theo : stochastic - games ] are established for reachability objectives. moreover, it was shown previously ( section 5 ) that for qualitative analysis of two-player partial-observation stochastic games with reachability objectives, finite-memory strategies suffice, i.e., if there is a strategy to ensure almost-sure ( resp. positive ) winning, then there is a finite-memory strategy. thus the results of theorem [ theo : stochastic - games ] hold for reachability objectives even without the restriction to finite-memory strategies.
we consider multi-player graph games with partial observation and parity objectives. while the decision problem for three-player games with a coalition of the first and second players against the third player is undecidable in general, we present a decidability result for partial-observation games where the first and third players are in a coalition against the second player, i.e., where the second player is adversarial but weaker due to partial observation. we establish tight complexity bounds in the case where player 1 is less informed than player 2, namely 2-exptime-completeness for parity objectives. the symmetric case of player 1 more informed than player 2 is much more complicated, and we show that already in the case where player 1 has perfect observation, memory of size non-elementary is necessary in general for reachability objectives, and the problem is decidable for safety and reachability objectives. our results have tight connections with partial-observation stochastic games, for which we derive new complexity results.
there exist various physical systems for which a centralized control implementation is not a suitable solution. the two main reasons for this are: ( i ) the physical system is spread over a wide area, which makes communication to a central hub expensive and prone to data loss ( power networks, traffic networks, etc. ), and ( ii ) the physical system is a composition of individual subsystems with clearly defined physical boundaries ( autonomous vehicles, swarms of robots, etc. ). in the context of model-based predictive controllers, there is a third situation in which centralized control is not a viable solution, and it has to do with computational requirements. model predictive control ( mpc ) is a mature control technique that provides safe and stabilizing control under appropriate design. however, in order to compute the control action to be applied to the plant, it needs to solve an optimization problem at each time instant. plants with fast dynamics and a large number of control variables are therefore out of the scope of standard centralized mpc implementations. a natural solution to this problem is to split the plant into smaller subsystems, and then to design local controllers. however, this creates residual interaction between different subsystems, which must be taken into account if control guarantees are to be delivered. many distributed mpc ( dmpc ) implementations have been devised to tackle this problem ( see the detailed reviews available in the literature ). the main aspect in which they differ is in which type of interaction is allowed between subsystems, and how this interaction is treated. in recent years, robust mpc techniques such as tube mpc ( tmpc ) and related methods have inspired several dmpc approaches. the main idea of such approaches is to treat the interactions between subsystems as local disturbances that may be handled ( conservatively ) by robust local controllers. communication and iteration between subsystems is then used in different ways to reduce the conservativeness induced by the use of a robust method. all of these approaches, either implicitly or explicitly, rely on stabilizability assumptions. these usually require the existence of a global linear feedback with a structure that conforms to the interaction pattern of the global system. this type of assumption is also present in dmpc techniques that are not based on robust approaches. in this paper, a fundamental relation that exists between this type of stabilizability assumption and the various concepts of invariance employed by ( tube-based ) dmpc architectures is made explicit.
to the best of the authors' knowledge, this relation has not been studied explicitly before ( although some related results have been published, which we survey in section 3 ), even though the global stabilizability assumption is present ( either explicitly or implicitly ) in many of the dmpc approaches proposed to date. first, a collection of linear local robust controllers is introduced in order to study the stabilizability of the global plant from a decentralized perspective. the main result states that a sufficient condition for global stabilizability is that these local controllers are locally stabilizing and constraint admissible. although very simple, the proposed controllers sit at the core of many tube-based dmpc architectures, which makes this result relevant to the state of the art in dmpc. the remainder of the paper is organized as follows: section [ sec.2 ] defines the preliminaries and standard assumptions found in many tube-based dmpc implementations. in order to contextualize the results given in this paper, section [ sec.3 ] discusses two approaches that are commonly used to meet the previously stated assumptions. the main result is given in section [ sec.4 ], and three examples are presented in section [ sec.5 ] in order to illustrate the result and highlight its advantages and limitations. finally, section [ sec.6 ] provides some conclusions.

notation: sets of consecutive integers, block-diagonal matrices built from given blocks, and vectors formed by vertically stacking given vectors are denoted in the usual way. for a square matrix, its spectral radius is used; a matrix is said to be schur if its spectral radius is smaller than one. for sets, the minkowski sum and the pontryagin difference are used, together with the minkowski sum of a collection of sets. the identity matrix of appropriate dimension, and the standard notation for the prediction of a variable made at a given time, complete the notation.

first consider the general problem of regulating a network of $M$ possibly heterogeneous linear time-invariant ( lti ) systems that interact with each other via states and inputs. for all $i$, the dynamics of subsystem $i$ are represented by the state-space model $x_i^+ = A_{ii} x_i + B_{ii} u_i + \sum_{j \in \mathcal{N}_i} ( A_{ij} x_j + B_{ij} u_j )$, where $x_i$ and $u_i$ are the state and input vectors of subsystem $i$ at the current time, with matrices of corresponding dimensions. the set $\mathcal{N}_i$ is referred to as the set of neighbours of subsystem $i$, and contains the indices of all the other subsystems that affect its dynamics. given the assumed non-centralized nature of the plant, consider the case in which each subsystem is subject to local constraints requiring its state and input to remain in a given local constraint set.

[ ass.1 ] for all $i$, the pair $( A_{ii}, B_{ii} )$ is stabilizable and the local constraint set is compact, convex and contains the origin in its interior.

to simplify the analysis, assume that the subsystems do not share states and/or inputs. the collection of local models forms the global constrained lti model $x^+ = A x + B u$, where the matrices $A$ and $B$ are composed of the blocks $A_{ij}$ and $B_{ij}$, and $x$ and $u$ stack the local states and inputs. one of the main questions in distributed control is how to guarantee certain properties at the global scale ( stability, constraint satisfaction, etc. ) from a possibly distributed design. in order to achieve these global properties, many dmpc implementations employ synthesis procedures that are either centralized or require the solution of problems that can be computationally expensive.
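as a concrete ( if schematic ) illustration of this block structure, the following sketch assembles the global pair $( A, B )$ from local coupling blocks; the dimensions, the neighbour pattern and the numerical values are hypothetical and serve only to make the snippet self-contained.

import numpy as np

def assemble_global(A_blocks, B_blocks, nx, nu):
    """A_blocks[(i, j)] and B_blocks[(i, j)] hold the coupling matrices;
    missing pairs are treated as zero blocks."""
    A = np.zeros((sum(nx), sum(nx)))
    B = np.zeros((sum(nx), sum(nu)))
    rx = np.cumsum([0] + nx)   # row/column offsets for the stacked states
    cu = np.cumsum([0] + nu)   # column offsets for the stacked inputs
    for (i, j), Aij in A_blocks.items():
        A[rx[i]:rx[i + 1], rx[j]:rx[j + 1]] = Aij
    for (i, j), Bij in B_blocks.items():
        B[rx[i]:rx[i + 1], cu[j]:cu[j + 1]] = Bij
    return A, B

# two hypothetical scalar subsystems coupled through their states only
A_blocks = {(0, 0): np.array([[1.0]]), (1, 1): np.array([[1.0]]),
            (0, 1): np.array([[0.5]]), (1, 0): np.array([[0.5]])}
B_blocks = {(0, 0): np.array([[1.0]]), (1, 1): np.array([[1.0]])}
A, B = assemble_global(A_blocks, B_blocks, nx=[1, 1], nu=[1, 1])
print(A)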
in this context, and in particular for tube-based dmpc controllers, the following assumption is usually required.

[ ass.2 ] there exists a collection of local linear feedbacks $K_i$ such that $A_{ii} + B_{ii} K_i$ is schur for all $i$ and $A + B K$ is schur, with $K = \mathrm{diag}( K_1, \dots, K_M )$.

assumption [ ass.2 ] demands the existence of a globally stabilizing linear feedback with a block-diagonal structure. this certainly limits the class of systems that can be controlled with such techniques, but a more pressing issue in the context of decentralized control is that searching for a collection of local feedbacks that fulfils assumption [ ass.2 ] usually requires centralized computations. there is a vast literature dedicated to analysing the impact that _ naive _ local control design has on global behaviour, but perhaps an example is enough to clarify the problem.

[ exmp.1 ] consider two coupled integrators, each stabilized by a local linear feedback: the local closed-loop matrices $A_{ii} + B_{ii} K_i$ are schur, however the global closed-loop matrix $A + B K$ is not.

example [ exmp.1 ] makes clear that careful ( perhaps centralized ) design is required to meet assumption [ ass.2 ]. the main result of this paper is to make explicit the fundamental relation that exists between the stabilizability requirement in assumption [ ass.2 ] and the standard notions of invariance that are usually implicit in dmpc controllers. to aid this, various concepts of invariance are now defined.

[ def.1 ] a set $\Omega$ is said to be a pi ( positively invariant ) set for the dynamics $x^+ = \Phi x$ if $\Phi \Omega \subseteq \Omega$.

[ def.3 ] a set $\Omega$ is said to be an rpi ( robust positively invariant ) set for the disturbed dynamics $x^+ = \Phi x + w$ and disturbance set $\mathcal{W}$ if $\Phi \Omega \oplus \mathcal{W} \subseteq \Omega$.

[ rem.4 ] non-compact or singleton invariant sets are not considered a valid solution in the following analysis.

[ rem.3 ] in view of remark [ rem.4 ], unstable closed-loop dynamics do not admit invariant sets as described by definitions [ def.1 ] and [ def.3 ].

the task of finding local linear feedbacks that fulfil assumption [ ass.2 ] ( or a similar one ) appears in different steps of the implementation of dmpc controllers. some of the techniques used to compute these feedbacks also exploit notions of invariance, and are implicitly related to the more fundamental result shown in this paper. in order to put this result into context, two of these approaches are briefly discussed. in the general dmpc framework ( not necessarily tube-based approaches ), a common practice is to tackle the distributed controller synthesis problem from a centralized perspective, and to pose a set of lmis whose solution(s) provides an adequate candidate for the required controller ( these lmis represent standard lyapunov stability conditions in the context of mpc ). in one approach, a global linear feedback, forced to be zero wherever there is no dynamical coupling, is proposed to play the role of the local terminal controller often used in the mpc framework; in order to find this feedback, a set of local lmis is posed alongside a single global lmi whose dimension scales with the overall dimension of the plant. a similar problem is found in related work, where a distributed optimization approach is proposed to find the solution of the local lmis in the presence of a system-wide coupled lmi. the problem of finding _ separable _ pi sets is tackled in a similar way in the literature, where a set of lmis is proposed to find simultaneously a collection of independent local feedbacks that fulfil assumption [ ass.2 ] and a corresponding collection of joint pi sets; this procedure is proposed to tackle disturbed local dynamics, but even when no disturbance is considered, the smaller of the lmis involved still scales with the dimension of the global plant.
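a quick numerical check of the phenomenon in example [ exmp.1 ] can be run as follows; the coupling strength and the feedback gains below are hypothetical ( the numerical values of the original example are not available here ), and only reproduce the qualitative point that each local loop is schur while the coupled plant is not stabilized.

import numpy as np

a = 1.2                                    # hypothetical coupling strength
A = np.array([[1.0, a],
              [a, 1.0]])                   # two coupled discrete-time integrators
B = np.eye(2)
K = np.diag([-1.0, -1.0])                  # local deadbeat feedbacks u_i = -x_i

local_rho = [abs(1.0 + K[i, i]) for i in range(2)]     # rho(A_ii + B_ii K_i)
global_rho = max(abs(np.linalg.eigvals(A + B @ K)))    # rho(A + B K)

print("local spectral radii :", local_rho)   # [0.0, 0.0] -> locally schur
print("global spectral radius:", global_rho) # 1.2        -> not schur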
although not explicitly stated there, the concept of positively invariant families of sets ( pifs ), introduced in earlier work, can also be used to analyse the link between local and global closed-loop behaviour ( in a centralized fashion ). the concept of a pifs is more general than definition [ def.1 ], but it is based on the same invariance properties. the main advantage of that specific parametrization is that, in the context of global-to-local dynamics, the dimension of the problem can be considerably reduced ( from the dimension of the global plant to the number of subsystems ) by looking at what the authors call a _ comparison _ system. it is shown there that: ( i ) a stable comparison system is a sufficient condition for the true system to admit a pifs, and ( ii ) if a system admits a pifs, then it admits a pi set, which therefore means that the closed-loop dynamics are stable. in this context, it might prove _ easier _ to find a stable comparison system ( of low dimension ) and then relate it to the global system in closed loop with a particularly structured feedback. this notion is the underlying idea of an approach in which local lmis are constructed in order to find, in a non-centralized fashion, a collection of local gains that fulfils assumption [ ass.2 ] and a corresponding pifs. in all of these approaches there is no guarantee that a solution exists for the proposed set of lmis ( or stable comparison system ). this should not be a surprise, given that the successful synthesis of a non-centralized controller depends greatly on the size of the interaction between neighbouring subsystems, and on how these interactions are dealt with ( communication, iterative optimization, etc. ).

the fundamental result shown in this paper stems from the invariance notions implicit in the robust control technique known as tube mpc, and its application to distributed control. suppose that subsystem $i$ has no means of obtaining information about what its neighbours' plans are. a sensible, yet conservative, way of performing non-centralized control is to view the dynamical interaction between subsystems as a disturbance that must be rejected. in order to move forward, the following assumption is required.

[ ass.4 ] state and input constraints are satisfied by all subsystems at all times.

if assumption [ ass.1 ] is fulfilled, then the _ disturbance _ sets $\mathcal{W}_i$, obtained by propagating the neighbours' constraint sets through the coupling matrices $A_{ij}$ and $B_{ij}$, are compact for all $i$; moreover, if assumption [ ass.4 ] is fulfilled, the sets $\mathcal{W}_i$ represent a suitable bound for the interaction between subsystems. in view of this and definition [ def.3 ], for any linear feedback $K_i$ that renders $A_{ii} + B_{ii} K_i$ schur, there exists a compact rpi set $\mathcal{Z}_i$ for the local disturbed dynamics. given such a set and its invariance, it follows that:

[ prop.2 ] if, for all $i$, ( i ) the set $\mathcal{Z}_i$ and the associated input set $K_i \mathcal{Z}_i$ are contained in the local state and input constraint sets, and ( ii ) the initial state of subsystem $i$ lies in $\mathcal{Z}_i$, then the state and input of every subsystem satisfy their constraints for all times.

in the context of tmpc, the sets $\mathcal{Z}_i$ are the cross-sections of the local tubes. proposition [ prop.2 ] provides a decentralized perspective on the problem of finding stabilizing and constraint-admissible local linear feedbacks. indeed, as long as the hypotheses of proposition [ prop.2 ] are met ( for all $i$ ), independent rpi sets can be designed for each subsystem. the closed-loop global system would then take the form $x^+ = ( A + B K ) x$, which, as shown in example [ exmp.1 ], could be unstable. this seems to defy the invariance properties of the collection of sets $\mathcal{Z}_i$.
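for completeness, a minimal sketch of how such a compact rpi set ( or rather, a crude box outer bound on the minimal one ) can be computed when the disturbance set is a box; the matrices and half-widths below are hypothetical, and practical implementations use tighter polytopic computations.

import numpy as np

def box_rpi_bound(M, w_half, tol=1e-9, max_iter=1000):
    """componentwise half-widths of a box containing sum_k M^k W,
    with W = {w : |w_i| <= w_half_i}; valid outer bound on the minimal rpi set."""
    assert max(abs(np.linalg.eigvals(M))) < 1.0, "closed loop must be schur"
    total = np.zeros_like(w_half)
    Mk = np.eye(M.shape[0])
    for _ in range(max_iter):
        term = np.abs(Mk) @ w_half
        total += term
        if term.max() < tol:
            break
        Mk = M @ Mk
    return total

M = np.array([[0.5, 0.1],
              [0.0, 0.6]])        # hypothetical local closed-loop matrix A_ii + B_ii K_i
w_half = np.array([0.2, 0.1])     # hypothetical box disturbance half-widths
print("box bound on the rpi set:", box_rpi_bound(M, w_half))
# constraint admissibility would then require this box ( and K_i times it )
# to fit inside the local state and input constraint sets.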
in spite of the local invariance arguments, the analysis of the local dynamics does reveal an unexpected behaviour. consider such a network and suppose that the local feedbacks are chosen such that the local closed-loop matrices $A_{ii} + B_{ii} K_i$ are schur and the hypotheses of proposition [ prop.2 ] are met, but $A + B K$ is not schur. given an initial condition inside the local rpi sets and unstable closed-loop global dynamics, there must exist a finite time at which the state of subsystem 1 leaves the set $\mathcal{Z}_1$. this means that the robust invariance property of $\mathcal{Z}_1$ has been broken, which can only mean that the disturbances affecting subsystem 1 have been _ larger _ than initially assumed. a similar argument can be made for the remaining subsystems, which prompts the following conclusion.

[ prop.1 ] suppose the local feedbacks $K_i$ are chosen such that the matrices $A_{ii} + B_{ii} K_i$ are schur but $A + B K$ is not, and that the hypotheses of proposition [ prop.2 ] are met. then there exists a finite time at which the state of some subsystem belongs to its rpi set, but its successor state leaves that set or the corresponding input leaves the admissible input set.

[ rem.2 ] proposition [ prop.1 ] implies either a state _ jump _ from inside $\mathcal{Z}_i$ to outside, or an input _ jump _ from inside $K_i \mathcal{Z}_i$ to outside. this, in some cases, would mean a discontinuity of the state trajectories, which is not characteristic of the type of ( linear ) systems being analysed.

the uncharacteristic behaviour made explicit by remark [ rem.2 ] can be explained by the fundamental relation that exists between assumption [ ass.2 ] and the notions of invariance employed in proposition [ prop.2 ] ( and by tmpc ). consider the following definition.

[ def.2 ] a tube $\mathcal{Z}_i$ corresponding to a particular linear feedback $K_i$ is said to be constraint admissible if the first part of the hypothesis of proposition [ prop.2 ] holds, i.e., if $\mathcal{Z}_i$ and $K_i \mathcal{Z}_i$ are contained in the local state and input constraint sets.

the main result of this paper is now stated.

[ thm.2 ] if there exist constraint admissible tubes for all subsystems, then the collection of local gains related to these admissible tubes fulfils assumption [ ass.2 ].

first, in view of remark [ rem.3 ], if the local disturbed dynamics admit an rpi set, then $A_{ii} + B_{ii} K_i$ is schur. given definition [ def.3 ] and standard minkowski sum properties, if $\mathcal{Z}_i$ is rpi for the local disturbed dynamics, then the local closed-loop map sends $\mathcal{Z}_i$, inflated by the disturbance set $\mathcal{W}_i$, back into $\mathcal{Z}_i$; moreover, whenever all neighbouring states lie inside their own tubes, the coupling terms remain inside $\mathcal{W}_i$ [ eq.14 ], where the first inclusion follows from minkowski sum properties and the second from the constraint admissibility of the tubes. together, these inclusions imply that the cross product of the tubes is a pi set for the closed-loop dynamics.
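written out explicitly, and reconstructing the displayed relations in the notation used above ( this is the standard tube-based argument, not a verbatim reproduction of the source's equations ), the chain of inclusions reads
\[
( A_{ii} + B_{ii} K_i )\,\mathcal{Z}_i \oplus \mathcal{W}_i \subseteq \mathcal{Z}_i ,
\qquad
\sum_{j \in \mathcal{N}_i} ( A_{ij} + B_{ij} K_j )\,\mathcal{Z}_j \subseteq \mathcal{W}_i ,
\]
the second inclusion holding because $\mathcal{Z}_j$ and $K_j \mathcal{Z}_j$ lie inside the neighbours' constraint sets ( constraint admissibility ), which were used to build $\mathcal{W}_i$. consequently, for $\mathcal{Z} = \mathcal{Z}_1 \times \dots \times \mathcal{Z}_M$,
\[
( A + B K )\,\mathcal{Z} \;\subseteq\; \prod_i \Big[ ( A_{ii} + B_{ii} K_i )\,\mathcal{Z}_i \oplus \mathcal{W}_i \Big] \;\subseteq\; \prod_i \mathcal{Z}_i \;=\; \mathcal{Z} ,
\]
so $\mathcal{Z}$ is a pi set for the closed loop.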
in view of this and remark [ rem.3 ], $A + B K$ is schur. theorem [ thm.2 ] presents sufficient conditions for a collection of locally stabilizing linear feedbacks to be globally stabilizing. unstable global dynamics, as demanded by proposition [ prop.1 ], would imply that assumption [ ass.4 ] is not met, and therefore that ( some of ) the disturbance sets are unbounded; the corresponding rpi sets are then also unbounded, and the set inclusions required for constraint admissibility of the tubes cannot be met. this implies that the behaviour described by remark [ rem.2 ] is not possible, because the tubes would not have been admissible in the first place. a collection of constraint admissible tubes is a family of jointly rpi sets. this collection forms what can be referred to as a _ square _ pi set, given that it is defined by the cross product of subsets of the local state spaces; this has also been pointed out in the literature.

[ rem.1 ] the existence of such a particular pi set is a necessary condition for the tubes to be system-wide admissible.

this clarifies the main source of conservativeness of theorem [ thm.2 ]: large constraint sets for a single subsystem might make it impossible to find admissible tubes for the affected neighbours. however, theorem [ thm.2 ] remains useful for the task of analysing local-to-global stabilizability ( the constraint sets could be shrunk, similarly to approaches taken in the literature ). in the parametrization of pifs mentioned above, each element in the family of sets is also _ square _; a marginally stable comparison system that admits the corresponding eigenvector as an initial scaling factor also provides a _ square _ pi set, and therefore a collection of local linear feedbacks that fulfils assumption [ ass.2 ]. the main purpose of the linear robust controller presented in section [ sec.1 ] was to show the fundamental result given by theorem [ thm.2 ]; on its own, this controller can only guarantee robust stabilizability of its region of attraction ( the cross product of the rpi sets $\mathcal{Z}_i$ ). however, such a controller is at the core of many tube-based dmpc architectures.
in order to make this clear, tmpc is now briefly described. tmpc aims to solve the regulation problem for a nominal undisturbed version of the plant, while ensuring that the state of the true plant remains bounded inside a _ tube _ centred ( at each time ) at the nominal undisturbed trajectory. define the nominal model for subsystem $i$ as $z_i^+ = A_{ii} z_i + B_{ii} v_i$. the control action applied to the true plant at each time step is then computed via the control policy $u_i = v_i + K_i ( x_i - z_i )$: the first term is a stabilizing control action for the nominal model, and the second one is designed to reject the disturbances. the local feedback $K_i$ is any stabilizing gain for the pair $( A_{ii}, B_{ii} )$. the dynamics of the trajectory deviation $e_i = x_i - z_i$ are therefore $e_i^+ = ( A_{ii} + B_{ii} K_i ) e_i + w_i$, where the disturbance $w_i$ represents the unknown dynamical coupling, and therefore belongs to the set $\mathcal{W}_i$ defined before ( the exact same dynamics as in the robust controller of the previous section ). the pair $( z_i, v_i )$ is obtained from an open-loop optimal control problem, standard in nominal mpc implementations, posed over the nominal dynamics and suitably tightened constraints [ eq.9 ].

[ thm.1 ] if ( i ) $\mathcal{Z}_i$ is an rpi set for the deviation dynamics and the disturbance set $\mathcal{W}_i$, ( ii ) the cost and terminal ingredients are designed following standard mpc arguments to ensure asymptotic stability of the nominal undisturbed system, and ( iii ) the tightened constraint set fulfils assumption [ ass.1 ], then ( a ) the set $\mathcal{Z}_i$ is robustly asymptotically stable for the subsystem in closed loop with the tube policy, and ( b ) constraints are met at all times.

the reader is referred to the tmpc literature for the proof. note that, at every step, the constraint tying the nominal initial state to the measured state may be replaced by the free evolution of the nominal model ( i.e., independent time evolution of the nominal system ). once the nominal trajectories have settled at the origin, the closed-loop dynamics reduce to $x^+ = ( A + B K ) x$, which, again, could in principle show the behaviour described by proposition [ prop.1 ] and remark [ rem.2 ]. however, theorem [ thm.2 ] shows that this is not the case: if the hypotheses of theorem [ thm.1 ] are met for all subsystems, then $A + B K$ is schur.
hypotheses ( i ) and ( iii ) of theorem [ thm.1 ] are equivalent to the existence of a constraint admissible tube for subsystem $i$; hence, if they are met for all $i$, the hypothesis of theorem [ thm.2 ] is met, and the collection of local linear feedbacks fulfils assumption [ ass.2 ].

in order to illustrate the main result of this paper, consider a global plant composed of four trucks modelled as point masses, aligned in a horizontal plane ( see fig. [ fig.1 ] ). each truck is dynamically coupled to its immediate neighbours through springs and dampers. the control objective is to steer the whole system towards an arbitrary equilibrium point using locally applied horizontal forces. this plant is used as a numerical example for various dmpc algorithms in the literature.

( fig. [ fig.1 ] : schematic of the four trucks, each coupled to its immediate neighbours through a spring and a damper, with a locally applied horizontal force acting on each truck. )

the state vector of the global plant is composed of the horizontal positions and velocities of all trucks, so the global state dimension is eight.
a single subsystem is associated with each truck; the local state and input represent the position, velocity and applied force of the corresponding truck. a fixed sampling time is used to discretize the system ( the numerical values of the resulting matrices are omitted ). suppose first that the same homogeneous local constraints are enforced for all trucks. table [ tb.1 ] lists a collection of locally stabilizing linear feedbacks which produce admissible tubes; these local gains correspond to the lqr feedbacks obtained for given state and input weights.

( table [ tb.1 ] : locally stabilizing lqr feedback gains for the four subsystems. )

figure [ fig.2 ] shows that the rpi set is contained inside the state constraint set for every subsystem ( the corresponding input inclusion is also met ). in view of theorem [ thm.2 ] it must then be that the block-diagonal feedback is globally stabilizing; this is indeed the case, with the spectral radius of $A + B K$ smaller than one.

( fig. [ fig.2 ] : local rpi sets and state constraint sets for the four subsystems; each rpi set is contained in the corresponding constraint set. )
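the following sketch shows how such a check can be carried out numerically; the masses, spring and damper constants, sampling time and lqr weights are hypothetical ( the source omits the numerical values ), so the printed spectral radii only illustrate the procedure, not the figures reported above.

import numpy as np

m, k, c, Ts = 1.0, 1.0, 0.5, 0.1          # hypothetical mass, spring, damper, sampling time
N = 4                                      # number of trucks

# continuous-time chain: state of truck i is (position_i, velocity_i)
Ac = np.zeros((2 * N, 2 * N))
Bc = np.zeros((2 * N, N))
for i in range(N):
    Ac[2 * i, 2 * i + 1] = 1.0
    Bc[2 * i + 1, i] = 1.0 / m
    for j in (i - 1, i + 1):
        if 0 <= j < N:
            Ac[2 * i + 1, 2 * i] -= k / m
            Ac[2 * i + 1, 2 * i + 1] -= c / m
            Ac[2 * i + 1, 2 * j] += k / m
            Ac[2 * i + 1, 2 * j + 1] += c / m

A = np.eye(2 * N) + Ts * Ac               # forward-euler discretization (a simplification)
B = Ts * Bc

def dlqr(Ad, Bd, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):                 # riccati recursion
        Kk = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
        P = Q + Ad.T @ P @ (Ad - Bd @ Kk)
    return -Kk                             # convention u = K x

K = np.zeros((N, 2 * N))
for i in range(N):
    Ai = A[2 * i:2 * i + 2, 2 * i:2 * i + 2]   # local (decoupled) model of truck i
    Bi = B[2 * i:2 * i + 2, i:i + 1]
    Ki = dlqr(Ai, Bi, np.eye(2), np.array([[1.0]]))
    K[i, 2 * i:2 * i + 2] = Ki.ravel()         # block-diagonal global feedback
    print("truck", i, "local rho:", max(abs(np.linalg.eigvals(Ai + Bi @ Ki))))

print("global rho:", max(abs(np.linalg.eigvals(A + B @ K))))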
suppose now that, for whatever reason, truck 2 is allowed constraints three times the size of those of the other subsystems. this effectively means that the disturbance sets of its neighbours grow accordingly. although this new disturbance set is still contained in the state constraint set, fig.
[ fig.3 ] shows that the same gains from table [ tb.1 ] now produce, for one of the neighbours of truck 2, an rpi set that is not contained inside its state constraint set ( the rpi set for subsystem 3 also increases in size ). this implies that, although the collection of local feedbacks is in fact still globally stabilizing, theorem [ thm.2 ] cannot be used to guarantee it.

( fig. [ fig.3 ] : local rpi sets and constraint sets obtained with the gains of table [ tb.1 ] under the enlarged constraints of truck 2. )

heterogeneous constraints are not the only source of conservatism that theorem [ thm.2 ] suffers from. as made explicit by remark [ rem.1 ], a necessary condition for the tubes to be system-wide admissible is that a square pi set exists. consider a global system composed of two coupled scalar subsystems: the loop can be closed with local gains such that assumption [ ass.2 ] is fulfilled ( the local and the global closed-loop matrices are all schur ); however, it is easy to show that there is no _ square _ pi set for the closed loop, and therefore, no matter the constraints, the tubes corresponding to the block-diagonal linear feedback will never be simultaneously admissible.
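the obstruction can be probed numerically; the closed-loop matrix below is hypothetical ( the source does not provide the values of the two-subsystem example ), and the test relies on the fact that a box symmetric about the origin with half-widths $c_i$ is positively invariant for a linear map exactly when the entrywise absolute value of the map sends $c$ below $c$ componentwise, which requires the spectral radius of the absolute-value matrix to be at most one.

import numpy as np

M = np.array([[0.4,  0.9],
              [-0.9, 0.4]])                       # hypothetical A + B K

rho = max(abs(np.linalg.eigvals(M)))              # ~0.985 -> schur
rho_abs = max(abs(np.linalg.eigvals(np.abs(M))))  # 1.3    -> no symmetric-box pi set

print("rho(M)   =", rho)
print("rho(|M|) =", rho_abs)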
the properties of many distributed mpc techniques usually rely on structured stabilizability assumptions and on standard notions of invariance. in this paper, a fundamental relation between these two concepts has been made explicit. theorem [ thm.2 ] shows that the constraint admissibility of very simple local robust controllers is a sufficient condition for the global plant to be block-diagonally stabilizable. the main source of conservatism of theorem [ thm.2 ] is the structure requirement: a necessary condition for the tubes to be system-wide feasible is that the closed-loop system admits a _ square _ pi set. future work will focus on finding conditions on the coupling terms under which such a particular type of pi set exists.
this paper studies a fundamental relation that exists between the stabilizability assumptions usually employed in distributed model predictive control implementations and the corresponding notions of invariance implicit in such controllers. the relation is made explicit in the form of a theorem that presents sufficient conditions for global stabilizability. it is shown that constraint admissibility of local robust controllers is sufficient for the global closed-loop system to be stable, and it is discussed how these controllers relate to more complex forms of control such as tube-based distributed model predictive control implementations.

keywords: predictive control, invariant systems, decentralized control, stability analysis, invariance.
the characterization of isolated attosecond pulses has played an important role in the development of attosecond science. the generation and application of ever shorter attosecond extreme-ultraviolet ( xuv ) pulses relies on knowledge of their time-domain properties, which can be obtained by means of attosecond streaking measurements. so far, the main functions of attosecond streaking are ( i ) to characterize the field of an attosecond pulse and ( ii ) to temporally resolve a physical process on the attosecond scale. here, we are concerned with the former application of attosecond streaking, that of characterizing an attosecond pulse. much effort has been devoted to the development of methods for extracting physical information from the streaking measurement, the current state of the art being the frog retrieval algorithm. the frog algorithm has already been used to characterize the shortest attosecond pulses, and to uncover a measured delay of roughly 20 attoseconds between photoemissions from the 2s and 2p sub-shells of neon. it is relatively robust, and provides a wealth of information about the temporal characteristics of the attosecond and laser fields. however, the application of frog to attosecond streaking imposes quite stringent experimental requirements, such as a sufficient number of recorded spectra with a delay step between them on the order of the attosecond pulse's duration. these experimental parameters become unwieldy as the duration of attosecond pulses approaches the atomic unit of time. moreover, the frog algorithm is a somewhat complicated numerical optimization procedure, whose output ( the attosecond field and the laser field ) is not transparently related to the input ( the set of streaked spectra ); errors in the reconstructed pulses are therefore difficult to interpret, owing to the frog algorithm's black-box nature. although frog provides a complete characterization of the attosecond xuv field, the _ duration _ of the attosecond pulse is the primary quantity that will be interrogated as attosecond streaking continues to expand beyond its original scope into various research fields. in this article, we introduce a simple and robust method for quantifying the chirp of an attosecond pulse, based on an analytical formula we derive from laser-dressed photoelectron trajectories. using this formula, we develop a method that _ directly _ evaluates the attosecond pulse's group-delay dispersion from a sequence of streaked spectra, which in turn sets the pulse's duration provided its spectrum is known. our method avoids the stringent experimental conditions required for the attosecond frog technique, and provides accurate results with very few electron spectra in a matter of seconds. we begin this article with the derivation of the analytical expression for the change in photoelectron bandwidth due to the streaking effect, and then introduce our method with a numerical example. all quantities are expressed in atomic units unless otherwise stated.

let us first consider an attosecond xuv pulse whose electric field is an envelope modulated at a carrier frequency with a ( higher-order ) temporal phase [ attosecond_field ]; the spectrum of the attosecond pulse is centered at the carrier frequency, with small variations in frequency due to the higher-order temporal phase. the attosecond pulse launches electron trajectories that are parameterized by an initial time as well as an electron energy.
due to the attosecond pulse's finite _ bandwidth _, we consider the energy as an independent variable, while the initial time is an independent variable resulting from the finite _ duration _ of the attosecond pulse. thus, the set of trajectories is described by a time-energy distribution over these two variables. the final energy of an electron launched at some moment in a continuum permeated by a near-infrared ( nir ) laser field is then given by [ final_energy ], where we define the instantaneous frequency ( carrier frequency plus the time derivative of the temporal phase ) due to the chirp of the attosecond pulse, and the laser field enters through its vector potential. since the change in frequency over the temporal profile of the attosecond pulse is much smaller than the central frequency, the last term in ( [ final_energy_approx ] ) is comparatively small and can be dropped, leading to the simple relation for the shift of the photoelectron spectrum. it is known that the spectral shift alone is not sufficient to obtain information about the attosecond pulse's chirp, because the final energy is hardly sensitive to the temporal phase of the attosecond pulse. the main manifestation of the attosecond chirp in the streaking measurement is the change in breadth of the streaked photoelectron spectrum. to describe this effect, we interpret ( [ final_energy ] ) as a mapping of the initial time and energy of an electron trajectory to a final energy ( e.g. measured at the detector ), and we consider small changes in the final energy with respect to small changes in the initial energy and time of the trajectory. the total differential of ( [ final_energy ] ) is given by [ total_differential ], where we again neglect the small terms. the temporal phase of the attosecond pulse appears in ( [ total_differential ] ) through its second time derivative, which defines the _ chirp _ of the attosecond pulse; the laser field now also enters through its electric field. thus, the chirp of the attosecond pulse and the electric field of the laser pulse both influence the spread in final energies resulting from the streaking effect. to proceed further, we interpret the effects of the nir field on the time-energy distribution of electron trajectories, as described by ( [ total_differential ] ), in a straightforward manner. inspection of ( [ total_differential ] ) shows that the nir field imparts an additional energy sweep to the photoelectron, resulting in a total chirp that adds a field-induced term to the intrinsic chirp; furthermore, the nir field re-scales the energy spread by a factor involving the vector potential. as a result, both the nir electric field and the nir vector potential have a role in modifying the breadth of the photoelectron spectrum. in order to account for the effects of the streaking field, we recall that the attosecond electron wave packet can be viewed as a replica of the attosecond pulse. we model this photoelectron replica as a gaussian wave packet centred at the central photoelectron energy. naturally, since the electron trajectories are launched by the attosecond pulse, the duration of the electron wave packet should be nearly the same as that of the attosecond pulse; and since the wave packet is a replica of the pulse, its chirp is the same as that of the attosecond pulse. for simplicity, we assume the chirp to be constant, and we also assume that the attosecond pulse is shorter than any relevant time scale of the nir field, so that the nir electric field and vector potential are evaluated at the central time of the attosecond pulse.
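for orientation, a standard semiclassical form of the laser-dressed final energy and of its total differential, consistent with the description above though not taken verbatim from the source, reads
\[
W_f ( t, W ) \;=\; W \;-\; \sqrt{2 W}\, A_L ( t ) \;+\; \tfrac{1}{2} A_L^2 ( t ) ,
\]
\[
\mathrm{d} W_f \;\approx\; \Big( 1 - \frac{ A_L ( t ) }{ \sqrt{2 W} } \Big)\, \mathrm{d} W \;+\; \sqrt{2 W}\, E_L ( t )\, \mathrm{d} t ,
\]
using $\dot A_L = - E_L$ and neglecting terms that are small in the sense discussed above; the prefactor of $\mathrm{d} W$ is the energy re-scaling factor, and the term proportional to $E_L$ is the laser-induced energy sweep that adds to the intrinsic chirp.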
to this end, we modify the wave packet's central energy and chirp according to the spectral shift and the laser-induced chirp discussed above [ modelling_a_streaked_wave_packet ]. with these substitutions, the _ streaked _ photoelectron wave packet is modelled accordingly. to obtain an expression for the bandwidth of the streaked photoelectron spectrum, we note that the streaked spectrum is simply the fourier transform of the streaked wave packet. since the streaked wave packet is a gaussian, this fourier transform can be carried out analytically, yielding the expression [ streaked_bandwidth ] for the bandwidth of the streaked spectrum, in which the attosecond pulse enters through its bandwidth and its group-delay dispersion ( gdd ), defined as the second derivative of the spectral phase of the attosecond pulse. the expression also involves the attosecond pulse's time-bandwidth product, whose fourier-limited value ( for a gaussian spectrum ) is one half; all bandwidths and durations are taken as standard deviations of their respective distributions. according to ( [ streaked_bandwidth ] ), the gdd determines the width of the streaked spectrum as a function of the xuv-nir delay. provided that the characteristics of the field-free spectrum ( central energy, bandwidth and time-bandwidth product ) as well as those of the laser field ( electric field and vector potential ) are known, the gdd remains the only free parameter. in writing ( [ streaked_bandwidth ] ), we also explicitly included the energy re-scaling pre-factor. similar but less general expressions for the streaked photoelectron bandwidth were previously derived from the semi-classical expression for streaking. those expressions consider photoionization at the zero-crossing of the vector potential, where there is no spectral shift but only a change in spectral bandwidth due to the nir field; they therefore do not contain the bandwidth re-scaling factor, which is needed to accurately represent the bandwidth of the streaked spectra at arbitrary delay times, when the nir field simultaneously shifts the photoelectron spectrum and changes its bandwidth. although ( [ streaked_bandwidth ] ) was deduced assuming a gaussian wave packet, it actually applies to more general pulse shapes, owing to the fact that the underlying duration-bandwidth-gdd relation holds for arbitrary spectra with a constant gdd ( see appendix a ). the following section presents numerical examples in further support of this claim.

equation ( [ streaked_bandwidth ] ) serves as the basis for our method to extract the attosecond chirp from a streaking measurement. our procedure is straightforward: we evaluate the first moments ( centres of mass ) of the streaked spectra to obtain the laser field's vector potential, which in turn gives us the laser's electric field; we also compute the curve of standard deviations of the measured streaked spectra as a function of the xuv-nir delay; lastly, we find the attosecond chirp ( the only free parameter in ( [ streaked_bandwidth ] ) ) which minimizes the discrepancy between the widths obtained from the set of streaked spectra and those given by the model. to compare the two, we define a ( least-squares ) figure of merit [ figure_of_merit ], where the sums range over the xuv-nir delays. the goal of our procedure is to find the gdd that best reproduces the measured width curve according to model ( [ streaked_bandwidth ] ).

as an example, we consider the case of a non-gaussian xuv pulse. this pulse has a constant, nonzero gdd. however, since its spectrum ( figure [ streaking_example]-c ) is irregular, i.e. it is asymmetric and contains some fine structure, its chirp is time-dependent.
the streaking field is a nir pulse given by with , yielding a full width at half maximum ( fwhm ) duration , corresponding to a central wavelength of and with , giving a peak intensity of .for this example , we consider carrier - envelope phase values of ( figure [ streaking_example]-a ) and ( figure [ streaking_example]-b ) . the simulated streaking measurements , shown in figure [ streaking_example]-a and figure [ streaking_example]-b , are composed of a sequence of streaked spectra computed for different delays between the xuv and nir fields by propagating the time - dependent schrdinger equation ( tdse ) using a split - step fft scheme .the hamiltonian is that of a single electron in one dimension , assuming a soft - core potential with an ionization energy .the results of our analytical chirp evaluation ( ace ) procedure , applied to the spectrograms shown in figures [ streaking_example]-a and [ streaking_example]-b , are shown in figures [ chirp_extraction ] and [ chirp_extraction_sine_pulse ] . in both cases ,we have applied ace to different subsets of streaked spectra , by considering a varying number of spectra about the central delay value . for the case , figure [ chirp_extraction]-a shows a false - color plot of the figure of merit as defined in ( [ figure_of_merit ] ) .darker areas correspond to a smaller value of .when too few spectra are considered , figure [ chirp_extraction]-a shows a local minimum near which disappears as more spectra ( ) are considered . nonetheless , figure [ chirp_extraction]-b shows that we recover the exact gdd ( the dashed line ) from the global minimum to within with as few as three spectra .as increases , the global minimum eventually stabilizes around the red dashed line representing the exact gdd , and ace converges nearly to the exact value .figure [ chirp_extraction]-c shows that the model ( [ streaked_bandwidth ] ) reproduces the correct curve for the exact gdd .in contrast , we found that the attosecond frog retrieval fails to converge when fewer than 25 spectra are included , for which it recovers a gdd . for , figure [chirp_extraction_sine_pulse]-ashows that the figure of merit has only one minimum as a function of gdd .this minimum quickly converges to the correct gdd as more spectra are considered in the evaluation , as displayed in figure [ chirp_extraction_sine_pulse]-b , and is already accurate to within for spectra .figure [ chirp_extraction_sine_pulse]-c shows that the model ( [ streaked_bandwidth ] ) once again reproduces the correct curve ( hollow circles ) of streaked breadths for the exact gdd .the main advantage of the ace procedure is that it requires very few spectra .as long as is properly sampled by the delay step between the spectra , there is enough information for ace to recover the gdd of the attosecond pulse .in contrast , frog requires the delay step to be on the order of the attosecond pulse s duration . to illustrate this point, we apply ace to a subset of the spectra shown in figure [ streaking_example]-a and [ streaking_example]-b . specifically , we consider spectra over the interval ] .even with so few spectra , ace still recovered accurate gdd s of and for and , respectively . on the other hand, frog failed to converge to anything meaningful in both cases , most likely because the delay step was too large . 
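The streaking simulations above are obtained by split-step FFT propagation of the one-dimensional TDSE. The fragment below is a generic sketch of that scheme in the length gauge with a soft-core potential; it is not the authors' code, and the grid sizes, the soft-core parameter `a`, and the time step are illustrative assumptions.

```python
import numpy as np

# -- grid and soft-core potential (illustrative parameters, atomic units) --
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
a = 1.4                                   # soft-core parameter (assumption)
V0 = -1.0 / np.sqrt(x ** 2 + a ** 2)      # soft-core binding potential
dt = 0.05                                 # time step (assumption)

def step(psi, E_field):
    """One split-operator step psi(t) -> psi(t+dt), length gauge.

    E_field: total electric field (XUV + NIR) evaluated at mid-step.
    """
    V = V0 + x * E_field                  # dipole interaction
    psi = np.exp(-0.5j * V * dt) * psi    # half kick in position space
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))  # free drift
    psi = np.exp(-0.5j * V * dt) * psi    # second half kick
    return psi
```

After the pulses are over, the photoelectron spectrum can be extracted, for example, by projecting the outgoing part of the wave function onto continuum momenta or by a spectral window method; the choice does not affect the split-step propagation itself.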
to further demonstrate ace s robustness against a non - gaussian spectrum, we consider a clipped version of the xuv spectrum shown in figure [ streaking_example]-c , for which we remove energy components above .experimentally , such a sharp edge in the xuv spectrum might result from the beam s transmission through a metallic filter . using the clipped xuv spectrum , we compute sets of streaked photoelectron spectra , with the same parameters as those displayed in figure [ streaking_example]-a and [ streaking_example]-b . in spite of this heavy clipping ,ace recovers gdd s of and for and , respectively . in comparison, frog recovers a gdd of for both and .as previously mentioned , these examples assume a constant gdd over an irregular spectral distribution , resulting in a chirp that depends on time .since expression ( [ streaked_bandwidth])which is at the core of the ace procedure assumes a constant chirp in time , then the chirp parameter is interpreted as the _ average _ chirp over the attosecond pulse s temporal profile .conversely , if a non - uniform gdd was considered , then ace would have recovered the _ average _ gdd over the spectral profile . as an additional verification of ace s robustness ,we investigate the effect of noise in the streaked spectra . to this end , we add noise to the sets of spectra shown in figure [ streaking_example]-a and [ streaking_example]-b .we assume that the number of counts in a spectral bin follows a poisson distribution , with an expectation value proportional to the spectral intensity ( we set for the peak of the spectrogram , corresponding to a very low count rate ) . from these considerations, we compute the noisy spectra which are shown in figure [ noisy_spectrograms ] . even under such nefarious conditions , ace recovers accurate values of the gdd : from the spectrogram shown in figure [ noisy_spectrograms]-a , and from the one in figure [ noisy_spectrograms]-b . in comparison ,frog recovers gdd s of and , respectively .this example demonstrates that ace can tolerate very noisy spectra , and moreover that it is robust against errors in the vector potential , as determined from the streaked spectra .in conclusion , we have derived a general analytical expression ( [ streaked_bandwidth ] ) for the change in spectral breadth due to the streaking effect by considering the trajectories of a photoelectron ejected by an isolated attosecond pulse in a laser field .we have used this equation as a basis for a method to directly extract the attosecond chirp from a sequence of streaked spectra .in contrast to the attosecond frog retrieval , the ace procedure does not require streaked spectra to be recorded with a delay step on the order of the attosecond pulse duration : it only requires the delay step to properly sample the streaking field .this alleviates many of the experimental constraints related to the current approaches to characterize isolated attosecond pulses .in addition , the ace procedure is simple to implement , robust against experimental artifacts , and fast taking seconds to execute and requiring very few ( ) streaked spectra .this makes ace ideal for real - time diagnostics in attosecond streaking measurements .the authors are grateful for discussions with f. 
krausz .this work was supported by the max planck society and the dfg cluster of excellence : munich centre for advanced photonics ( map ) .the final publication is available at www.springerlink.comthe following is a proof of the general relation between the duration , the fourier - limited duration , the bandwidth and the group - delay dispersion ( gdd ) for a pulse with an arbitrary spectrum and a constant gdd ; , and are taken as standard deviations of their respective distributions .the duration is defined as the standard deviation of , which is the square - root of the variance (t)\right|^2\mathrm{d}t.\end{aligned}\ ] ] in the following derivation , the prime symbol ( `` '' ) denotes differentiation with respect to the argument and the pulse is normalized according to .assuming is frequency - independent , then ( [ spec_profile ] ) implies . inserting this expression for into the rightmost - hand - side of ( [ duration_tau ] ) , we obtain in analogy to ( [ duration_tau ] ) , the bandwidth - limited duration is given by (t)\right|^2\mathrm{d}t.\end{aligned}\ ] ] now , and are fourier transforms of each other .thus , from parseval s theorem , we have where ( [ bandwidth - limited_duration_tau0 ] ) in combination with parseval s theorem was used for the last equation on the rhs of ( [ parseval_i ] ) . \mathrm{d}t\\ \label{duration_tau_4 } & = \tau_0 ^ 2+\gamma^2\delta^2 + 2\gamma\int_{-\infty}^{\infty}\mathfrak{i}[\omega \tilde{f}'_0(\omega)\tilde{f}^*_0(\omega)]\mathrm{d}\omega,\end{aligned}\ ] ] where we have used (t) ] to obtain ( [ duration_tau_4 ] ) .now , if the pulse s gdd is constant over its spectrum , is a strictly _ real _ quantity , and therefore the last term on the rhs of ( [ duration_tau_4 ] ) is equal to zero , yielding ( [ bandiwdth - dispersion - duration_relation ] ) . relation ( [ bandiwdth - dispersion - duration_relation ] ) is the reason why the ace procedure can be applied for arbitrary xuv spectra ( of course , provided that the attosecond pulse is short compared to the half - period of the streaking field ) .
We derive an analytical expression that relates the breadth of a streaked photoelectron spectrum to the group-delay dispersion of an isolated attosecond pulse. Based on this analytical expression, we introduce a simple, efficient and robust procedure to instantly extract the attosecond pulse's chirp from the streaking measurement. We show that our method is robust against experimental artifacts.
in a language understanding system where full , linguistically - motivated analyses of utterances are desired , the linguistic analyser needs to generate possible semantic representations and then choose the one most likely to be correct .if the analyser is a component of a pipelined speech understanding system , the problem is magnified , as the speech recognizer will typically deliver not a word string but an n - best list or a lattice ; the problem then becomes one of choosing between multiple analyses of several competing word sequences . in practice, we can only come near to satisfactory disambiguation performance if the analyser is trained on a corpus of utterances from the same source ( domain and task ) as those it is intended to process . since this needs to be done afresh for each new source , and since a corpus of several thousand sentences will normally be needed , economic considerations mean it is highly desirable to do it as automatically as possible .furthermore , those aspects that can not be automated should as far as possible not depend on the attention of experts in the system and in the representations it uses . the spoken language translator ( slt ; becket _ et al _ , forthcoming ; rayner and carter , 1996 and 1997 ) is a pipelined speech understanding system of the type assumed here .it is constructed from general - purpose speech recognition , language processing and speech synthesis components in order to allow relatively straightforward adaptation to new domains .linguistic processing in the slt system is carried out by the core language engine ( cle ; alshawi , 1992 ) . given an input string , n - best list or lattice, the cle applies unification - based syntactic rules and their corresponding semantic rules to create zero or more quasi - logical form ( qlf , described below ; alshawi , 1992 ; alshawi and crouch , 1992 ) analyses of it ; disambiguation is then a matter of selecting the correct ( or at least , the best available ) qlf .this paper describes the treebanker , a program that facilitates supervised training by interacting with a non - expert user and that organizes the results of this training to provide the cle with data in an appropriate format .the cle uses this data to analyse speech recognizer output efficiently and to choose accurately among the interpretations it creates .i assume here that the coverage problem has been solved to the extent that the system s grammar and lexicon license the correct analyses of utterances often enough for practical usefulness ( rayner , bouillon and carter , 1995 ) .the examples given in this paper are taken from the atis ( air travel inquiry system ; hemphill _ et al _ , 1990 ) domain .however , wider domains , such as that represented in the north american business news ( nab ) corpus , would present no particular problem to the treebanker as long as the ( highly non - trivial ) coverage problems for those domains were close enough to solution .the examples given here are in fact all for english , but the treebanker has also successfully been used for swedish and french customizations of the cle ( gambck and rayner , 1992 ; rayner , carter and bouillon , 1996 ) .in the version of qlf output by the cle s analyser , content word senses are represented as predicates and predicate - argument relations are shown , so that selecting a single qlf during disambiguation entails resolving content word senses and many structural ambiguities .however , many function words , particularly prepositions , are not resolved to 
senses , and quantifier scope and anaphoric references are also left unresolved .some syntactic information , such as number and tense , is represented .thus qlf encodes quite a wide range of the syntactic and semantic information that can be useful both in supervised training and in run - time disambiguation .qlfs are designed to be appropriate for the inference or other processing that follows utterance analysis in whatever application ( translation , database query , etc . )the cle is being used for .however , they are not easy for humans to work with directly in supervised training .even for an expert , inspecting all the analyses produced for a sentence is a tedious and time - consuming task .there may be dozens of analyses that are variations on a small number of largely independent themes : choices of word sense , modifier attachment , conjunction scope and so on .further , if the representation language is designed with semantic and computational considerations in mind , there is no reason why it should be easy to read even for someone who fully understands it . andindeed , as already argued , it is preferable that selection of the correct analysis should as far as possible not require the intervention of experts at all .the treebanker ( and , in fact , the cle s preference mechanism , omitted here for space reasons but discussed in detail by becket _et al _ , forthcoming ) therefore treats a qlf as completely characterized by its_ properties _ : smaller pieces of information , extracted from the qlf or the syntax tree associated with it , that are likely to be easy for humans to work with .the treebanker presents instances of many kinds of property to the user during training . however , its functionality in no way depends on the specific nature of qlf , and in fact its first action in the training process is to extract properties from qlfs and their associated parse trees , and then never again to process the qlfs directly .the database of analysed sentences that it maintains contains only these properties and not the analyses themselves. it would therefore be straightforward to adapt the treebanker to any system or formalism from which properties could be derived that both distinguished competing analyses and could be presented to a non - expert user in a comprehensible way .many mainstream systems and formalisms would satisfy these criteria , including ones such as the university of pennsylvania treebank ( marcus _ et al _ , 1993 ) which are purely syntactic ( though of course , only syntactic properties could then be extracted ) .thus although i will ground the discussion of the treebanker in its use in adapting the cle system to the atis domain , the work described is of much more general application .many of the properties extracted from qlfs can be presented to non - expert users in a form they can easily understand . those properties that hold for some analyses of a particular utterance but not forothers i will refer to as _ discriminants _ ( dagan and itai , 1994 ; yarowsky , 1994 ) .discriminants that fairly consistently hold for correct but not ( some ) incorrect analyses , or vice versa , are likely to be useful in distinguishing correct from incorrect analyses at run time . 
thus for training on an utterance to be effective , we need to provide enough `` user - friendly '' discriminants to allow the user to select the correct analyses , and as many as possible `` system - friendly '' discriminants that , over the corpus as a whole , distinguish reliably between correct and incorrect analyses . ideally , a discriminant will be both user - friendly and system - friendly , but this is not essential . in the rest of this paper we will only encounter user - friendly properties and discriminants .the treebanker presents properties to the user in a convenient graphical form , exemplified in figure [ snapshot1 ] for the sentence `` show me the flights to boston serving a meal '' .( 6.5,3.0 ) initially , all discriminants are displayed in inverse video to show they are viewed as undecided . through the disambiguation process , discriminants and the analyses they apply to can be undecided , correct ( `` good '' , shown in normal video ) , or incorrect ( `` bad '' , normal video but preceded a negation symbol `` ` ~ ` '' ) .the user may click on any discriminant with the left mouse button to select it as correct , or with the right button to select it as incorrect .the types of property currently extracted , ordered approximately from most to least user - friendly , are as follows ; examples are taken from the six qlfs for the sentence used in figure [ snapshot1 ] . *_ constituents _ : advp for `` serving a meal '' ( a discriminant , holding only for readings that could be paraphrased `` show me the flights to boston while you re serving a meal '' ) ; vp for `` serving a meal '' ( holds for all readings , so not a discriminant and not shown in figure [ snapshot1 ] ) . * _ semantic triples _ : relations between word senses mediated usually by an argument position , preposition or conjunction ( alshawi and carter , 1994 ) .examples here ( abstracting from senses to root word forms , which is how they are presented to the user ) are `` flight to boston '' and `` show -to boston '' ( the `` - '' indicates that the attachment is not a low one ; this distinction is useful at run time as it significantly affects the likelihood of such discriminants being correct ) .argument - position relations are less user - friendly and so are not displayed .+ when used at run time , semantic triples undergo abstraction to a set of semantic classes defined on word senses . for example , the obvious senses of `` boston '' , `` new york '' and so on all map onto the class name cc_city .these classes are currently defined manually by experts ; however , only one level of abstraction , rather than a full semantic hierarchy , seems to be required , so the task is not too arduous . *_ word senses _ : `` serve '' in the sense of `` fly to '' ( `` does united serve dallas ? '' ) or `` provide '' ( `` does that flight serve meals ? '' ) . *_ sentence type _ : imperative sentence in this case ( other moods are possible ; fragmentary sentences are displayed as `` elliptical np '' , etc ) .* _ grammar rules used _ :the rule name is given .this can be useful for experts in the minority of cases where their intervention is required . 
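A minimal sketch of how analyses, properties and discriminants could be represented is shown below. This is only an illustration of the scheme described above, not the CLE/TreeBanker implementation, and all names are invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Property:
    kind: str       # e.g. "constituent", "triple", "word_sense", "sentence_type", "rule"
    value: str      # e.g. "NP: the flights to boston serving a meal"

@dataclass
class Analysis:
    qlf_id: int
    properties: frozenset      # frozenset of Property objects
    status: str = "undecided"  # "undecided" | "good" | "bad"

def discriminants(analyses):
    """Properties that hold for some, but not all, of the competing analyses."""
    all_props = set().union(*(a.properties for a in analyses))
    return {p for p in all_props
            if 0 < sum(p in a.properties for a in analyses) < len(analyses)}
```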
in all, 27 discriminants are created for this sentence , of which 15 are user - friendly enough to display , and a further 28 non - discriminant properties may be inspected if desired .this is far more than the three distinct differences between the analyses ( `` serve '' as `` fly to '' or `` provide '' ; `` to boston '' attaching to `` show '' or `` flights '' ; and , if `` to boston '' does attach to `` flights '' , a choice between `` serving a meal '' as relative or adverbial ) .the effect of this is that the user can give attention to whatever discriminants he finds it easiest to judge ; other , harder ones will typically be resolved automatically by the treebanker as it reasons about what combinations of discriminants apply to which analyses .the first rule the treebanker uses in this reasoning process to propagate decisions is : * if an analysis ( represented as a set of discriminants ) has a discriminant that the user has marked as bad , then the analysis must be bad .this rule is true by definition .the other rules used depend on the assumption that there is exactly one good analysis among those that have been found , which is of course not true for all sentences ; see section [ additional ] below for the ramifications of this . *if a discriminant is marked as good , then only analyses of which it is true can be good ( since there is at most one good analysis ) .* if a discriminant is true only of bad analyses , then it is bad ( since there is at least one good analysis ) . *if a discriminant is true of all the undecided analyses , then it is good ( since it must be true of the correct one , whichever it is ) .thus if the user selects `` the flights to boston serving a meal '' as a correct np , the treebanker applies rule r2 to narrow down the set of possible good analyses to just two of the six ( hence the item `` 2 good qlfs '' at the top of the control menu in the figure ; this is really a shorthand for `` 2 possibly good qlfs '' ) .it then applies r1-r4 to resolve _ all _ the other discriminants except the two for the sense of `` serve '' ; and only those two remain highlighted in inverse video in the display , as shown in figure [ snapshot2 ] .( 6.5,3.0 ) so , for example , there is no need for the user explicitly to make the trickier decision about whether or not `` serving a meal '' is an adverbial phrase .the user simply clicks on `` serve = provide '' , at which point r2 is used to rule out the other remaining analysis and then r3 to decide that `` serve = fly to '' is bad .the treebanker s propagation rules often act like this to simplify the judging of sentences whose discriminants combine to produce an otherwise unmanageably large number of qlfs . as a further example ,the sentence `` what is the earliest flight that has no stops from washington to san francisco on friday ? '' yields 154 qlfs and 318 discriminants , yet the correct analysis may be obtained with only two selections . selecting `` the earliest flight ... 
on friday '' as an np eliminates all but twenty of the analyses produced , and approving `` that has no stops '' as a relative clause eliminates eighteen of these , leaving two analyses which are both correct for the purposes of translation .152 incorrect analyses may thus be dismissed in less than fifteen seconds .( 6.5,4.18 ) the utterance `` show me the flights serving meals on wednesday '' demonstrates the treebanker s facility for presenting the user with multiple alternatives for determining correct analyses .as shown in figure [ snapshot3 ] , the following decisions must be made : * does `` serving '' mean `` flying to '' or `` providing '' ? *does `` on wednesday '' modify `` show '' , `` flights '' , `` serving '' or `` meals '' ? * does `` serving '' modify `` show '' or `` flights '' ? but this can be done by approving and rejecting various constituents such as `` the flights serving meals '' and `` meals on wednesday '' , or through the selection of triples such as `` flight -on wednesday '' .whichever method is used , the user can choose among the 14 qlfs produced for this sentence within twenty seconds .although primarily intended for the disambiguation of corpus sentences that are within coverage , the treebanker also supports the diagnosis and categorization of coverage failures .sometimes , the user may suspect that _none _ of the provided analyses for a sentence is correct .this situation often becomes apparent when the treebanker ( mis-)applies rules r2-r4 above and insists on automatically assigning incorrect values to some discriminants when the user makes decisions on others ; the coverage failure may be confirmed , if the user is relatively accomplished , by inspecting the non - discriminant properties as well ( thus turning the constituent window into a display of the entire parse forest ) and verifying that the correct parse tree is not among those offered .then the user may mark the sentence as `` not ok '' and classify it under one of a number of failure types , optionally typing a comment as well . at a later stage , a system expert may ask the treebanker to print out all the coverage failures of a given type as an aid to organizing work on grammar and lexicon development . for some long sentences with many different readings , more discriminants may be displayed than will fit onto the screen at one time . in this case , the user may judge one or two discriminants ( scrolling if necessary to find likely candidates ) , and ask the treebanker thereafter to display only _ undecided _ discriminants ; these will rapidly reduce in number as decisions are made , and can quite soon all be viewed at once .if the user changes his mind about a discriminant , he can click on it again , and the treebanker will take later judgments as superceding earlier ones , inferring other changes on that basis . alternatively , the `` reset '' button may be pressed to undo all judgments for the current sentence .it has proved most convenient to organize the corpus into files that each contain data for a few dozen sentences ; this is enough to represent a good - sized corpus in a few hundred files , but not so big that the user is likely to want to finish his session in the middle of a file .once part of the corpus has been judged and the information extracted for run - time use ( not discussed here ) , the treebanker may be told to resolve discriminants automatically when their values can safely be inferred . 
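The automatic resolution just mentioned is a direct consequence of rules R1-R4. A sketch of that propagation logic, using the same invented data structures as in the earlier fragment and again only illustrating the published description, is a simple fixed-point loop:

```python
def propagate(analyses, decisions):
    """decisions: dict mapping Property -> 'good' or 'bad' (user or corpus judgments).

    Applies rules R1-R4, assuming exactly one of the analyses is correct, and
    returns the induced status of every property together with the analyses.
    """
    prop_status = dict(decisions)
    changed = True
    while changed:
        changed = False
        for a in analyses:
            # R1: an analysis with a bad discriminant is bad.
            # R2: if a good discriminant is not true of an analysis, that analysis is bad.
            if a.status == "undecided" and any(
                (prop_status.get(p) == "bad" and p in a.properties) or
                (prop_status.get(p) == "good" and p not in a.properties)
                for p in prop_status):
                a.status, changed = "bad", True
        undecided = [a for a in analyses if a.status != "bad"]
        for p in discriminants(analyses):
            if p in prop_status:
                continue
            # R3: a discriminant true only of bad analyses is bad.
            if all(p not in a.properties for a in undecided):
                prop_status[p], changed = "bad", True
            # R4: a discriminant true of all undecided analyses is good.
            elif all(p in a.properties for a in undecided):
                prop_status[p], changed = "good", True
    return prop_status, analyses
```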
in the atis domain , `` show ` -to ` ( city ) '' is a triple that is practically never correct , since it only arises from incorrect pp attachments in sentences like `` show me flights to new york '' .the user can then be presented with an initial screen in which that choice , and others resulting from it , are already made .this speeds up his work , and may in fact mean that some sentences do not need to be presented at all . in practice , coverage development tends to overlap somewhat with the judging of a corpus . in view of this , the treebanker includes a `` merge '' option which allows existing judgments applying to an old set of analyses of a sentence to be transferred to a new set that reflects a coverage change .properties tend to be preserved much better than whole analyses as coverage changes ; and since only properties , and not analyses , are kept in the corpus database , the vast bulk of the judgments made by the user can be preserved .the treebanker can also interact directly with the cle s analysis component to allow a user or developer to type sentences to the system , see what discriminants they produce , and select one analysis for further processing .this configuration can be used in a number of ways .newcomers can use it to familiarize themselves with the system s grammar .more generally , beginning students of grammar can use it to develop some understanding of what grammatical analysis involves .it is also possible to use this mode during grammar development as an aid to visualizing the effect of particular changes to the grammar on particular sentences .using the treebanker , it is possible for a linguistically aware non - expert to judge around 40 sentences per hour after a few days practice .when the user becomes still more practised , as will be the case if he judges a corpus of thousands of sentences , this figure rises to around 170 sentences per hour in the case of our most experienced user .thus it is reasonable to expect a corpus of 20,000 sentences to be judged in around three person weeks .a much smaller amount of time needs to be spent by experts in making judgments he felt unable to make ( perhaps for one per cent of sentences once the user has got used to the system ) and in checking the user s work ( the treebanker includes a facility for picking out sentences where errors are mostly likely to have been made , by searching for discriminants with unusual values ) . 
from these figuresit would seem that the treebanker provides a much quicker and less skill - intensive way to arrive at a disambiguated set of analyses for a corpus than the manual annotation scheme involved in creating the penn treebank ; however , the treebanker method depends on the prior existence of a grammar for the domain in question , which is of course a non - trivial requirement .engelson and dagan ( 1996 ) present a scheme for selecting corpus sentences whose judging is likely to provide useful new information , rather than those that merely repeat old patterns .the treebanker offers a related facility whereby judgments on one sentence may be propagated to others having the same sequence of parts of speech .this can be combined with the use of _ representative corpora _ in the cle ( rayner , bouillon and carter , 1995 ) to allow only one representative of a particular pattern , out of perhaps dozens in the corpus as a whole , to be inspected .this already significantly reduces the number of sentences needing to be judged , and hence the time required , and we expect further reductions as engelson s and dagan s ideas are applied at a finer level . in the current implementation, the treebanker only makes use of _ context - independent _ properties : those derived from analyses of an utterance that are constructed without any reference to the context of use .but utterance disambiguation in general requires the use of information from the context .the context can influence choices of word sense , syntactic structure and , most obviously , anaphoric reference ( see e.g. carter , 1987 , for an overview ) , so it might seem that a disambiguation component trained only on context - independent properties can not give adequate performance .however , for qlfs for the atis domain , and presumably for others of similar complexity , this is not in practice a problem .as explained earlier , anaphors are left unresolved at the stage of analysis and disambiguation we are discussing here ; and contextual factors for sense and structural ambiguity resolution are virtually always `` frozen '' by the constraints imposed by the domain .for example , although there are certainly contexts in which `` tell me flights to atlanta on wednesday '' could mean `` wait until wednesday , and then tell me flights to atlanta '' , in the atis domain this reading is impossible and so `` on wednesday '' must attach to `` flights '' . for a wider domain such as nab , one could perhaps attack the context problem either by an initial phase of topic - spotting ( using a different set of discriminant scores for each topic category ) , or by including some discriminants for features of the context itself among these to which training was applied .i am very grateful to martin keegan for feedback on his hard work of judging 16,000 sentences using the treebanker , and to manny rayner , david milward and anonymous referees for useful comments on earlier versions of this paper .the work reported here was begun under funding from by the defence research agency , malvern , uk , under strategic research project as04bp44 , and continued with funding from telia research ab under the slt-2 project .becket , ralph , and 19 others ( forthcoming ) . _ spoken language translator : phase two report_. joint report by sri international and telia research .alshawi , hiyan , and richard crouch ( 1992 ) .`` monotonic semantic interpretation '' . in _ proceedings of 30th annual meeting of the association for computational linguistics _ , pp . 
3239 , newark , delaware .* engelson , sean , and ido dagan ( 1996 ) .`` minimizing manual annotation cost in supervised training from corpora '' . in _ proceedings of 34th annual meeting of the association for computational linguistics _ , pp . 319 - 326 , santa cruz , ca .murveit , hy , john butzberger , vassilios digalakis and mitchell weintraub ( 1993 ) .`` large vocabulary dictation using sri s decipher(tm ) speech recognition system : progressive search techniques '' . in_ proceedings of icassp-93_. rayner , manny , and david carter ( 1996 ) .`` fast parsing using pruning and grammar specialization '' . in _ proceedings of 34th annual meeting of the association for computational linguistics _ , pp . 223230 , santa cruz , ca .
I describe the TreeBanker, a graphical tool for the supervised training involved in domain customization of the disambiguation component of a speech- or language-understanding system. The TreeBanker presents a user, who need not be a system expert, with a range of properties that distinguish competing analyses for an utterance and that are relatively easy to judge. This allows training on a corpus to be completed in far less time, and with far less expertise, than would be needed if analyses were inspected directly: it becomes possible for a corpus of about 20,000 sentences of the complexity of those in the ATIS corpus to be judged in around three weeks of work by a linguistically aware non-expert.
the problem of traffic congestions in communication networks is undoubtedly an important issue .the problem is related to the geometry of the underlying network , the rate that messages are generated and delivered , and the routing strategy .many studies have been focused on spatial structures such as regular lattices and the cayley tree .random networks and scale - free ( sf ) networks have also been widely studied .the former is homogeneous with a poisson degree distribution ; while the latter typically exhibits a power - law degree distribution of the form signifying the existence of nodes with large degrees .sf networks are found in many real - world networks , such as the internet , world wide web ( www ) , and metabolic network .a standard model of sf networks is the barabsi and albert ( ba ) model of growing networks with preferential attachments . the ba model gives a degree distribution of and is non - assortative , i.e. , the chance of two nodes being connected is independent of the degrees of the nodes concerned . while there are other variations on the ba models that give a degree exponent that deviates from , the ba model still serves as the basic model for sf networks . in the present work ,we study how a dynamical and adaptive routing strategy would enhance the performance in delivering messages over a ba scale - free network . in communication models ,the nodes are taken to be both hosts and routers , and the links serve as possible pathways through which messages or packets are forwarded to their destination .early studies assumed a constant ( degree - independent ) packet generation rate and a constant rate of delivering one packet per time step at each node .as increases , the traffic goes from a free - flow phase to a congested or jamming phase .obviously , such models are too simple for real - world networks .more realistic models should incorporate the fact that nodes with higher degrees would have higher capability of handling information packets and at the same time generate more packets .models with degree - dependent packet - delivery rate of the form and degree - dependent packet - generating rate of the form have recently been proposed and studied , with routing strategy based on forwarding messages through the shortest path to their destination . implementing this routing strategy in sf and random networksindicate that it is easier to lead to congestions in sf networks than in random networks .it is because the nodes with large degrees in sf networks are on many shortest - paths between two arbitrarily chosen nodes , i.e. , large betweenness .many packets will be passing by and queueing up at these nodes enroute to their destination . in a random network , jamming is harder to occur as the packets tend to be distributed quite uniformly to each node .a good routing algorithm is essential for sustaining the proper functioning of a network .the shortest - path routing approach is based on _ static _ information ,i.e. , once the network is constructed , the shortest - paths are fixed . to improve routing efficiency , echenique _ et al . _ proposed an approach in which a node would choose a neighboring node to deliver a packet by considering the shortest - path from the neighboring node to the destination _ and _ the waiting time at the neighboring node .the waiting time depends on the number of packets in the queue at a neighboring node at the time of decision and thus corresponds to a _ dynamical _ or _ time - dependent _ information . 
this algorithm performs better than the shortest - path approach , as packets may be delivered not necessarily through the shortest - path and thus the loading at the higher degree nodes in a sf network is reduced .the approach has also been applied to networks with degree - dependent packet generation rate .recently , wang _ proposed an algorithm that tends to spread the packets evenly to nodes by considering information on nearest neighbors .however , the delivering time turns out to be much longer than that in the shortest - path approach as the packets tend to wander around the network . in the present work, we propose an efficient routing strategy that is based on the projected waiting time along the shortest - path from a neighboring node to the destination .the algorithm is implemented in ba scale - free networks , with degree - dependent packet generating and delivering rates .results show that jamming is harder to occur using the present strategy , when compared with both the shortest - path approach and the echenique s approach .key features observed in numerical results are explained within a mean field treatment .the present approach has the advantage of spreading the packets among the nodes according to the degrees of the nodes . in this way, every node can contribute to the packet delivery process .the paper is organized as follows .the model , including the underlying network , the packet generation and delivery mechanisms , and routing strategy , is introduced in sec.ii . in sec.iii , we present numerical results and compared them with those of the other routing strategies .we also explain key features within a mean field theory .we summarize our results in sec.iv .the underlying network structure is taken to be the barabasi - albert ( ba ) scale - free growing network with nodes . starting with nodes , each new node entering the network is allowed to establish new links to existing nodes .preferential attachment whereby an existing node with a higher degree has a higher probability to attract a new link is imposed .the mean degree of the network is and the degree distribution follows a power - law behavior of the form . the dynamics of packet generation and delivery is implemented as follows . due to the inhomogeneous nature of the ba network , it is more natural to impose a packet generation rate that is proportional to the degree of a node . at each time step ,a node creates new packets .the fractional part of is implemented probabilistically .a destination is randomly assigned to each created packet .the newly created packets will be put in a queue at the node and delivery will be made on the first - in - first - out basis .the packets in the queue may consist of those which are created at previous time steps and received from neighboring nodes enroute to their destination .we also assume a packet delivery rate that is proportional to the degree of a node . at each time step, a node delivers at most packets to its neighbors .the fractional part of is implemented probabilistically .a larger implies a higher packet - handling capability , but it would translate into higher cost or capital . 
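The degree-dependent rates with probabilistic fractional parts can be implemented straightforwardly. The helper below is an illustrative sketch (not taken from the paper) of how "at most a degree-proportional number of packets per step, fractional part handled probabilistically" might be realized.

```python
import random

def stochastic_round(x, rng=random):
    """Return floor(x) or floor(x)+1 so that the expectation equals x (x >= 0)."""
    base = int(x)
    return base + (1 if rng.random() < x - base else 0)

# Per time step, a node of degree k would:
#   generate  stochastic_round(alpha * k)  new packets with random destinations, and
#   forward   up to stochastic_round(beta * k)  packets from the head of its FIFO queue.
```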
here , the parameters and are taken to be node - independent .a packet is removed from the system upon arrival at its destination .for a given generation rate characterized by , there exists a critical value of the delivery rate such that for delivery rates , packets tend to accumulate in the network resulting in a jamming phase ; while for , a non - jamming phase results as there are as many packets delivered to their destination as created .a better performance is thus characterized by a smaller value of .the novel feature of the present work is the routing strategy or the selection of a neighbor in delivering a packet .the idea is to choose a neighbor that would give the shortest time , including waiting time , to deliver the packets along the shortest path from the chosen neighbor to the destination .consider a packet with destination node leaving node .each of the neighbors of node has a shortest path to the destination node .the shortest path refers to the smallest number of links from a node to another .however , due to the possible accumulation of packets at each node , the number of time steps it takes to deliver the message may be different from the number of links along the shortest path .consider a neighbor labelled of the node .we label the shortest path from node to by . along this path , we evaluate the following quantity for the node : where the sum is over the nodes along the shortest path , excluding the destination . here , is the number of packets accumulated at node , at the moment of decision .thus , is an estimate of the time that a packet would take to go from node to the destination through the shortest path .node would choose a neighboring node with the minimum to forward the packet , i.e. , the selection is based on , where is the set of nodes consisting of the neighbors of node .this procedure is repeated for each node and each packet in every time step . for a network far from jamming , each node can handle all the packets in every time step . in this free - flow situation ,the quantity simply measures the shortest path from to .when packets are queueing up at the nodes , however , a delivery mechanism based on takes into account of the queueing time and may not pass the packet to a neighboring node that is closest to the destination .to justify our routing scheme , we will compare results with two other routing strategies widely studied in the literature .using the same packet generating mechanism , the shortest - path approach selects a neighbor with the shortest path to the destination for forwarding a packet . proposed an approach that takes into account of the waiting time . for a delivering rate of one packet per time step, they proposed to choose a neighbor that has a minimum value of , where is the shortest path length from node to .the parameter is a weighing factor , which can be taken as a variational parameter and is found to give the best performance .the echenique s approach thus accounts for the waiting time only at the neighboring nodes . for a delivery rate of ,a modified echenique s approach is to choose a neighboring node with a minimum value of we have checked that for a given value of , the smallest value of is attained for values of to . 
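A sketch of the proposed neighbour-selection rule is given below, using networkx shortest paths precomputed on the static topology. The estimated traversal time along the shortest path is written here as one hop plus an assumed waiting term of queue length divided by the node's delivery capacity per intermediate node; the exact expression should be taken from the paper's equation, so treat this particular form as an assumption.

```python
import networkx as nx

def estimated_delay(G, path, queue_len, beta):
    """Assumed estimate of the travel time along `path` (destination excluded):
    one step per node plus the time to clear the packets already queued there."""
    return sum(1.0 + queue_len[j] / (beta * G.degree(j)) for j in path[:-1])

def choose_next_hop(G, node, dest, queue_len, beta, sp_paths):
    """Forward a packet from `node` towards `dest` by picking the neighbour with
    the smallest estimated delay along its shortest path to the destination."""
    return min(G[node],
               key=lambda l: estimated_delay(G, sp_paths[l][dest], queue_len, beta))

# sp_paths = dict(nx.all_pairs_shortest_path(G))   # static shortest paths, computed once
```

The design choice this illustrates is that only the queue lengths are dynamical; the candidate paths themselves remain the static shortest paths, which keeps the per-packet decision cheap.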
in what follows , we will use a value of for the echenique s approach given by eq.(2 ) .the different phases in a network can be illustrated by looking at the average number of packets per node at a given time and the average time for a packet to remain in the network or the delivering time .we take and and construct a ba scale - free network of nodes .figure [ evolution ] shows the results of and as a function of time for a fixed value of .as increases , there are distinct behavior . for values of smaller than some critical value , grows almost linearly with time after the transient ( see fig.[evolution](a ) ) .this corresponds to a jamming phase . as increases , the slope in the long time behavior decreases , indicating a slower accumulation of packets in the network as the ability of handling packets increases . for , becomes independent of time in the long time limit .this corresponds to a non - jamming phase .similarly behavior is exhibited in . in the jamming phase , increases with time monotonically , due to the increasing waiting time in the queues at intermediate nodes as a packet is forwarded to its destination .fewer packets are delivered to their destination than generated . in the non - jamming phase , becomes independent of time in the long time limit . in this regime , further increasing will lead to smaller and shorter in the long time limit until these quantities saturate .this is possible since a non - jamming phase corresponds either to the case in which all the packets at the nodes are forwarded every time step or steady queues of packets exist at the nodes . in both cases ,the number of packets does not increase in the long time limit .the former case is the free - flow phase , while the latter is reminiscent of the synchronized phase in vehicular traffic flows in which the packets undergo a stop - and - go behavior . for for example , , which is somewhat larger than the average shortest distance or diameter of the network .this indicates that , due to the routing strategy in forwarding a packet , the dynamics in the free - flow phase is different from that of the shortest - path approach .the critical value can be determined by considering the quantity where and the average is over all the nodes at a time .this quantity $ ] is basically the slope of in the long time limit . in the non - jamming phase ,the slope vanishes and ; while in the jamming phase , . figure [ order ] shows as a function of , for a fixed value of .the critical value can be identified as the value that separates the and behavior .we carried out similar calculations for different values of and determined .the results are shown in fig.[betac ] ( circles ) .we will explain the form of using a mean field theory .the curve can also be regarded as a phase boundary in the - plane , separating the jamming phase below the curve and the non - jamming phase above the curve . to show the superior performance of our routing strategy, we also performed calculations using the shortest - path approach and the echenique s approach with in eq.(2 ) .the same degree - dependent packet generating mechanism is used .results of for these two models are also shown in fig.[betac ] for comparison .the present approach gives the best performance . for a given , we see the improvement in performance from the shortest - path approach through the echenique s approach to the present approach , signified by the drop of . 
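The order parameter used to locate the critical delivery rate is essentially the asymptotic growth rate of the packet load. A sketch of how it could be estimated from a simulated time series of the total number of packets is shown below; the normalization used in the paper's definition is not reproduced here, so the prefactor and threshold are assumptions.

```python
import numpy as np

def load_growth_rate(N_t, dt=1.0, tail_fraction=0.5):
    """Estimate the long-time slope of the total number of packets N(t).

    A slope consistent with zero indicates the non-jamming phase; a clearly
    positive slope indicates the jamming phase.  N_t is sampled every dt.
    """
    start = int(len(N_t) * (1.0 - tail_fraction))
    t = np.arange(len(N_t)) * dt
    slope, _ = np.polyfit(t[start:], N_t[start:], 1)
    return slope

def find_beta_c(simulate, betas, threshold=1e-3):
    """Scan beta at fixed alpha; return the smallest beta whose long-time
    growth rate falls below `threshold`.  `simulate(beta)` must return the
    time series N(t) produced by the traffic model."""
    for beta in sorted(betas):
        if load_growth_rate(simulate(beta)) < threshold:
            return beta
    return None
```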
for the shortest - path approach , it has been shown that follows the functional form of where and with being the diameter and the maximum degree of the network . for , . with the present approach, follows a similar functional form , but with a _ higher _ value of and a smaller prefactor that gives the slope .both the present approach and the echenique s approach perform better than the shortest - approach approach because packets are re - directed to other nodes when there are long queues at the hubs .the better performance of the present approach is achieved by spreading the packets among the nodes so that the number of packets at a node is proportional to the degree of the node in the free - flow phase .we use a mean field approach to illustrate this point .let be the average number of packets at the nodes with degree . in the free - flow phase where , we have the first and second terms denote the packets generated at the node and delivered to neighboring nodes , respectively .the third term accounts for the packets delivered _ into _ the node from its neighboring nodes . here is the conditional probability that a node of degree has a neighbor of degree and the sum runs from the minimum degree to in the network . in the free - flow regime ,the packets that are removed upon arrival at their destination can be assumed to be -independent and approximated by the term .the non - assortative feature of ba networks gives , where is the degree distribution .after the transient behavior , and we have where is the mean number of packets per node .thus for , in the free - flow phase after the transient .figure [ stationary](a ) shows the numerical results obtained by averaging the number of packets on the nodes with degree at different times ( time , , time steps ) of a run . in the free - flow phase , andbecomes time - independent after the transient , as shown in fig.[stationary](a ) for the case of and .this behavior is consistent with that in eq.([eq : stationary ] ) . for the jamming phase , numerical results ( see fig.[stationary](b ) )show that ( i ) at a fixed instant _ and _ ( ii ) increases with time for fixed value of .this behavior can be understood provided that the packets are still distributed among the nodes in proportion to the degree of a node via our strategy . in this phase , the long time behavior is characterized by an increasing accumulation of packets and the delivery to destinations becomes negligible compared with packet generation . with for all nodes and ignoring the removal of packets , eq.(6 ) is modified to it follows that increases with time as which describes very well the features in fig.[stationary](b ) .thus , the present approach has the effect of reducing ( increasing ) the probability of passing packets to neighbors with high ( low ) degrees when there are long ( no or short ) queues , resulting in a distribution of packets according to the degrees of the nodes .a rough estimate of can be obtained by equating in the free - flow phase to . in particular , taking , we get from eq.([eq : stationary ] ) that where is the average number of nodes that a packet passes through from its origin to the destination , which is the diameter of the network in the free - flow phase .the last line is valid for . comparing with eq.([eq : sp ] ) for the shortest - path approach , we note that and the prefactor , which gives the slope in fig.[betac ] , is smaller than that in the shortest - path approach .these features are consistent with numerical results . 
in particular , for nodes , we found that and , giving , which is in reasonable agreement with numerical results in fig.[betac ] .in summary , we have proposed an efficient routing strategy on forwarding packets in a scale - free network .the strategy accounts not only for the physical separation from the destination but also on the waiting time along possible paths .we showed that our strategy performs better than both the shortest - path approach and the echenique s approach .analytically , we construct a mean field treatment which gives results in agreement with observed features in numerical results .our routing strategy has the merit of distributing the packets among the nodes according to the degree , and hence handling capability , of the nodes .although our discussion was carried out on ba networks , we believe that our approach is also applicable in other spatial structures .we end by comparing the three different routing strategies in more general terms .the shortest - path approach depends entirely on geometrical information that is _static_. once the origin and the destination of a packet is known , the shortest - path is fixed .this strategy is _ non - adaptive _ , i.e. , it will not be change with time .the echenique s approach considers both geometrical and local dynamical information . by considering the waiting time at a neighboring node, a packet from a node to a destination will not always follow the same path .thus , the echenique s approach is a strategy that is _ adaptive _ , i.e. , a decision based on the current situation .the present strategy , like the echenique s approach , is also adaptive and makes use of _ global _ information in which all the waiting times along a path are taken into consideration .we see that by allowing for adaptive strategies and taking more information into consideration , a better performance results .this line of thought is in accordance with that in complex adaptive systems whereby active agents may adapt , interact , and learn from past experience .it should be , however , noted that it pays to be better .the shortest - path approach does not require update of the routing strategy .the echenique s approach and the present approach require continuing update of the number of packets accumulated at the nodes .such updating plays the role of a cost , with the payoff being the better performance .practical implementation would have to consider the balance between the cost and the payoff .this work was supported by the nnsf of china under grant no .10475027 and no . 10635040 , by the pps under grant no .05pj14036 , and by sps under grant no .p.m.h . acknowledges the support from the research grants council of the hong kong sar government under grant number cuhk-401005 .
We present an efficient routing approach for delivering packets in complex networks. In delivering a message from a node to a destination, a node forwards the message to a neighbor by estimating the waiting time along the shortest path from each of its neighbors to the destination. This projected waiting time is dynamical in nature, and the path through which a message is delivered adapts to the distribution of messages in the network. Implementing the approach on scale-free networks, we show that it performs better than the shortest-path approach and than another approach that takes into account the waiting time only at the neighboring nodes. Key features of the numerical results are explained by a mean-field theory. The approach has the merit that messages are distributed among the nodes according to the nodes' capabilities in handling messages.
Circadian oscillators are prevalent in organisms from bacteria to humans and serve to synchronize the body with the environmental 24 h cycle. Although the molecular implementation of the oscillation is species specific, every circadian clock satisfies two requirements to achieve reliable synchronization to the environment: *entrainability*, to synchronize internal time with periodic stimuli, and *regularity*, to oscillate with a precise period. Circadian clocks were acquired through evolution independently in bacteria, fungi, plants and animals. Nonetheless, entrainability and regularity constitute major characteristics conserved in all circadian clocks, which strongly suggests that these two properties are essential for survival. A main source of interference with regularity is the discreteness of molecular species, i.e., molecular noise. Many studies have analyzed the resistance mechanisms of circadian oscillators against this noise. Regarding entrainability, circadian clocks synchronize their internal time with the environmental cycle via sunlight, and its effect depends on the wavelength or fluence, as well as on the phase of the stimulation. However, entrainability and regularity are conflicting factors, because circadian clocks with better entrainability are sensitive not only to the periodic light stimuli but also to the molecular noise that interferes with regularity. Since both regularity and entrainability are important adaptive values, we expect actual circadian oscillators to optimally satisfy these two factors (fig. [fig:pareto_optimal]). Here we investigate the optimal phase-response curve (PRC), which is both entrainable and regular, in the phase oscillator model by using the Euler-Lagrange variational method. Our main finding is the inherent existence of a dead zone in the PRC: optimality is achieved only when the PRCs have a time period during which light stimuli neither advance nor delay the clock (fig. [fig:type_1_2_prcs](a)). In other words, a PRC with a dead zone (fig. [fig:type_1_2_prcs](a)) is better adapted than one without a dead zone (fig. [fig:type_1_2_prcs](b)). This result is intriguing because a dead zone, with which oscillators tend to be unaffected by stimuli (i.e. lower entrainability), achieves better entrainability. We also tested this with two types of input stimuli: a solar-radiation-type input that simulates the time course of solar radiation intensity (cf. eq. and fig. [fig:radiation_fig](a)) and a simple sinusoidal input (sine curve). Surprisingly, the dead zone in the optimal PRC emerges only for the solar-radiation-type input, not for the sinusoidal input. Many experimental studies have reported the existence of a dead zone in various species (figs. [fig:type_1_2_prcs](c) and (d) show experimentally observed PRCs of (c) fruit fly and (d) mouse, respectively). Our results indicate that circadian oscillators in various species have adapted to solar radiation for reliable synchronization. Caption of fig. [fig:intuitive_fig]: (a) the solid and dashed lines describe a limit-cycle trajectory and its isochrons, drawn at intervals of , respectively. (b) relation between the phase variance and the period variance in the Langevin equation (the solid lines represent trajectories of the Langevin equation)
is the variance of the phase at time and is the variance of the first passage time from to , which can be approximated by .( d ) arnold tongue ( colored region ) , which shows the parameter region for synchronization to an input signal , with respect to the signal angular frequency ( vertical axis ) and the signal strength ( horizontal axis ) .the dashed line is a linear approximation ( eq . ) of the border of the arnold tonguewhen the input strength is sufficiently small .[ fig : intuitive_fig],width=491 ] circadian oscillators basically comprise interaction between mrnas and proteins , whose dynamics can be modeled by differential equations . a circadian oscillator of -molecular species can be represented by where the -dimensional vector denotes the concentration of molecular species ( mrnas or proteins ) .the effect of noise on genetic oscillators has been a subject of considerable interest , and noise - resistant mechanisms have been extensively studied . in general , the dynamics of the -th molecular concentration in a circadian oscillator subject to molecular noise is described by the following langevin equation ( stratonovich interpretation ) : where is an arbitrary function representing the multiplicative terms of the noise , is white gaussian noise with the correlation ( a bracket denotes expectation ) , and is a model parameter .circadian oscillators synchronize to environmental cycles by responding to a periodic input signal ( light stimuli ) .we let in eq .be stimulated by the input signal : for example , can be the degradation rate ( for simplicity , we consider that the input signal affects only one parameter ) .we use eq . for calculating regularity and entrainability of circadian oscillators .because the circadian oscillator of eq . is subject to noise , its period varies cycle to cycle .we use the term regularity for the period variance of the oscillation ( higher regularity corresponds to smaller period variance ) .let us first consider the case without input signals ( i.e. , is constant ) . as eq .exhibits periodic oscillation , we can naturally define the phase on eq . by where is the angular frequency of the oscillation ( is a period of the oscillation ) .the phase in eq .is only defined on a closed orbit of the unperturbed limit - cycle oscillation .however , we can expand the definition into the entire space , where the equiphase surface is referred to as the isochron ( fig .[ fig : intuitive_fig](a ) ) . by using standard stochastic phase reduction , eq . can be transformed into the following langevin equation with respect to the phase variable ( stratonovich interpretation ) : where is an infinitesimal prc ( iprc ) , and we abbreviated as .iprc quantifies the extent of phase advance or delay when perturbed along an coordinate direction at phase .the -dimensional vector denotes a point on the limit - cycle trajectory at phase , where lc stands for limit cycle .the value of iprc is calculated as a solution of an adjoint equation or as the set of eigenvectors of a monodromy matrix in the floquet theory for arbitrary oscillators .let be the probability density function of at time . from eq . , the fokker planck equation ( fpe ) of is given by where introducing a slow variable , the fpe of the probability density function is given by with sufficiently weak noise , is a slowly fluctuating function of . 
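Before the phase-averaging argument continues below, the reduced phase equation just introduced can be explored numerically. The following sketch integrates dφ/dt = ω + Z(φ)[I(t) + ξ(t)] with an Euler–Maruyama step and reports the mean period (pulled toward 24 h when the oscillator entrains) and the cycle-to-cycle period variance (the paper's regularity measure). The sinusoidal iPRC, the noise level, the input amplitude and the 23.5 h free-running period are illustrative assumptions only; the paper derives the optimal PRC variationally rather than assuming its shape, and the small Stratonovich drift correction is neglected here.

```python
import numpy as np

rng = np.random.default_rng(0)

T_free, T_in = 23.5, 24.0            # free-running and input periods (hours), assumed
omega = 2 * np.pi / T_free
sigma, eps, dt = 0.05, 0.10, 0.005   # noise strength, input strength, time step (assumed)

def Z(phi):
    return np.sin(phi)               # illustrative sinusoidal iPRC (not the optimum)

def I(t):
    return eps * np.sin(2 * np.pi * t / T_in)   # 24 h periodic light input

def simulate(days=100):
    phi, t, crossings, level = 0.0, 0.0, [], 2 * np.pi
    for _ in range(int(days * 24.0 / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        phi += (omega + Z(phi) * I(t)) * dt + sigma * Z(phi) * dW
        t += dt
        if phi >= level:             # one more full cycle completed
            crossings.append(t)
            level += 2 * np.pi
    periods = np.diff(crossings)
    return periods.mean(), periods.var()

mean_T, var_T = simulate()
print(f"mean period {mean_T:.2f} h (entrained toward 24 h), variance {var_T:.4f} h^2")
```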
in such cases , and much faster than , thus these two terms can be averaged for one period while keeping constant ( phase averaging ) .in other words , we separate time scales between , and . by phaseaveraging , vanishes because of the periodicity ( use integration by parts ) , yielding with please see ref . for further details of stochastic phase reduction and the phase - averaging procedure . from eq . , because ] yields the optimal iprc and the pprc is calculated with eq . : since and themselves depend on , they have to satisfy a self - consistent condition , i.e. , eq .is maximal with and .consequently , we maximize the following function : with where and . the optimal iprc can be obtained by first finding the maximum solution of with respect to and , and then substituting the obtained solution and into eqs . and . ) .( c ) gene regulatory circuit of hypothetical circadian clock . in this example , and describe mrna and protein , respectively , and represses the transcription of .light stimuli increases the translational efficiency .( d ) time course of ( eq . ) , which is a variable to be multiplied by the parameter ( eq . ) .[ fig : radiation_fig],width=491 ] optimal prcs depend on input signals , as seen in eqs . and .the most common synchronizer in circadian oscillators is sunlight , for which the strength is determined by 24 h - periodic solar irradiance .the solar irradiance is calculated by and when the sun is above the horizon ( ) and below the horizon ( ) , respectively , where is the zenith angle and is the maximum irradiance .it can be approximated by where is the ramp function defined by for and for .we call eq .the _ solar radiation input _ , whose plot is shown in fig . [fig : radiation_fig](a ) ( the shaded region represents night ) .in order to show the validity of the solar radiation modeling , we compare eq . with observed irradiance data from ref . , which are shown in a dual axis plot of fig .[ fig : radiation_fig](b ) . in fig .[ fig : radiation_fig](b ) , eq .is plotted by the solid line ( left axis ) and the observed data by the dashed line ( right axis ) , where a unit of the observed data is watt per square meter ( ) .the solar radiation input of eq .is shifted horizontally so that eq .becomes a good fit to the data . from fig .[ fig : radiation_fig](b ) , the solar radiation input is in good agreement with the observed data , which verifies the validity of eq . as a solar radiation model . for comparison, we also employ a sinusoidal input , which is common in nonlinear sciences : note that , where is an arbitrary constant , also yields the same optimal prcs as eq . because a constant in the signal is offset in eqs . .although a constant does not play any roles in formation of the optimal prcs , different result in different arnold tongues in general . for calculating the optimal prcs , we use eqs . and .light stimuli generally affect the oscillatory dynamics multiplicatively , i.e. , they act on the rate constants or transcriptional efficiency of the gene regulatory circuits .we assume that the -th molecular species includes a parameter as where represents the terms that do not include , and is the concentration of the -th molecular species . here , can take any value regardless of ( both and are allowed ) .for example , let fig .[ fig : radiation_fig](c ) be a gene regulatory circuit of a hypothetical circadian clock , where symbols and represent positive and negative regulations and are molecular species ( please see ref . 
for typical motifs of biochemical oscillators ) .suppose and are mrna and corresponding protein , respectively , and light stimuli increase the translational efficiency . in this case , the dynamics of light entrainment can be described by eq . with , and being the translation rate . in eq ., although we can also consider an alternative case ( a negative sign ) , the optimal pprcs remain unchanged under the inversion which is seen from eqs . and .consequently , we only consider the positive case to calculate the optimal prcs ( i.e. eq . ) .however , note that relation between iprcs and pprcs are affected by the inversion of the sign , and the difference matters when considering biological feasibility .when using phase reduction , the dynamics of the limit cycle are considered on the unperturbed limit - cycle trajectories , and hence the points on the limit cycle can be uniquely determined by the phase .consequently , under the phase reduction , is replaced by in eq . , where is the -th coordinate of ( i.e. , in eq ., corresponds to the time course of the concentration of the -th molecular species . because constitutes a core clock component andis generally a smooth -periodic function , we approximate it with a sinusoidal function : where is the initial phase and denotes the amplitude of the oscillation ( fig . [fig : radiation_fig](d ) ) . to ensure , we set , and recovers the additive case . since the initial phase does not play any role ( is offset by in eq . )when the white gaussian noise is additive ( i.e. , ) , we also set .the parametric approximation of eq .enables an almost closed form for the overall calculations .although we assumed in eq .that effects of only depend on , we can generalize eq . to where is a nonlinear function ( a -periodic function ) and is assumed to be well approximated by . by this generalization ,our theory can be applied to other possible light entrainment mechanisms such as the inter - cellular coupling .our model only needs details about molecular species which have light input entry points but not about a whole molecular network .however , this advantage in turn means that we can not specify iprcs of molecular species not having light input entry points .consequently , for a noise term , we assume that the white gaussian noise is additive and is present only in the -th coordinate equation ( , where is the noise intensity and for ) .figures [ fig : optimal_insolation](a)(c ) show the landscape of as functions of and , and figs .[ fig : optimal_insolation](d)(f ) express the optimal iprcs and pprcs for the solar radiation input ( an explicit expression of is given in appendix a ) .the optimal prc shape does not depend on the model parameters such as the period , its variance , or noise intensity .these three parameters only act on the magnitude of the prcs ( i.e. , the vertical scaling of the prcs ) .consequently , we normalized , , and , as shown in fig .[ fig : optimal_insolation ] .as the optimal prcs depend on , is plotted for three cases : ( a ) , ( b ) , and ( c ) , where the maximal points yield the optimal prcs using eqs . and .the maximal parameters and are calculated numerically .figures [ fig : optimal_insolation](d)(f ) describe the optimal iprcs ( solid line ) and pprcs ( dashed line ) for , , and , respectively . when , i.e. 
, the input signal is additive , achieves a maximum for and arbitrary , yielding sinusoidal prcs as the optimal solution ( fig .[ fig : optimal_insolation](d ) ) .although the input signal is not sinusoidal , the optimal prcs obtained using the variational method become sinusoidal . in other words , considering optimality , resonator - type oscillators have an advantage over integrator - type oscillators . for ,the input signal depends on the concentration of the -th molecular species . from fig .[ fig : optimal_insolation](b ) , the optimal parameters for are and , which are different from ( these two sets yield symmetric prcs with respect to the horizontal axis ) . figure [ fig : optimal_insolation](e ) shows the optimal iprcs and pprcs for .interestingly , the optimal iprcs and pprcs for have a dead zone ( region of in fig .[ fig : optimal_insolation](e ) ) in which the input signal neither advances nor delays the clock . from eqs . and the solar radiation input of eq ., the optimal prcs inevitably include a dead zone if the optimal is not .for , there are four sets of parameters that give optimal prcs : , , , and ( prcs with these four sets are symmetric each other with respect to the horizontal axis or ) .consequently , the optimal prcs shown in fig .[ fig : optimal_insolation](f ) have a dead zone as in the case of .dependence of the dead - zone length .( b ) dependence of the entrainability ratios ( solid line ) and ( dashed line ) ( eq . ) . and are the ratios of the entrainability of the optimal prc to that of the sinusoidal iprc ( eq . ) and the pprc ( eq . ) , respectively .[ fig : deadzone_length],width=491 ] from the results discussed above , the optimal prcs have a dead zone when .we next studied the length of the dead zone as a function of ( fig .[ fig : deadzone_length](a ) ) and improvements in the entrainability induced by the dead zone ( fig .[ fig : deadzone_length](b ) ) for the solar radiation input . because the dead zone , which is a null interval in prcs , emerges when the optimal parameter is , we can naturally define its length as where is the maximum value of . as seen in fig .[ fig : deadzone_length](a ) , a dead zone clearly exists when , and the length increases with increasing for .even for , when the oscillation amplitude of ( the concentration of a molecular species modulated by the light - sensitive parameter .[ fig : radiation_fig](d ) ) is very small , we observe a dead zone with a length of , which corresponds to about 3 h within 24 h , indicating the universality of having a dead zone in order to attain optimality .the improvement in the entrainability that is induced by a dead zone is calculated by comparing the entrainability of the optimal prcs with that of typical sinusoidal prcs .we consider sinusoidal functions for both the iprc and pprc by setting where is the parameter to be optimized so that entrainability is maximized for each ( see appendix b for the explicit expressions ) .equations and are scaled so that they satisfy the constraints on the period variance ( eq . ) .we calculated the ratios where and represent the entrainabilities for the cases of the sinusoidal iprc and pprc , respectively , calculated for the solar radiation input .for the sinusoidal iprc of eq . , the entrainability is calculated with pprc via eq . . and quantify the improvement rate of the optimal prcs over the sinusoidal iprc ( ) and pprc ( ) . 
in fig .[ fig : deadzone_length](b ) , the dashed and dot - dashed lines show and , respectively , as a function of .both ratios monotonically increase as increases , which shows that the optimal prc with a dead zone exhibits better entrainability when the oscillation of has a larger amplitude .when the concentration of is low , the effects of the input signal on the circadian oscillators are smaller .this is because pprc , which quantifies the extent of the phase shift due to the stimulation of the parameter , depends on the concentration ( see eq . ) .however , even within the range where has smaller values , the iprc contributes to an increase in the variance of the period , regardless of the concentration . from this, we see that having an iprc with a smaller magnitude when the concentration of is smaller results in a smaller variance , which results in a larger entrainability for a constant variance of the period .although this qualitatively explains the benefit of a dead zone , for some input values , the optimal prcs may not contain a dead zone for any value of .this will be shown in the following . since the optimal prcs depend on input signals ( eqs . and ) , we next consider a typical periodic input signal , a sinusoidal function ( eq . ) . in this case , is calculated in a closed form ( an explicit expression of is given in appendix a ) , which is plotted as functions of and in fig .[ fig : optimal_sinusoidal](a)(c ) for three cases : ( a ) , ( b ) and ( c ) . as can been seen from fig .[ fig : optimal_sinusoidal](a)(c), yields the maximal value for for , where is an integer and when , can take any value .figures [ fig : optimal_sinusoidal](d)(f ) express the optimal iprcs and pprcs for the sinusoidal input . for ,the optimal prc is sinusoidal ( fig .[ fig : optimal_sinusoidal](d ) ) and for , the optimal prc is still close to a sinusoidal function ( fig .[ fig : optimal_sinusoidal](e ) ) .when increasing to , the prc diverges from the sinusoidal function and exhibits almost positive values ( fig .[ fig : optimal_sinusoidal](f ) ) .we see that the optimal prcs due to eqs . and do not exhibit a dead zone for any values ( figs .[ fig : optimal_sinusoidal](d)(f ) ) when the input signal is a simple sinusoidal function .the existence of a dead zone optimizes both entrainability and regularity .it is rather obvious that optimization of regularity alone leads to a dead zone , because null response means no effect by any kind of fluctuations .our result instead shows that optimality of both entrainability and regularity , which are in a trade - off relationship , is uniquely achieved by a dead zone .our finding is fairly general since a dead zone always exists in an optimal prc unless ( additive stimulation ) .along with the fact that , , and affect only the scaling of the optimal prcs , when the input signal affects the dynamics multiplicatively ( i.e. , ) , the existence of a dead zone always provides a synchronization advantage .this is supported by many experimental studies of various species , that report the existence of a dead zone in the prc ( cf .[ fig : type_1_2_prcs](c)(d ) ) .our general result suggests that circadian oscillators have fully adapted to solar radiation to improve synchronization .indeed , many experimental findings imply that circadian oscillators have adapted to actual solar radiation : for various animals , light - dark ( ld ) cycles that include a twilight period result in better entrainability than do abrupt ld cycles ( on - off protocols ) . 
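To make the two forcing signals of this comparison concrete, the short sketch below generates one day of (i) the ramp-type solar-radiation input, approximated as the positive part of a cosine of the solar elevation so that it vanishes during the night, and (ii) the simple sinusoidal input. The 12 h day length and unit maximum irradiance are illustrative assumptions; the paper fits its expression to observed irradiance data. Either waveform can be substituted for I(t) in the phase-model sketch given earlier to explore the qualitative difference numerically.

```python
import numpy as np

t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)   # one day on a one-minute grid

def solar_input(t, I_max=1.0, noon=12.0):
    """Ramp-type approximation of solar irradiance: proportional to the cosine of
    the zenith angle while the sun is up, exactly zero at night."""
    elevation = np.cos(2 * np.pi * (t - noon) / 24.0)
    return I_max * np.maximum(elevation, 0.0)          # ramp function R(x) = max(x, 0)

def sinusoidal_input(t, eps=1.0):
    """Simple sine-curve forcing used as the comparison signal."""
    return eps * np.sin(2 * np.pi * t / 24.0)

solar, sine = solar_input(t), sinusoidal_input(t)
print("fraction of the day with nonzero solar input:", float(np.mean(solar > 0)))        # ~0.5
print("fraction of the day with nonzero sine input:  ", float(np.mean(np.abs(sine) > 1e-12)))
```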
in this regard ,another interesting problem is optimal entrainment of circadian clocks by light stimuli . as two different input signals , the solar radiation and sinusoidal inputs ,yield the same optimal prcs for , optimal inputs and optimal prcs do not have one - to - one correspondence . thus the optimal inputs are not trivial and this problem should be pursued in our future studies .the solar radiation input plays an essential role , since it yields a dead zone in the optimal prc while a sinusoidal signal does not ( see fig .[ fig : optimal_sinusoidal ] ) .in other words , oscillators that are entrained by stimuli other than solar radiation may not exhibit a dead zone in their prcs .this is indeed found in mammals .mammals possess a master clock in their suprachiasmatic nucleus ( scn ) , which receives light stimuli via retinal photoreceptors , and peripheral clocks in body cells .the peripheral oscillators are entrained by several stimuli such as feeding and signals from the scn through chemical pathways ( e.g. , hormones ) . by injection experiments of the hormone , balsalobre _ et al . _ reported that the prcs of the peripheral oscillators in the liver did not have a dead zone .our result also agrees with other experimental observations .our theory implies that a dead zone should be located where the concentration is low ( in fig .[ fig : radiation_fig](d ) ) , and that to achieve optimality , the concentration of should be maximal in the region where the prcs exhibit a large phase shift . in _ drosophila _, the _ timeless _ ( _ tim _ ) gene is regarded as the molecular implementation of .it is experimentally known that light enhances the degradation of the gene product ( the tim protein ) , and the tim protein peaks during the late evening .figure [ fig : type_1_2_prcs](c ) shows observations of the prc of _ drosophila _ against light pulses as a function time ( hour ) from ref . ; circles describe the experimental data and the solid line expresses a trigonometric curve fitting ( 4th order ) , respectively . because the center of the part of the prc that can be phase shifted approximately corresponds to the peak of the concentration , as denoted above , when estimated from the prc alone ,the concentration peak of the tim protein should occur at about 18 h. this time is also close to the experimental evidence ( i.e. late evening ) .therefore , our theory can be used to hypothesize further molecular behavior affected by light stimuli . in summary, we have constructed a model that regards circadian oscillators as a global optimization of entrainability and regularity .we have shown that our model is consistent with much experimental evidence as mentioned above .the extension and improvement of our method are possible and they are left as an area of future study .for the solar radiation input case ( eq . ) , is given by with ,\end{aligned}\ ] ] where .we showed eq .as functions of and in fig .[ fig : optimal_insolation](d)(f ) . for the sinusoidal input case ( eq . ) , is given by .\label{eq : psi_mult_sin}\ ] ] we plotted eq .as functions of and in fig .[ fig : optimal_sinusoidal](d)(f ) .an explicit expression sinusoidal iprc ( eq . ) is which yields the period variance of .then the corresponding pprc is given by where we used eq . .for the pprc to be a sinusoidal function , the iprc must be where we used eq . .an explicit expression of eq . is where is a normalizing term equation is normalized so that the period variance becomes . 
using eq ., the corresponding pprc is a sinusoidal function : which is an explicit expression of the sinusoidal pprc ( eq . ) .this work was supported by the global coe program `` deciphering biosphere from genome big bang '' from mext , japan ( yh and ma ) ; grant - in - aid for young scientists b ( # 25870171 ) from mext , japan ( yh ) ; grant - in - aid for scientific research on innovative areas `` biosynthetic machinery '' from mext , japan ( ma ) a. balsalobre , s. a. brown , l. marcacci , f. tranche , c. kellendonk , h. m. reichardt , g. schtz , and u. schibler .resetting of circadian time in peripheral tissues by glucocorticoid signaling ., 29:23442347 , 2000 .
circadian oscillation provides selection advantages through synchronization to the daylight cycle. a reliable clock, however, must reconcile two conflicting properties: entrainability, to synchronize internal time with periodic stimuli such as sunlight, and regularity, to oscillate with a precise period. these two aspects do not easily coexist, because better entrainability favors higher sensitivity, which may sacrifice regularity. to investigate the conditions under which both properties are satisfied, we analytically calculated the optimal phase-response curve with a variational method. our result indicates the existence of a dead zone, i.e., a time period during which input stimuli neither advance nor delay the clock. a dead zone appears only when the input stimuli follow the time course of actual solar radiation; a simple sine curve cannot yield a dead zone. our calculation demonstrates that every circadian clock with a dead zone is optimally adapted to the daylight cycle.
visual object detection could be viewed as the combination of two tasks : object localization ( where the object is ) and visual recognition ( what the object looks like ) .while the deep convolutional neural networks ( cnns ) has witnessed major breakthroughs in visual object recognition , the cnn - based object detectors have also achieved the state - of - the - arts results on a wide range of applications , such as face detection , pedestrian detection and etc .currently , most of the cnn - based object detection methods could be summarized as a three - step pipeline : firstly , region proposals are extracted as object candidates from a given image .the popular region proposal methods include selective search , edgeboxes , or the early stages of cascade detectors ; secondly , the extracted proposals are fed into a deep cnn for recognition and categorization ; finally , the bounding box regression technique is employed to refine the coarse proposals into more accurate object bounds . in this pipeline , the region proposal algorithm constitutes a major bottleneck in terms of localization effectiveness , as well as efficiency . on one hand , with only low - level features ,the traditional region proposal algorithms are sensitive to the local appearance changes , e.g. , partial occlusion , where those algorithms are very likely to fail . on the other hand , a majority of those methodsare typically based on image over - segmentation or dense sliding windows , which are computationally expensive and have hamper their deployments in the real - time detection systems .loss and loss for pixel - wise bounding box prediction.,scaledwidth=45.0% ] to overcome these disadvantages , more recently the deep cnns are also applied to generate object proposals . in the well - known faster r - cnn scheme ,a region proposal network ( rpn ) is trained to predict the bounding boxes of object candidates from the _ anchor _ boxes . however , since the scales and aspect ratios of _ anchor _ boxes are pre - designed and fixed , the rpn shows difficult to handle the object candidates with large shape variations , especially for small objects .another successful detection framework , densebox , utilizes every pixel of the feature map to regress a 4-d distance vector ( the distances between the current pixel and the four bounds of object candidate containing it ). however , densebox optimizes the four - side distances as four independent variables , under the simplistic loss , as shown in figure [ fig : introduction ] .it goes against the intuition that those variables are correlated and should be regressed jointly .besides , to balance the bounding boxes with varied scales , densebox requires the training image patches to be resized to a fixed scale . 
as a consequence, densebox has to perform detection on image pyramids , which unavoidably affects the efficiency of the framework .the paper proposes a highly effective and efficient cnn - based object detection network , called unitbox .it adopts a fully convolutional network architecture , to predict the object bounds as well as the pixel - wise classification scores on the feature maps directly .particularly , unitbox takes advantage of a novel intersection over union ( ) loss function for bounding box prediction .the loss directly enforces the maximal overlap between the predicted bounding box and the ground truth , and jointly regress all the bound variables as a whole unit ( see figure [ fig : introduction ] ) .the unitbox demonstrates not only more accurate box prediction , but also faster training convergence .it is also notable that thanks to the loss , unitbox is enabled with variable - scale training .it implies the capability to localize objects in arbitrary shapes and scales , and to perform more efficient testing by just one pass on singe scale .we apply unitbox on face detection task , and achieve the best performance on fddb among all published methods .before introducing unitbox , we firstly present the proposed loss layer and compare it with the widely - used loss in this section . some important denotations are claimed here : for each pixel in an image , the bounding box of ground truth could be defined as a 4-dimensional vector : where , , , represent the distances between current pixel location and the top , bottom , left and right bounds of ground truth , respectively . for simplicity , we omit footnote in the rest of this paper .accordingly , a predicted bounding box is defined as , as shown in figure [ fig : introduction ] . loss is widely used in optimization . in , loss is also employed to regress the object bounding box via cnns , which could be defined as : where is the localization error . however , there are two major drawbacks of loss for bounding box prediction .the first is that in the loss , the coordinates of a bounding box ( in the form of , , , ) are optimized as four independent variables .this assumption violates the fact that the bounds of an object are highly correlated .it results in a number of failure cases in which one or two bounds of a predicted box are very close to the ground truth but the entire bounding box is unacceptable ; furthermore , from eqn .[ eqn : l2 ] we can see that , given two pixels , one falls in a larger bounding box while the other falls in a smaller one , the former will have a larger effect on the penalty than the latter , since the loss is unnormalized .this unbalance results in that the cnns focus more on larger objects while ignore smaller ones . to handle this , in previous work the cnnsare fed with the fixed - scale image patches in training phase , while applied on image pyramids in testing phase . in this way , the loss is normalized but the detection efficiency is also affected negatively . in the following , we present a new loss function , named the loss , which perfectly addresses above drawbacks . 
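Before the new loss is defined in the next passage, the scale-imbalance argument above can be checked with a few lines of arithmetic: take a large box and a small box whose predictions are short by the same 10%, and compare the unnormalized l2 penalty with the overlap ratio. The 300-pixel and 30-pixel box sizes are arbitrary illustrative numbers.

```python
import numpy as np

def l2_loss(pred, gt):
    # pred, gt: per-pixel 4-d vectors (x_t, x_b, x_l, x_r), distances to the four bounds
    return float(np.sum((np.asarray(pred) - np.asarray(gt)) ** 2))

def iou(pred, gt):
    pt, pb, pl, pr = pred
    gt_t, gt_b, gt_l, gt_r = gt
    inter = (min(pt, gt_t) + min(pb, gt_b)) * (min(pl, gt_l) + min(pr, gt_r))
    union = (pt + pb) * (pl + pr) + (gt_t + gt_b) * (gt_l + gt_r) - inter
    return inter / union

big_gt,   big_pred   = (150, 150, 150, 150), (135, 135, 135, 135)       # 300-px box, 10% short
small_gt, small_pred = (15, 15, 15, 15),     (13.5, 13.5, 13.5, 13.5)   # 30-px box, 10% short

print("l2  :", l2_loss(big_pred, big_gt), "vs", l2_loss(small_pred, small_gt))               # 900 vs 9
print("iou :", round(iou(big_pred, big_gt), 3), "vs", round(iou(small_pred, small_gt), 3))   # 0.81 vs 0.81
# the l2 penalty differs by a factor of 100 for the same relative error,
# while the overlap ratio treats both boxes identically.
```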
given a predicted bounding box ( after relu layer , we have ) and the corresponding ground truth , we calculate the loss as follows : * input : * as bounding box ground truth + * input : * as bounding box prediction + * output : * as localization error + in algorithm 1 , represents that the pixel falls inside a valid object bounding box ; is area of the predicted box ; is area of the ground truth box ; , are the height and width of the intersection area , respectively , and is the union area . note that with , is essentially a cross - entropy loss with input of : we can view as a kind of random variable sampled from bernoulli distribution , with , and the cross - entropy loss of the variable is .compared to the loss , we can see that instead of optimizing four coordinates independently , the loss considers the bounding box as a unit .thus the loss could provide more accurate bounding box prediction than the loss .moreover , the definition naturally norms the to $ ] regardless of the scales of bounding boxes .the advantage enables unitbox to be trained with multi - scale objects and tested only on single - scale image . to deduce the backward algorithm of loss , firstly we need to compute the partial derivative of w.r.t . , marked as ( for simplicity , we notate for any of , , , if missing ) : to compute the partial derivative of w.r.t , marked as : finally we can compute the gradient of localization loss w.r.t . : from eqn .[ eqn : gradient ] , we can have a better understanding of the loss layer : the is the penalty for the predict bounding box , which is in a positive proportion to the gradient of loss ; and the is the penalty for the intersection area , which is in a negative proportion to the gradient of loss .so overall to minimize the loss , the eqn .[ eqn : gradient ] favors the intersection area as large as possible while the predicted box as small as possible .the limiting case is the intersection area equals to the predicted box , meaning a perfect match .based on the loss layer , we propose a pixel - wise object detection network , named unitbox .as illustrated in figure [ fig : network ] , the architecture of unitbox is derived from vgg-16 model , in which we remove the fully connected layers and add two branches of fully convolutional layers to predict the pixel - wise bounding boxes and classification scores , respectively . in training , unitbox is fed with three inputs in the same size : the original image , the confidence heatmap inferring a pixel falls in a target object ( positive ) or not ( negative ) , and the bounding box heatmaps inferring the ground truth boxes at all positive pixels . to predict the confidence ,three layers are added layer - by - layer at the end of vgg stage-4 : _ a convolutional layer _ with stride , kernel size ; _ an up - sample layer _ which directly performs linear interpolation to resize the feature map to original image size ; _ a crop layer _ to align the feature map with the input image .after that , we obtain a 1-channel feature map with the same size of input image , on which we use the sigmoid cross - entropy loss to regress the generated confidence heatmap ; in the other branch , to predict the bounding box heatmaps we use the similar three stacked layers at the end of vgg stage-5 with convolutional kernel size 512 x 3 x 3 x 4 . 
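Returning to the loss definition at the start of this passage (the description of the network head resumes below), here is a compact PyTorch sketch of the forward computation; automatic differentiation then reproduces the gradients derived above, so no hand-written backward pass is needed. The tensor layout (4-channel distance maps plus a binary mask of positive pixels) and the small epsilon are implementation assumptions.

```python
import torch

def iou_loss(pred, target, mask, eps=1e-6):
    """Pixel-wise IoU loss, L = -ln(IoU), averaged over valid (positive) pixels.

    pred, target: (N, 4, H, W) tensors holding the distances from each pixel to the
                  top, bottom, left and right bounds of its box; pred is assumed to
                  come out of a ReLU, i.e. it is non-negative.
    mask:         (N, 1, H, W) binary tensor, 1 where the pixel falls inside a
                  ground-truth box.
    """
    pt, pb, pl, pr = pred.unbind(dim=1)
    tt, tb, tl, tr = target.unbind(dim=1)

    pred_area = (pt + pb) * (pl + pr)
    gt_area = (tt + tb) * (tl + tr)

    inter_h = torch.min(pt, tt) + torch.min(pb, tb)
    inter_w = torch.min(pl, tl) + torch.min(pr, tr)
    inter = inter_h * inter_w
    union = pred_area + gt_area - inter

    iou = inter / (union + eps)
    loss = -torch.log(iou + eps)          # cross-entropy view with p(match) = IoU

    m = mask.squeeze(1)
    return (loss * m).sum() / (m.sum() + eps)

# toy check: a perfect prediction gives (near) zero loss
gt = torch.rand(2, 4, 8, 8) + 0.5
mask = torch.ones(2, 1, 8, 8)
print(iou_loss(gt.clone().requires_grad_(True), gt, mask))   # ~0
```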
additionally , we insert a relu layer to make bounding box prediction non - negative .the predicted bounds are jointly optimized with loss proposed in section [ sec : iou_loss ] .the final loss is calculated as the weighted average over the losses of the two branches .some explanations about the architecture design of unitbox are listed as follows : 1 ) in unitbox , we concatenate the confidence branch at the end of vgg stage-4 while the bounding box branch is inserted at the end of stage-5 .the reason is that to regress the bounding box as a unit , the bounding box branch needs a larger receptive field than the confidence branch . and intuitively , the bounding boxes of objects could be predicted from the confidence heatmap . in this way, the bounding box branch could be regarded as a bottom - up strategy , abstracting the bounding boxes from the confidence heatmap ; 2 ) to keep unitbox efficient , we add as few extra layers as possible .compared to densebox in which three convolutional layers are inserted for bounding box prediction , the unitbox only uses one convolutional layer . as a result ,the unitbox could process more than 10 images per second , while densebox needs several seconds to process one image ; 3 ) though in figure [ fig : network ] the bounding box branch and the confidence branch share some earlier layers , they could be trained separately with unshared weights to further improve the effectiveness . with the heatmaps of confidence and bounding box, we can now accurately localize the objects . taking the face detection for example , to generate bounding boxes of faces ,firstly we fit the faces by ellipses on the thresholded confidence heatmaps .since the face ellipses are too coarse to localize objects , we further select the center pixels of these coarse face ellipses and extract the corresponding bounding boxes from these selected pixels . despite its simplicity, the localization strategy shows the ability to provide bounding boxes of faces with high accuracy , as shown in figure [ fig : observation ] .in this section , we apply the proposed loss as well as the unitbox on face detection task , and report our experimental results on the fddb benchmark .the weights of unitbox are initialized from a vgg-16 model pre - trained on imagenet , and then fine - tuned on the public face dataset widerface .we use mini - batch sgd in fine - tuning and set the batch size to 10 . following the settings in ,the momentum and the weight decay factor are set to 0.9 and 0.0002 , respectively .the learning rate is set to which is the maximum trainable value .no data augmentation is used during fine - tuning .first of all we study the effectiveness of the proposed loss . to train a unitbox with loss , we simply replace the loss layer with the loss layer in figure [ fig : network ] , and reduce the learning rate to ( since loss is generally much larger , is the maximum trainable value ) , keeping the other parameters and network architecture unchanged .figure [ fig : convergence ] compares the convergences of the two losses , in which the x - axis represents the number of iterations and the y - axis represents the detection miss rate .as we can see , the model with loss converges more quickly and steadily than the one with loss . besides , the unitbox has a much lower miss rate than the unitbox- throughout the fine - tuning process . in figure[ fig : performance_vs ] , we pick the best models of unitbox ( 16k iterations ) and unitbox- ( 29k iterations ) , and compare their roc curves . 
though with fewer iterations , the unitbox with loss still significantly outperforms the one with loss .loss , the loss is much more robust to scale variations for bounding box prediction.,scaledwidth=45.0% ] moreover , we study the robustness of loss and loss to the scale variation .as shown in figure [ fig : scale_invariant ] , we resize the testing images from 60 to 960 pixels , and apply unitbox and unitbox- on the image pyramids . given a pixel at the same position ( denoted as the red dot ) , the bounding boxes predicted at this pixel are drawn . from the resultwe can see that 1 ) as discussed in section [ ssec : ssec2.1 ] , the loss could hardly handle the objects in varied scales while the loss works well ; 2 ) without joint optimization , the loss may regress one or two bounds accurately , e.g. , the up bound in this case , but could not provide satisfied entire bounding box prediction ; 3 ) in the x960 testing image , the face size is even larger than the receptive fields of the neurons in unitbox ( around 200 pixels ) .surprisingly , the unitbox can still give a reasonable bounding box in the extreme cases while the unitbox- totally fails . to demonstrate the effectiveness of the proposed method, we compare the unitbox with the state - of - the - arts methods on fddb . as illustrated in section [ sec : sec3 ] , here we train an unshared unitbox detector to further improve the detection performance .the roc curves are shown in figure [ fig : performance ] . as a result, the proposed unitbox has achieved the best detection result on fddb among all published methods . except that , the efficiency of unitbox is also remarkable . compared to the densebox which needs seconds to process one image, the unitbox could run at about 12 fps on images in vga size .the advantage in efficiency makes unitbox potential to be deployed in real - time detection systems .the paper presents a novel loss , i.e. , the loss , for bounding box prediction .compared to the loss used in previous work , the loss layer regresses the bounding box of an object candidate as a whole unit , rather than four independent variables , leading to not only faster convergence but also more accurate object localization .based on the loss , we further propose an advanced object detection network , i.e. , the unitbox , which is applied on the face detection task and achieves the state - of - the - art performance .we believe that the loss layer as well as the unitbox will be of great value to other object localization and detection tasks .v. belagiannis , x. wang , h. beny ben shitrit , k. hashimoto , r. stauder , y. aoki , m. kranzfelder , a. schneider , p. fua , s. ilic , h. feussner , and n. navab .parsing human skeletons in an operating room . , 2016 .h. li , z. lin , x. shen , j. brandt , and g. hua . a convolutional neural network cascade for face detection . in _ proceedings of the ieee conference on computer vision and pattern recognition _ ,pages 53255334 , 2015 .
in present object detection systems, deep convolutional neural networks (cnns) are utilized to predict the bounding boxes of object candidates, and have gained performance advantages over the traditional region proposal methods. however, existing deep cnn methods assume the object bounds to be four independent variables, which can be regressed by the l2 loss separately. such an oversimplified assumption is contrary to the well-received observation that those variables are correlated, resulting in less accurate localization. to address the issue, we first introduce a novel intersection over union (iou) loss function for bounding box prediction, which regresses the four bounds of a predicted box as a whole unit. by taking advantage of the iou loss and deep fully convolutional networks, the unitbox is introduced, which performs accurate and efficient localization, is robust to objects of varied shapes and scales, and converges quickly. we apply unitbox to the face detection task and achieve the best performance among all published methods on the fddb benchmark.
in may 2014 , the object management group ( omg ) formally released version 1.0 of the case management modeling and notation ( cmmn ) standard specification .the specification is intended to support case management applications .cmmn is based on two models , a behavioral model and an informational model .the cmmn specification indicates that the information model can be implemented using the content management interoperability services ( cmis ) specification , however no details are given .this paper addresses that gap by describing how an cmmn implementation can use cmis effectively .this paper is intended for implementors of cmmn , and should be read in conjunction with the cmmn specification and the cmis specification .familiarity with the cmmn and cmis specifications is assumed .case management is intended to support the needs of knowledge workers when engaged in knowledge intensive goal oriented processes .it is common for knowledge workers to interact via documents ( e.g. text documents , word processor documents , spreadsheets , presentations , correspondence , memos , videos , pictures , etc . ) .case management shares most of the knowledge intensive processes characteristics as defined by di ciccio _ et .al . _ which are knowledge driven , collaboration oriented , unpredictable , emergent , goal oriented , event driven , constraint and rule driven , and non repeatable .therefore , it makes sense that a platform to support knowledge workers provide content management and collaboration capabilities .case management is defined by forrester as : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ a highly structured , but also collaborative , dynamic , and information - intensive process that is driven by outside events and requires incremental and progressive responses from the business domain handling the case .examples of case folders include a patient record , a lawsuit , an insurance claim , or a contract , and the case folder would include all the documents , data , collaboration artifacts , policies , rules , analytics , and other information needed to process and manage the case . 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this paper starts with a short introduction to cmmn in section [ sec : main - cmmn ] and cmis in section [ sec : main - cmis ] .these introductions describe the main concepts , classes , and objects that will be used in the rest of the paper .section [ sec : main - alternatives ] describes the two implementation alternatives .section [ sec : main - interaction ] describes how the cmmn information model could be implemented in a cmis repository .section [ sec : main - models ] describes the implications for the cmmn models and for process interchange of case models .an example is given in section [ sec : main - example ] .the example describes some of the functionality the end users will observe in a cmmn implementation that uses a cmis repository as described in this paper .conclusions are presented in section [ sec : main - conclusions ] .two appendixes are included .appendix [ sec : app - metamodels ] shows the cmmn and cmis meta - models for reference purposes .finally , appendix [ sec : app - pseudocode ] provides an example java pseudocode showing a possible implementation of the cmmn information model in cmis . .mapping cmmn information model to cmis meta - model [ cols="<,<",options="header " , ]this section describes how to store the cmmn models in the cmis repository .it also describe the effects of using cmis as described in this paper on process interchange .the cmis repository can be used by the cmmn modeler tool to store the models .the modeler tool can take advantage of the versioning offered by most cmis repositories to maintain the versions of its models .it can also take advantage of the cmis folders to create project folders with the ability to create sub - folders to store the multiple assets of a project .in general , the cmis repository can be used as the modeler repository for cmmn models and other modeling artifacts .the cmmn models and other artifacts can be represented as ` cmis : document`s and stored in specialized ` cmis : document type`s and ` cmis : folder type`s .the cmmn model documents can have specialized meta - data for the cmmn modeler tool to use .for example , project name , department , etc .standard cmis meta - data can also be used by the cmmn modeler tool to keep track of its models .for example , ` cmis : name ` , ` cmis : description ` , ` cmis : createdby ` , ` cmis : creationdate ` , ` cmis : lastmodifiedby ` , ` cmis : lastmodificationdate ` , ` cmis : versionlabel ` , etc . 
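As a rough illustration of the modeler-repository usage just described, the sketch below stores a serialized CMMN model as a cmis:document inside a project folder and creates a new version of it. It assumes the Apache Chemistry cmislib Python client purely for brevity (the appendix of this paper uses Java and OpenCMIS instead); the repository URL, credentials, file names and exact keyword arguments are placeholders that should be checked against the target repository and client version.

```python
from cmislib import CmisClient   # Apache Chemistry cmislib, AtomPub binding (assumed)

# connection details are placeholders
client = CmisClient('http://localhost:8080/chemistry/atom', 'admin', 'admin')
repo = client.defaultRepository

# a project folder acts as the modeling tool's workspace for CMMN artifacts
project = repo.rootFolder.createFolder('Project XX - case models')

# the serialized model (an XMI or CMMN XSD compliant file) is stored as a cmis:document;
# standard properties such as cmis:name and cmis:description track it, while custom
# meta-data (project name, department, ...) would need a repository type definition
with open('claim-handling.cmmn', 'rb') as model_file:
    doc = project.createDocument('claim-handling.cmmn',
                                 contentFile=model_file,
                                 properties={'cmis:description': 'claim handling case model'})

# repository versioning keeps the history of model revisions for the modeler
pwc = doc.checkout()                                   # private working copy
pwc.checkin(checkinComment='added discretionary review task')
```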
in order for the cmmn implementation to take full advantage of the capabilities offered by cmis ,few extensions to cmmn are required , as follows .` property ` types : : can be extended as shown in table [ table : fieldtypes ] to support ` xsd : decimal ` , ` i d ` , and ` html ` types .note that if a cmmn application is exclusively using a cmis repository then it would never encounter one of these types .so these extensions may be optional . `casefileitem ` types : : may need to be extended as shown in table [ table : objtypes ] .this is optional , because not all implementations will need to support all the cmis objec types .implementations that need to support ` cmis : policy ` , ` cmis : item ` , or ` cmis : secondary ` will need to extend the ` casefileitemdefinition definitiontype ` s uri as described in table [ table : objtypes ] .extended attributes : : are needed in both alternatives . the _ embedded _ alternative requires extended attributes to support , + * ` index ` as an attribute of ` casefileitem ` + the _ integration _ alternative requires extended attributes to support , + * ` cmisobjectid ` as an attribute of ` casefile ` and ` casefileitem ` * ` index ` as an attribute of ` casefileitem ` * ` cmistypeid ` as an attribute of ` casefileitemdefinition ` * ` cmispropertyid ` as an attribute of ` property ` in cmmn 1.0 , these extensions affect process interchange .future versions of the cmmn specification may introduce extensible attributes and rules on how to preserve extended uris in ` casefileitemdefinition definitiontype ` s uri and ` property type ` s uri .currently , tools wishing to preserve cmis 1.0 process interchange may need to introduce an option when saving cmmn models to indicate if the model must be cmmn 1.0 compatible , and if so , the following transformations will be required , to remove extensions : * remove the extended attributes as follows , + ` index ` : : from ` casefileitem ` ` cmisobjectid ` : : from ` casefile ` and ` casefileitem ` ` cmistypeid ` : : from ` casefileitemdefinition ` ` cmispropertyid ` : : from ` property ` * map extended ` property type`s as follows , + ` xsd : decimal ` : : ( ` http://www.omg.org/spec/cmmn/propertytype/decimal ` ) to + ` double ` ( ` http://www.omg.org/spec/cmmn/propertytype/double ` ) ` xsd : id ` : : ( ` http://www.omg.org/spec/cmmn/propertytype/id ` ) to + ` string ` ( ` http://www.omg.org/spec/cmmn/propertytype/string ` ) ` xsd : html ` : : ( ` http://www.omg.org/spec/cmmn/propertytype/html ` ) to + ` string ` ( ` http://www.omg.org/spec/cmmn/propertytype/string ` ) * map extended ` casefileitemdefinition definitiontype`s as follows , + ` cmis : policy ` : : ( ` http://www.omg.org/spec/cmmn/definitiontype/cmispolicy ` ) to + ` unknown ` ( ` http://www.omg.org/spec/cmmn/definitiontype/unknown ` ) ` cmis : item ` : : ( ` http://www.omg.org/spec/cmmn/definitiontype/cmisitem ` ) to + ` unknown ` ( ` http://www.omg.org/spec/cmmn/definitiontype/unknown ` ) ` cmis : secondary ` : : ( ` http://www.omg.org/spec/cmmn/definitiontype/cmissecondary ` ) to + ` unknown ` ( ` http://www.omg.org/spec/cmmn/definitiontype/unknown ` ) * review the generalizations from cmis classes in the _ embedded _ alternative , which are , + ` casefile ` : : generalization of ` cmis : folder ` ` casefileitem ` : : generalization of ` cmis : object ` ` casefileitemdefinition ` : : generalization of ` cmis : object type ` ` property ` : : generalization of ` cmis : property type `this example describes an hypothetical cmmn implementation using a 
cmis repository to implement the case file and to store cmmn models , as described in this paper . in this example, the implementation has two end user front end tools , the modeling tool and the client tool .both front ends may be integrated into a single user interface .the modeling tool allows users to create cmmn case models , and so , implements the design time aspects of cmmn .the modeling tool is used by business analysts or case workers to create , update , and manage cmmn models .case models are serialized into machine readable files as described in the cmmn specification .the files could be xmi or cmmn xml - schema ( xsd ) compliant files .those files are stored in the cmis repository as documents .the client tool allows case workers to interact with a case instance , and so , implements the runtime aspects of a cmmn implementation .case workers using the client tool are able to create case instances , interact with case instances by adding content , executing tasks and stages , engaging in planning by adding discretionary items to the case instance plan , collaborating with other case workers to complete case instances , etc .the case instance information model is implemented in cmis as ` cmis : folder ` representing the case file .therefore , each case instance will have its unique cmis folder .the user using the client tool can see the state of the case instance in the cmis folder and associated content .an example of a case file is shown in figure [ fig : example ] . in that figure ,the case instance for project xx has a ` casefileitem ` data 1 with some properties , and a sub - folder for incoming documents with two documents , a house picture and a report document . in a system with a clear separation between design and runtime ,a business analyst may create a case model and save it in the cmis repository using the modeling tool .the modeling tool may expose the cmis versioning capability .taking advantage of these capabilities , the business analysts may maintain multiple versions of the case model and may decide to deploy to a production system one of those versions . in a system with no separation between design and runtime ,a case worker may create a cmmn model starting from scratch or using a template stored in the cmis repository . in both cases ,the resulting model may be stored in the cmis repository for future usage as a template . in systems with no clear separation between design and runtime , models will normally start incomplete and will evolve as the case workers process the instance .these case models will continually evolve , and so , the version capabilities of cmis will be used to keep track of the evolution of the model . eventually a case instance will be created and case workers will collaborate to complete the case using the client tool . documents of multiple types maybe required to process the case instance .for example , emails , word processing documents , spreadsheets , pictures , videos , voice recordings , case comments , etc . those documents will be stored in the case folder . to organize those documents , the case workers may decide to create a folder structure under the case folder . 
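Below is a sketch of such a per-case folder structure, mirroring the "project xx" example above; the correspondence sub-folders discussed next would be created the same way. As before, it assumes the cmislib Python client and uses placeholder names and files; the point is only that the case file is an ordinary cmis:folder that any CMIS client, not just the case client tool, can inspect and extend.

```python
from cmislib import CmisClient   # assumed client, as in the earlier sketch

repo = CmisClient('http://localhost:8080/chemistry/atom', 'admin', 'admin').defaultRepository

# one cmis:folder per case instance implements the CaseFile
case_folder = repo.rootFolder.createFolder('case-0042 (project xx)')

# CaseFileItems of folder/document type become sub-folders and documents of the case
incoming = case_folder.createFolder('incoming documents')
with open('house-picture.jpg', 'rb') as f:
    incoming.createDocument('house picture', contentFile=f)
with open('report.docx', 'rb') as f:
    incoming.createDocument('report', contentFile=f)

# any CMIS client can now list the case file; documents appearing here are the
# kind of change that raises the events used by entry and exit criteria
for child in case_folder.getChildren():
    print(child.name)
```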
for example, it may be useful to create a sub - folder for correspondence .that correspondence sub - folder may be further subdivided into an incoming correspondence sub - folder and an outgoing correspondence sub - folder .in addition to the client tool that allows the case workers to interact with the case instance , other cmis client programs could also interact with the case folder .documents in the case instance may be created by the case workers or it may be placed in the case instance by computer programs using the cmis api to access the case file .events are raised when documents are added to the case , are modified , or are removed . because both documents and folders are ` casefileitem`s , those events can be used in entry or exit criterion to tasks , stages , or milestones .so , as the case file is modified by either the case workers using the client tool or cmis clients interact with the case file , then entry or exit criterion may be triggered .this paper described how to implement the cmmn information model using cmis .there is no need to extend cmis to be used by cmmn , and only minor extensions to cmmn are proposed in this paper .two implementation alternatives were described .an _ integration _alternative where an external cmis repository is used and an _ embedded _ alternative where a cmis repository is embedded within the cmmn engine .the _ integration _ alternative will be appealing to process technology vendors , and the _ embedded _ alternative will be appealing to content management vendors . in both cases ,the cmis repository can be used to store the cmmn models to take advantage of cmis versioning and meta - data .extensive sample java pseudocode is provided and analysis of the meta - models was done to guide implementors . 10 . .http://chemistry.apache.org/ , 2015 .http://chemistry.apache.org/java/opencmis.html , 2015 .j. brown and f. muller . .https://github.com/cmisdocs/serverdevelopmentguidev2 , 2nd edition edition , 2014 . .l. c. clair , c. moore , and r. vitti . .technical report , forrester , cambridge , ma , 2009 . c. di ciccio , a. marrella , and a. russo . .4(1):2957 , 2015 . j. b. hill . .technical report june , gartner , 2012 .m. a. marin , r. hull , and r. vaculn . .in m. rosa and p. soffer , editors , _ business process management workshops _ ,volume 132 , pages 2430 .springer berlin heidelberg , tallinn , estonia , sept .f. muller , j. brown , and j. potts . .manning publications co. , 2013 .isbn 978 - 1 - 617 - 29115 - 9 . oasis . .http://docs.oasis-open.org/cmis/cmis/v1.1/csprd01/cmis-v1.1-csprd01.pdf , 2012 .omg . , 2014 .document formal/2014 - 05 - 05 . k. d. swenson . .landmark books . meghan - kiffer press , tampa ,florida , usa , 2010 .w3c . 
, october 2004 .the cmmn and the cmis meta - models are provided here for reference purposes .the four figures shown here have been copied from the formal specifications .figure [ fig : cmmnhighlevel ] describes the cmmn high level meta - model showing the relationship between the ` case ` and the ` casefile ` that implement the cmmn information model .figure [ fig : cmmncasefile ] describes how the ` casefile ` contains all the ` casefileitem`s in the case .in addition , it shows that ` casefileitem`s can be used to create a folder structure using the composition relationship between ` parent ` and ` children ` ; and it also shows that relationships between ` casefileitem`s can be implemented using the reflexive association between ` sourceref ` and ` targetref ` .these cmmn meta - models describe a cmmn model at modeling time and can be used for process interchange .figure [ fig : cmismetamodel ] describes the cmis objects meta - model , and figure [ fig : cmistypes ] describes the cmis type system .these cmis meta - models describe a content repository runtime , by describing the objects stored in the content repository at execution time .all the sample java pseudocode present here is uses apache chemistry opencmis which is a standard cmis reference client library for java .this pseudocode is an example of how to use opencmis to implement the cmmn information model .it is not intended for production usage and so it lacks error recovery pseudocode .there are few methods that use ` system.out.println ` in areas that are left as exercise to the reader to complete the methods .this section describes the cmmn standard set of ` casefileitem ` operations for the behavioral model to navigate the information model ( see the cmmn specification section 7.3.1 casefileitem operations ) . the class ` casefileitemoperations ` is used to define all the methods described in this paper .the class constructor requires a cmis session and a root folder that serves as the ` casefile ` for the case instance .most of the methods operate on a case instance ( ` casefile ` ) . for illustration purposes , some methods in this class can operate outside the case instance .get a ` casefileitem ` ( a ` cmis : object ` most likely a document or folder ) instance with ` itemname ` ( ` cmis : name ` ) within the ` casefile ` container .if no ` casefileitem ` instance for the given ` itemname ` exists , an empty ` cmis : document ` ( ` casefileitem ` ) instance is returned .if more than one ` casefileitem ` instance name has the same ` itemname ` ( ` cmis : name ` ) , an arbitrary one should be returned .this java pseudocode provides three implementations for this operation .one returning a ` cmis : object ` ( ` getcasefileiteminstance ` ) , one returning a ` cmis : document ` ( ` getcasefileitemdocumentinstance ` ) , and finally one returning a ` cmis : folder ` ( ` getcasefileitemfolderinstance ` ) .get a ` casefileitem ` ( a ` cmis : object ` most likely a document or folder ) instance with ` itemname ` ( ` cmis : name ` ) and ` casefileitem ` s ` index ` ( see figure [ fig : casefileintegrated ] and figure [ fig : casefileembeded ] ) within the ` casefile ` container .this operation is to be used for ` casefileitem ` ( a ` cmis : object ` instances with a multiplicity greater than one .the ` index ` is used to identify a concrete ` casefileitem ` ( a ` cmis : object ` most likely a document or folder ) instance from the collection of ` casefileitem ` instances . 
if no ` casefileitem ` instance for the given ` itemname ` exists , or if the ` index ` is out of the range of ` casefileitem ` instances , an empty ` casefileitem ` instance is returned .note that java does not provide methods overloading , so a number 2 was appended to the method names .this java pseudocode provides three implementations for this operation .one returning a ` cmis : object ` ( ` getcasefileiteminstance2 ` ) , one returning a ` cmis : document ` ( ` getcasefileitemdocumentinstance2 ` ) , and finally one returning a ` cmis : folder ` ( ` getcasefileitemfolderinstance2 ` ) . `getcasefileiteminstanceproperty ( ` * * ` in ` * * ` item : casefileitem instance , ` + ` propertyname : string , ` + ` ` * * ` out ` * * ` element ) ` get the value of a ` casefileitem ` instance property .if ` propertyname ` refers to a non - existing property of the ` casefileitem ` instance , an empty ` element ` must be returned .the element returned must be of the specified property type for the ` casefileitem ` instance .the methods in this section are used to navigate ` cmis : folders ` when they implement the ` casefileitem ` self - referencing composition relationship between ` parent ` and ` children ` ( see figure [ fig : cmmncasefile ] ) .get a child ` casefileitem ` instance for a given ` casefileitem ` instance .this operation is valid for ` casefileitem`s implemented as ` cmis : folder`s ( ` cmis : folder ` ) .the value of parameter ` childname ` specifies the name ( ` cmis : name ` ) of the child to get with in the ` cmis : folder ` .if no child of the given name exists for the ` casefileitem ` instance , an empty ` casefileitem ` instance is returned .this operation is provided to navigate the composition relationship between ` casefileitem`s used to implement a folder structure .they are represented in the cmmn meta - model ( see [ fig : cmmncasefile ] ) by the ` parent ` and ` children ` composition relationship .this operation navigates from the ` parent ` ( always a ` cmis : folder ` ) to the ` child ` ( most likely a ` cmis : document ` or folder ) .this java pseudocode provides three implementations for this operation .one returning a ` cmis : object ` ( ` getcasefileiteminstancechild ` ) , one returning a ` cmis : document ` ( ` getcasefileitemdocumentinstancechild ` ) , and finally one returning a ` cmis : folder ` ( ` getcasefileitemfolderinstancechild ` ) .get the parent ` casefileitem ` ( ` cmis : folder ` ) instance of a ` casefileitem ` instance .note in the worse case , the parent will be the ` casefile ` , which is the parent of all the ` casefileitem`s in a case .this operation is provided to navigate the composition relationship between ` casefileitem`s used to implement a folder structure .they are represented in the cmmn meta - model ( see [ fig : cmmncasefile ] ) by the ` parent ` and ` children ` composition relationship .this operation navigates from the ` child ` ( most likely a ` cmis : document ` or folder ) to the ` parent ` ( always a ` cmis : folder ` ) .the methods in this section are used to navigate the ` cmis : relationship ` used to implement the ` casefileitem ` self - referencing reflexive association between ` sourceref ` and ` targetref ` ( see figure [ fig : cmmncasefile ] ) .this operation is provided to navigate relationships between ` casefileitem`s .they are represented in the cmmn meta - model ( see [ fig : cmmncasefile ] ) by the ` sourceref ` and ` targetref ` relationship .this operation navigates from the ` targetref ` to the ` sourceref ` 
. `getcasefileiteminstancetarget(`**`in`**` item : casefileitem instance,` `targetname : string,` **`out`**` casefileitem instance)` Get a target `casefileitem` instance for a given `casefileitem` instance. The value of the parameter `targetname` specifies the name (`cmis:name`) of the target to get. If no target of the given name exists for the `casefileitem` instance, an empty `casefileitem` instance will be returned. This operation is provided to navigate relationships between `casefileitem`s. They are represented in the cmmn meta-model (see [fig:cmmncasefile]) by the `sourceref` and `targetref` relationship. This operation navigates from the `sourceref` to the `targetref`. This section shows some examples of how to use cmis to modify the case instance (`casefile`) information model. Three creation methods are included here: two of them allow folders and documents to be created in the root folder representing the case instance (`casefile`), and one creates relationships between cmis objects. They can be used as examples of how the case information model can be modified. This section describes how to receive events from the cmis repository. The following methods are included in this class for illustration purposes, but they are not case-instance specific; they will receive events from all the case instances in the cmis repository. These methods should be executed in their own thread, because `getcontentchangesforeventpropagation` runs an infinite loop. Most implementations will encapsulate the two methods shown in this section in another class to be executed in its own thread.
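As a rough illustration of the creation and event-polling operations just described, the following sketch uses the Apache Chemistry OpenCMIS client API. The class name, the folder and document names, and the single-poll structure are illustrative assumptions for the scenario in this paper rather than anything mandated by the cmmn or cmis specifications; a production listener would call `pollChanges` in a loop on its own thread and persist the change log token between polls.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.ChangeEvent;
import org.apache.chemistry.opencmis.client.api.ChangeEvents;
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;

public class CaseFileExamples {

    /** Create a correspondence sub-folder (a folder casefileitem) inside the case instance folder. */
    static Folder createCorrespondenceFolder(Folder caseFolder) {
        Map<String, Object> props = new HashMap<>();
        props.put(PropertyIds.OBJECT_TYPE_ID, "cmis:folder");
        props.put(PropertyIds.NAME, "Correspondence");
        return caseFolder.createFolder(props);
    }

    /** Add a document casefileitem; the repository raises a CREATED change event for it. */
    static Document addIncomingLetter(Session session, Folder correspondence, String text) {
        byte[] bytes = text.getBytes(StandardCharsets.UTF_8);
        Map<String, Object> props = new HashMap<>();
        props.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
        props.put(PropertyIds.NAME, "incoming-letter.txt");
        ContentStream content = session.getObjectFactory().createContentStream(
                "incoming-letter.txt", bytes.length, "text/plain",
                new ByteArrayInputStream(bytes));
        return correspondence.createDocument(props, content, VersioningState.MAJOR);
    }

    /** Poll the repository change log once; the returned token is the starting point for the next poll. */
    static String pollChanges(Session session, String changeLogToken) {
        ChangeEvents events = session.getContentChanges(changeLogToken, true, 100);
        for (ChangeEvent event : events.getChangeEvents()) {
            // Each event carries the change type (created/updated/deleted/security) and the object id,
            // which the cmmn engine can map back to a casefileitem to evaluate entry or exit criteria.
            System.out.println(event.getChangeType() + " " + event.getObjectId());
        }
        return events.getLatestChangeLogToken();
    }
}
```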
this paper describes how a case management modeling and notation ( cmmn ) implementation can use content management interoperability services ( cmis ) to implement the cmmn information model . the interaction between cmmn and cmis is described in detail , and two implementation alternatives are presented . an _ integration _ alternative where any external cmis repository is used . this alternative is useful to process technology vendors looking to integrate with cmis compliant repositories . an _ embedded _ alternative where a cmis repository is embedded within the cmmn engine . this alternative is useful to content management vendors implementing cmmn . in both alternatives a cmis folder is used as the case file containing the case instance data . the cmis repository can also be used to store the cmmn models to take advantage of cmis versioning and meta - data . extensive java pseudocode is provided as an example of how a cmmn implementation can use a cmis repository to implement the cmmn information model . no extensions to cmis are needed , and only minor extensions to cmmn are proposed . * keywords * : case handling , case management , case management system , case management modeling and notation , cmmn , cmmn implementation , content management , content management system , content management interoperability services , cmis
aids ( acquired immure deficiency syndrome ) , as one of the most dangerous diseases over human history , has been continuously spreading at an enormous speed with an extremely high rate of death ( from the moment when the first infection case was confirmed ) .now it has already spread to all the regions of the world and been a great threat not only to the human health but also to the human society due to its own epidemiologic characters which make the objection to aids an extremely complex and difficult task to address .a high infection rate of population will cause catastrophe to the development of national economy for two reasons . on one hand ,most infected people are in a group aged from 24 to 45 who do the main contributions to the country s productivity , thus the aids will cause a great decline of the social wealth . on the other hand , to carry out a widely covered treatment program on treating the hiv infected people will be a heavy burden for the government finance due to the great cost on expensive medicine as well as regularly and continuously implemented therapy . therefore , to investigate the edpidemic behaviors of hiv / aids appears not only of great theoretical interest in understanding the underlying spreading mechanism , but also necessary and urgent in practice .the extensively investigated models for epidemics , such as the standard susceptible - infected - removed ( sir ) and susceptible - infected - susceptible ( sis ) models , often involve two hypotheses .first , the population is assumed closed , that is , the population size is fixed .however , recent researches on the spread of hiv ( especially in africa and other worst - afflicted areas ) indicate the existence of strongly interplay between hiv epidemics and age structures , thus the demographic impact can not be neglected ( see the review papers and the references therein ) .secondly , the epidemiological models are often established based on perfect and homogeneous mixing , that is to say , all individuals are able to infect all others and the infectivity of each individual is almost the same . to replace the perfect mixing assumption ,one can introduce the _ epidemic contact network _, wherein the nodes denote individuals and edges represent the connections between individuals along which the infection may spread ( see the review papers about network epidemics and the references therein ) . the infected individual can infect a susceptible one only if they are neighboring in the network .the homogeneous mixing assumption can be implemented by using epidemic contact networks with homogeneous degree distributions , such as regular lattices , random networks , and so on .however , recent empirical data exhibit us that the real - world sexual contact patterns are far different from the homogeneous ones . 
the corresponding networks , similar to many other real - life networks , display the so - called scale - free property , that is , they are of power - law degree distributions .this power - law distribution falls off much more gradually than an exponential one , allowing a few nodes of very large degree to exist .these high - degree nodes are called _ hub nodes _ in network science and _ superspreaders _ in epidemiological literatures .recent theoretical researches on epidemics show that the topology of epidemic contact networks will highly affect the dynamical behaviors , and it is also demonstrated that the effect of the superspreaders on hiv epidemics can not be ignored .therefore , the structural effect should be taken into account when modeling hiv / aids epidemics .in addition , the introduction of antiretroviral ( arv ) drug therapy is also one of the important result - dependent factors .no treatment is more efficient for hiv - infected individuals than the medicine , which combines two or three antiretroviral drugs in `` cocktail '' regimens .these regimens , known as highly active arv therapy , have resulted in the reduction of hiv levels in the blood , often to undetectable levels , and have markedly improved immune function of hiv - infected individuals . the advent and widespread application of arv has dramatically changed the typical course of hiv infection and aids , especially in high - income countries . on the other hand , however , in the low - income countries , the overwhelming proportion of hiv - infected persons has no access to arv . in sub - saharan africa , for example , this lack of treatment access has transformed into rapidly escalating death rates .although the usage of arv appears effective in bating hiv , it will bring too heavy pressure in economy for poor countries .therefore , the better understanding of the effect of arv treatment may enlighten readers in allocating the financial resources .although the demographic structure and sexual contact pattern , respectively , has been taken into account in the previous hiv / aids epidemic models , there are few works simultaneously consider these two ingredients . in the present model ,both the demographic impact and heterogeneity mixing effect are considered . and many important features of real - life hiv epidemics can be naturally generated by combining these two ingredients .this article is organized as follows : in section 2 , the model is presented in details . in section 3 , the main properties of this model are shown .then , in section 4 , we will try to predict the hiv / aids epidemics by this model .finally , we sum up this article and discuss the relevance of this model to the real world in section 5 .the hiv is transmitted by body fluid through several main routes including sexual contacts , sharing injectors among drug users , perinatal transmissions , transfusion of contaminated blood products etc ., which are closely related to human beings social activities . in different regionsthe popularity of each route is variable according to the culture and social circumstance . 
in some areasthe homosexual contacts and injecting drug use play the main role in hiv epidemics , while the main track in hiv transmission is the heterosexual contacts in the global scope .therefore , in this model , only the heterosexual relationships are taken into account , thus the corresponding epidemic contact networks are bipartite graphs .the epidemic contact network starts with males and females , each of which is sex - active with age between 15 and 49 , and only the heterosexual contacts are permissive .since men tend to over - report their number of partners whereas women tend to under - report , the total numbers of sexual partners of males and females are not equal in existing surveys . however , for simplification , we assume the degree distribution for both male and female nodes are the same .assign each male node s degree according to a given degree distribution with minimal degree .according to the empirical data in sweden , we set . after obtaining the _ degree sequence _ of male nodes, we let the female nodes have the same degree sequence and randomly assign each female node s degree according to this sequence . herethe degree sequence means a set of all nodes degrees , one can find a more detailed and strict definition in ref .the edges are generated randomly by using the mechanism of configuration model .note that , different from most previous studies about epidemics on static networks , the present network structure evolves with time according to some followed rules of hiv epidemic dynamics .we focus on the network susceptible - infected - removed ( sir ) model in which individuals can be in three discrete states , susceptible , infected or removed ( dead ) .the infected ones can be divided into two subclasses : the hiv - positive individuals and persons with aids .since the median time from aids to death is very short ( about 7 month for adults ) compared with the median incubation time for aids , we assume that when an hiv - positive person becomes an aids - patient , she or he will immediately be in death ( i. e. within one year ) . this model is implemented by computer simulation with a time step equal to one year when mimicking the reality .the simulation processes are as follows .\(1 ) set all the nodes to be susceptible except one randomly selected infected one .\(2 ) at each time step for each susceptible node , denote and the number of its neighboring infected nodes not in process of arv treatment ( non - arv user ) or contrary ( arv user ) , respectively .the probability that the node will become infected in the next time step is if is male . here is the transmission probability per sexual partner , which is considered as a more appropriate estimate than probability per sexual act , and the subscript represents whether the corresponding hiv - positive person has taken the arv treatment . since the male - to - female transmission is about twice efficient as female - to - male transmission , if is female , the corresponding probability is where and are restricted below 0.5 .it has been estimated by an analysis of longitudinal cohort data that antiretroviral therapy reduces per - partnership infectivity by as much as 60% , thus we set .\(3 ) at each time step , each infected node ( except the newly infected ones ) may die with probability either ( for non - arv user ) or ( for arv user ) . according to the recent estimations , we set and . 
the dead individuals are removed from the population .repeat these processes for desired time .note that , each newly infected node will be arv user at probability , and all the existing arv users will keep using arv . in this model ,all the nodes ( sex - active persons ) are divided into 7 age - groups ( labeled a1-a7 ) : 15 - 19 , 20 - 24 , 25 - 29 , 30 - 34 , 35 - 39 , 40 - 44 , 45 - 49 . at beginning, each node chooses to be in one age - group with probability according to the age structure in the year corresponding to time step zero . at each time step, each female individual may bear a child according the corresponding age - specific fertility rates .if she is infected and has not taken arv treatment , the perinatal transmission probability is . and it reduces to if arv treatment is taken . based on some previous empirical studies , we set and .the infected elder persons ( year ) may die with probability or during each time step , and the corresponding probability for perinatally infected children is 0.2 . at the end of each time step ,1/5 randomly selected living persons in age - group a1-a6 will reach the elder group , and 1/5 randomly selected living persons in group a7 will be removed from this system . if the time step is less than 15 , we simply assume equal number ( to the number of removal nodes in a1 ) of susceptible individuals will be added to group a1 ; else if , individuals will be added to group a1 , where denotes the number of newborn babies without hiv at time . herewe simply assume all the infected babies will die before 15 years old since the mortality per year for them is much higher than adults .all these newly added ones will joint the epidemic contact network according to the rules of * section 2.1 * , that is , the female / male nodes will randomly choose sexual partners among all the young and old men / women according to their given degrees that obey the distribution .see the * appendix a * for the source of all the population and demographic data .there are three free parameters in the present model : the average degree which determine the degree distribution when the power - law exponent is given , the transmission probability , and the arv - receiving rate .the former two parameters are relative to the behaviors while the last one is partially dependent on financial conditions . in this section, we will show some simulation results and investigate the main properties about this model by adjusting the above parameters .some previous works show that for most cases , the qualitative features of epidemic dynamics will not be affected by the slightly varying of population size and age - structure , thus in this section , the age - specific fertility rates are kept unchanged .we use the age - density and age - specific fertility rates of china in 2005 for initialization , with the age - specific fertility rates unchanged all through .the network size is . denotes the ratio of hiv - positive individuals to the whole population of sex - active ones ( i. e. the network size ) .the corresponding parameters are .,width=491 ] as a function of time step ( from the peak point to nearly zero ) .the corresponding parameters are .,width=491 ] many infections including hiv / aids can persistently exist in population despite of a very low prevalence. this epidemiological phenomenon can not be illuminated by previous models with homogeneous mixing hypothesis . 
by using the epidemic contact network with power - law degree distribution ,the present model can reproduce the above observed phenomenon , which is in accordance with some previous theoretical studies about sis / sir models on scale - free networks .note that , since there are newly added susceptible individuals at each time step , the dynamic behaviors of present model may be closer to sis model than sir model .figure 1 reports a typical simulation result wherein the prevalence of hiv is only about .however , the infections can persistently exist for thousands years .for comparison , we exhibit the situation under homogeneous mixing hypothesis in figure 2 , where the three parameters are the same but all the nodes have fixed degree 3 .the prevalence increases in the early stage since only few hiv - positive persons die , and then dies out obeying a linear form .in addition , this model displays oscillatory behaviors , which have been observed in real world and reproduced by some previous network epidemic models based on small - world networks or scale - free networks .one can see references for the concept of small - world networks .however , since the time from first report about hiv cases to now is relative short compared with the oscillatory period , we can not make sure if the real - life hiv epidemics showing some kinds of oscillation ..,width=491 ] .,width=491 ] the transmission probability not only depends on the pathological characters of hiv , but only can be managed by government and other organizations .for example , the popularization of the usage of condoms will sharply reduce the transmission probability per sexual partner / act as observed in thailand and cambodia .figure 3 exhibits the curves for different : when is large , the prevalence fleetly increase until considerable ratio of whole population gets infected , while for smaller , the infection either persistently exists in a low prevalence - level , or vanishes .we have also investigated the effect of average degree on network epidemic behaviors . as shown in figure 4 ,the behaviors of this model are very sensitive to the mean degree .clearly , larger mean degree will statistically enlarge the probability of coming into contact with infected individuals , thus leading to more serious situation .combine this result and that of * section 3.1 * , one will find that the epidemic behaviors are highly affected by the network topology . .this value of is chosen according to the case of northern thailand .,width=491 ] .,width=491 ] the antiretroviral drug therapies have two opposite effects . on one hand, it will reduce the probabilities of both sexual transmission and perinatal transmission , thus ought to be very helpful in controlling the epidemic spreads .on the other hand , this treatment will increase the life expectancy for hiv - positive persons and these arv users can infected more individuals if they do not stop their risky behaviors , thus this treatment may on the contrary increase the incidence of hiv / aids . 
Here, we assume that the usage of ARV will not change patients' behaviors, and in Figure 5 one can see that this treatment can substantially reduce HIV epidemics, and may even make it possible to eradicate high-prevalence HIV epidemics under certain ideal conditions. It is worthwhile to emphasize that the simulation results in Figure 5 depend strongly on the choices of some uncertain and imprecise parameters such as the ratios , and . Therefore, the corresponding results should not be regarded as conclusive; further empirical and experimental studies of antiretroviral drug therapies may lead to more accurate results. In addition, the behavior parameters and are more important than , and for the very large case the impact of antiretroviral drug therapies is very weak, as shown in Figure 6. Hence, reducing risky behaviors is much more effective in the fight against HIV/AIDS than ARV treatment, especially for poor countries. The cases of Thailand and eastern Zimbabwe are very good examples. In the epidemic contact networks, the degree can reflect the susceptibility of an individual to some extent; that is, a node with higher degree is statistically easier to infect. Here we investigate the behavior of the average degree of the newly infected nodes in the network at time , denoted by . We use the average of 10 realizations to reduce the fluctuations. As shown in Figure 7, the dynamical spreading process is then clear: after the high-risk population is infected within a short time, the spread moves towards the generic (low-risk) population. This hierarchical spread has been reported in some previous purely theoretical studies of the SI model, but has not been emphasized in previous HIV/AIDS epidemic models. However, this phenomenon has been observed in real-life HIV epidemics: in the early stage the infection adheres to high-risk persons, such as sex workers, injection drug users, men who have sex with men, and so on, and then it diffuses to the generic population. As a typical example, one can see the situation in China. Previous studies on the prediction of HIV/AIDS epidemics mainly concentrate on the number of reported HIV-positive cases. These methods, like the empirical Bayesian back-calculation method, can give a relatively accurate prediction in the short term; however, they cannot provide useful information about the underlying dynamic mechanism. Therefore, in this section we will try to predict HIV/AIDS epidemics by using the present model. The lack of comprehensive and authentic data is one of the most serious problems in evaluating and predicting HIV/AIDS epidemics. For example, in the year 2004 the Chinese Ministry of Health reported that the number of living HIV-positive persons was about , but in the year 2006 it stated that this number was incorrect due to a large overestimate. Actually, the veracity of the reported HIV-positive numbers is dubious.
from the web site of unaids , except the data of hiv - positive numbers , one can also obtain the data about the number of aids - patients from national sentinel surveillances .these data are also dubious since the monitor policies are not professional especially in developing countries and some aids - patients do not want to report to the sentinel surveillances .however , the data from national sentinel surveillances do not involve external estimating algorithm , thus we believe they are at least more faithworthy than the hiv - positive numbers . in figure 8, we show four typical forms of the time series of the number of aids cases .although there may be some other forms , the present fours are representative .the most serious country is tanzania , wherein a considerable ratio of whole population is infected . without impelling control policies ,tanzania will be completely destroyed before long . in china, the proportion of aids cases seems very small but the amount of aids is quite large as a result of the striking huge ensemble , and its quick and monotone increasing trend brings us heavy misgivings .thailand is a successful example of external control .once , thailand , especially the northern thailand , is the most serious country in asia due to its thriving and prosperous pornographic business .delightfully , the government is cognizant of this problem and forces all the sex workers using condoms .this policy leads to a sharply decreasing of hiv - positive and aids - patient numbers .some other countries , like brazil , have also achieved successful policies in controlling hiv / aids epidemics .however , these emergent external policies bring great challenges in predicting .the most optimistic situation is that of belgium .the hiv / aids persists in a very low prevalence level and no increasing trend is observed . in our model , according the assumption in * section 2.2 * , we consider the mortality at time as the number of newly monitored aids - patients .this quantity , denoted by , can be obtained from the model by combining the death rolls of children , adults and old persons .because of the computational limit , we can at most handle the epidemic contact network with size . however , the number of people aged from 15 to 49 in some countries is much larger than . in order to compare the time series generated by our model and those of real country ,all the data are normalized by the population size aged from 15 to 49 .the normalized number of aids cases is denoted by . in addition , we assume the number of aids cases is proportional to the hiv - positive number at a given time . denote the normalized data from sentinel surveillances , and the data generated from our model , the departure is defined as ^ 2.\ ] ] since the parameter is known after the country is selected ( these data can also be obtained from unaids ) , there are only two tunable parameter and .hence this task degenerates to an optimal problem : determine the proper value of and to minimize the departure .the optimal problem is carried out by searching all the values of in the cartesian product of sets and , and choosing the one corresponding to minimal .the parameters will not change with time , that is to say , the present prediction is valid only for the cases with no additional interventions . 
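As a hedged sketch of this brute-force scan (our own illustration; the simulator interface, the grid values and the method names are placeholders rather than the code used to produce the figures), the minimization of the departure over the Cartesian product of candidate parameter values can be organized as follows.

```java
/** Grid search for the parameters minimizing the squared departure between data and model. */
public class DepartureFit {

    /** Hypothetical simulator: normalized number of AIDS deaths per time step for given parameters. */
    interface Simulator {
        double[] run(double meanDegree, double transmissionProbability, int steps);
    }

    /** Returns { best mean degree, best transmission probability, minimal departure }. */
    static double[] bestParameters(double[] aData, double[] kGrid, double[] betaGrid, Simulator sim) {
        double bestK = kGrid[0], bestBeta = betaGrid[0];
        double bestDeparture = Double.POSITIVE_INFINITY;
        for (double k : kGrid) {
            for (double beta : betaGrid) {
                double[] aModel = sim.run(k, beta, aData.length);
                double departure = 0.0;
                for (int t = 0; t < aData.length; t++) {
                    double diff = aData[t] - aModel[t];
                    departure += diff * diff;   // squared departure, summed over time steps
                }
                if (departure < bestDeparture) {
                    bestDeparture = departure;
                    bestK = k;
                    bestBeta = beta;
                }
            }
        }
        return new double[] { bestK, bestBeta, bestDeparture };
    }
}
```

In practice one would average the model output over several independent realizations before computing the departure, since single runs of the stochastic simulation fluctuate.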
, and the optimal parameters in this case are and .,width=491 ] , and the optimal parameters in this case are and .,width=491 ] we have tried this prediction method for many representative countries , and found the cases can be roughly divided into three patterns .the typical example for the first pattern is the united states ( us ) , wherein the curve of aids - patient number has an obvious peak before the year 2000 , and then decreases to a relative low and stable level .the similar behaviors have also been found for many other countries , such as mexico , spain , australia , belgium , thailand , and so on .the common feature of these countries is that their transmission probabilities are all small .this may be because of the high popularization rates of condoms and disposable injectors .china is a particular example , although the prevalence of aids cases are very low , it increases exponentially fast in the early stage with exponent , that is , in the early stage .however , this velocity will be slowed down and the number of aids cases will get steady after the year 2025 . this interesting behavior may due to the particular values of and in china .traditionally , chinese women are not supposed to have sex with a man other than their future spouse , thus the mean degree is very small compared to western - style society . however , since the popularization rate of the usage of condoms in china is very low , the transmission probability in china is much higher than these developed countries . in a word , although no efficient policies in controlling hiv epidemics have been implemented in chinese government or other organizations , the traditional moral sense may protect china from suffering aids .note that , the shapes of and are slightly different , which attributes to the fluctuation of population size .although the numbers of hiv - positive dwellers seems high in some countries , such as us and china , their direct and indirect effects on the demographic structure of the whole population are very weak .if we fixed for us and china from 2006 to 2050 , then the population size ( aged from 15 to 49 ) without hiv / aids will be almost the same as the original prediction results .since the departures can not be observed in plots , we have not shown here .the most serious regions suffering aids are africa ( especially sub - saharan africa ) as well as latin america and caribbean region .an typical example is botswana , which is a relative rich country in africa but has the highest hiv prevalence all over the world .if no additional interventions , the infection will kill more than 50% persons in their prime of life .the ostensible stable behavior after 2020 is on account of the existence of some no - risk people : they adhere to monogamy and do not hit the pipe , thus will not be infected . in network language , these individuals belong to some isolated clusters . in network sir model , these isolated clusters usually come into being as a result of the removal of some individuals .other than us and china , demographic impact of the hiv epidemic in botswana is striking . in figure 12, we compare the predicted population size ( aged from 15 to 49 ) with the no - aids case where we set from 2006 to 2050 . after year 2015, the population size sharply declines , which is the very reason of the decline of in figure 11 . 
Within only ten years, the population falls to half its level. The demographics then stabilize at this new level, since there are a number of no-risk people. Finally, the age distribution in Botswana becomes sandglass-like, since many people in their prime of life (aged 15 to 49) will be killed by AIDS. We are afraid that some other countries in Africa, such as Malawi, Tanzania and Zambia, may face the same danger. In this article, we propose a network epidemic model for HIV epidemics, wherein each individual is represented by a node of the transmission network and the edges are the connections between individuals along which the infection may spread. Motivated by some previous empirical studies on the pattern of sexual contacts, we assume that the sexual activity of each individual, measured by its degree, is not homogeneous but obeys a power-law distribution. Many infections, including HIV/AIDS, can persist in a population at a very low prevalence. This epidemiological phenomenon cannot be explained by previous models with the homogeneous mixing hypothesis, while our model can reproduce this feature due to the heterogeneity of activity. In addition, the model displays a clear picture of hierarchical spread: in the early stage the infection adheres to the high-risk persons, and then diffuses toward the low-risk population. There are two main ingredients complicating the predictions obtained by a dynamical model: the first is the lack of comprehensive and authentic data, and the second is the existence of unexpected interventions, like governmental action against HIV/AIDS epidemics. Right or wrong, we try to predict HIV epidemics by using the present model, and hope it will at least capture some qualitative features. The prediction results show that the development of the epidemics can be roughly categorized into three patterns for different countries: persisting at a stable and low level after a peak in the early stage (US), growing monotonically and then persisting at a stable and low level (China), or infecting a considerable fraction of the population. Which class the HIV epidemic of a given country finally belongs to is mainly determined by the corresponding behavior parameters. The interplay of demographic structure and HIV epidemics is also taken into account. In most cases, the effect of HIV epidemics on the demographic structure is very weak, while for some extreme cases, like Botswana, the population size can be reduced to a half, and the age structure will become sandglass-like since many people in their prime of life (aged 15 to 49) will be killed by AIDS. We believe this work may contribute to understanding the underlying mechanism of HIV epidemic dynamics, since it can naturally reproduce some important observed characteristics of HIV spread that have not been emphasized in previous models. However, it has several shortcomings which should be noted and may be addressed in future work. The first is that the memory limitations and time complexity of the simulation prevent direct studies of very large systems. Therefore, we have to use the normalization method to mimic real countries with huge populations. This size effect may bring additional error into the prediction.
a recently proposed fast algorithm may improve the situation if we have successfully modified some dynamical rules and translated this model into an equal rate - equation form .secondly , this model consider only the heterosexual contacts and perinatal transmission , however , other transmission routes , especially the homosexual contacts and sharing injectors among drug users , are also significant in hiv epidemics . finally , some details are ignored . for example , a recent study indicates the existence of large fertility differentials between hiv - infected and uninfected women , and some empirical studies show that the social networks have community structure , which will affect the epidemic dynamics .this work has been partially supported by the national natural science foundation of china under grant nos .70471033 , 10472116 , 10532060 , 70571074 and 10547004 , the specialized research fund for the doctoral program of higher education ( srfdp no.20020358009 ) , the special research founds for theoretical physics frontier problems under grant no .a0524701 , and specialized program under president funding of chinese academy of science .all the population and demographic data come from the united nations and can be obtained by decompressing the .zip file from http://www.comap.com/ undergraduate / contests / mcm/2006/problems/2006%20icm.zip . the explanation about some used data in this article are as following .class1 n. t. j. bailey , the mathematical theory of infectious diseases and its applications , new york , hafner press , 1975 .r. m. anderson , and r. m. may , infectious diseases of humans , oxford , oxford university press , 1992 . j. d. murray , mathematical biology , springer - verlag , berlin , 1993 . h. m. hethcote , siam review 42 ( 2000 ) 599 .r. m. anderson , r. m. may , m. c. boily , g. p. garnett , and j. t. rowley , nature 352 ( 1991 ) 581 .j. t. boerma , and s. s. weir , journal of infectious diseases 191 ( suppl .1 ) ( 2005 ) s61 .r. pastor - satorras , and a. vespignani , epidemics and immunization in scale - free networks . in : bornholdts , and schuster h g ( eds . )handbook of graph and networks , wiley - vch , berlin , 2003 .t. zhou , z. -q .fu , and b. -h .wang , arxiv : physics/0508096 .p. grassberger , math .biosci . 63( 1983 ) 157 .b. bollobs , random graphs , academic press inc , 1985 .f. liljeros , c. r. rdling , l. a. n. amaral , h. e. stanley , and y. , nature 411 ( 2001 ) 907 . f. liljeros , c. r. rdling , and l. a. n. amaral , microbes and infection 5 ( 2003 ) 189 .a. schneeberger , c. h. mercer , s. a. j. gregson , n. m. ferguson , c. a. nyamukapa , r. m. anderson , a. m. johnson , and g. p. garnett , sexually transmitted diseases 31 ( no.6 ) ( 2004 ) 380 .barabsi , and r. albert , science 286 ( 1999 ) 509 .r. albert , and a. -l .barabsi , rev .74 ( 2002 ) 47 . s. n. dorogovtsev , and j. f. f. mendes , adv .51 ( 2002 ) 1079 .m. e. j. newman , siam review 45 ( 2003 ) 167 .s. boccaletti , v. latora , y. moreno , m. chavez , and d. -u .hwang , phys .424 ( 2006 ) 175 .s. bassetti , w. e. bischoff , and r. j. sherertz , emerging infectious diseases 11 ( 2005 ) 637 .m. small , and c. k. tse , physica a 351 ( 2005 ) 499 .r. pastor - satorras and a. vespignani , phys .rev , lett .86 ( 2001 ) 3200 .r. m. may and a. l. lloyd , phys . rev .e 64 ( 2001 ) 066112 .j. m. hyman , j. li , and e. a. stanley , j. theor .( 2001 ) 227 .j. m. hyman , j. li , and e. a. stanley , math .biosci . 181( 2003 ) 17 .s. m. blower , h. b. gershengorn , and r. m. 
grant , science 287 ( 2000 ) 650 .s. m. blower , e. j. schwartz , and j. mills , aids rev . 5 ( 2003 ) 112. t. c. porco , j. n. martin , k. a. page - shafer , a. cheng , e. charlebois , r. m. grant , and d. h. osmond , aids 18 ( 2004 ) 81 .r. m. anderson , r. m. may , and a. r. mclean , nature 332 ( 1988 ) 228 . s. surasiengsunk , s. kiranandana , k. wongboonsin , g. p. garnett , r. m. anderson , and g. j. p. van griensven , aids 12 ( 1998 ) 775 .r. b. rothenberg , j. j. potterat , d. e. woodhouse , s. q. muth , w. w. darrow , and a. s. klovdahl , aids 12 ( 1998 ) 1529 .j. j. potterat , r. b. rothenberg , and s. q. muth , int .j. std aids 10 ( 1999 ) 182 .see the reports on the global aids epidemic of _ joint united nations programme on hiv / aids _ ( unaids ) from the web site http://www.unaids.org .g. ergn , physica a 308 ( 2002 ) 483 .p. holme , f. liljeros , c. r. edling , and b. j. kim , phys .e 68 ( 2003 ) 056107 .f. chung , and l. lu , annals of combinatorics 6 ( 2002 ) 125 .m. e. j. newman , s. h. strogatz , and d. j. watts , phys .e 64 ( 2001 ) 026118 .d. kitayaporn , j. acquir .. retrovirol .11 ( 1996 ) 77 .a. munoz , c. a. sabin , and a. n.phillips , aids 11 ( suppl a ) ( 1997 ) s69 .g. p. garnett , r. m. anderson , i m a j. math .11 ( 1994 ) 161 .t. d. mastro , and i. de vincenzi , aids 10 ( suppl a ) ( 1996 ) s75 . c. a. sabin , t. hill , f. lampe , r. matthias , s. bhagani , r. gilson , m. s. youle , m. a. johnson , m. fisher , s. scullard , p. easterbrook , b. gazzard , and a. n. phillips , brit . med .j. 330 ( 2005 ) 695 .t. chotpitayasunondh , et al . ,4th international congress on aids in asia and the pacific , manila , october 1997 .k. e. nelson , aids 12 ( 1998 ) 813 .y. moreno , r. pastor - satorras , and a. vespignani , eur .j. b 26 ( 2002 ) 521 .l. k. gallos , and p. argyrakis , physica a 330 ( 2003 ) 117 .a. cliff , and p. haggett , sci .250 ( no.5 ) ( 1984 ) 110 .a. tohamsen , j. theor .biol . 178 ( 1996 ) 45 .m. kuperman , and g. abramson , phys .86 ( 2001 ) 2909 .xiong , phys .e 69 ( 2004 ) 066102 .t. verdasca , m. m. t. da gama , a. nunes , n. r. bernardino , j. m. pacheco , and m. c. gomes , j. theor . biol . 233( 2005 ) 553 .y. hayashi , m. minoura , and j. matsukubo , phys .e 69 ( 2004 ) 016112 .d. j. watts , and s. h. strogatz , nature 393 ( 1998 ) 440 .d. j. watts , small worlds , princeton university press , princeton , 1999 .j. cohen , science 301 ( 2003 ) 1658 .r. w. buckingham , e. meister , and n. webb , int .j. std aids 15 ( 2004 ) 210 .j. x. velasco - hernandez , h. b. gershengorn , and s. m. blower , lancet infect .dis . 2 ( 2002 ) 374 .t. nagachinta , et al . , aids 11 ( 1997 ) 1765 .s. gregson , g. p. garnett , c. a. nyamukapa , t. b. hallett , j. l. c. lewis , p. r. mason , s. k. chandiwana , and r. m. anderson , science 311 ( 2006 ) 664 .r. m. christley , g. l. pinchbeck , r. g. bowers , d. clancy , n. p. french , r. bennett , and j. turner , am .j. epidemiol . 162( 2005 ) 1024 .m. bathlemy , a. barrat , r. pastor - satorras , and a. vespignani , phys .92 ( 2004 ) 178701 .t. zhou , g. yan , and b. -h .wang , phys .e 71 ( 2005 ) 046141 .m. bathlemy , a. barrat , r. pastor - satorras , and a. vespignani , j. theor .( 2005 ) 275 .g. yan , t. zhou , j. wang , z. -q .fu , and b. -h .wang , chin .( 2005 ) 510 .annual reports on aids epidemic in china , from the web site http://www.chinaids.org.cn .a. m. downs , s. h. heisterkamp , j. -b .brunet , and f. f. hamers , aids 11 ( 1997 ) 649 .k , sinka , j. mortimer , b. evans , and d. 
morgan , aids 17 ( 2003 ) 1683 . c. lau , and a. s. muula , croatian medical journal 45 ( 2004 ) 402 .s. hanson , journal of public health 33 ( 2005 ) 233 .j. m. g. calleja , n. walker , p. cuchi , s. lazzari , p. d. ghys , and f. zacarias , aids 16 ( suppl .3 ) ( 2002 ) s3 .l. m. sander , c. p. warren , i. m. sokolov , c. simon , and j. koopman , math .biosci . 180( 2002 ) 293 .k. fylkesnes , z. ndhlovu , k. kasumba , r. mubanga , and m. sichone , aids 12 ( 1998 ) 1227 .j. r. glynn , j. pnnighaus , a. c. crampin , f. sibande , l. sichali , p. nkhosa , p. broadbent , and p. e. m. fine , aids 15 ( 2001 ) 2025 . y. moreno , j. b. gmez , and a. f. pacheco , phys . rev .e 68 ( 2003 ) 035103 .y. moreno , m. nekovee , and a. vespignani , phys .e 69 ( 2004 ) 055101. y. a. amirkhanian , j. a. kelly , a. a. kukharsky , o. i. borodkina , j. v. granskaya , r. v. dyatlov , t. l. mcauliffe , and a. d. kozlov , aids 15 ( 2001 ) 407 .a. s. wade , c. t. kane , p. a. n. diallo , a. k. diop , k. gueye , s. mboup , i. ndcye , and e. lagarde , aids 19 ( 2005 ) 2133 . t. x. chu , and j. a. levy , cell research 15 ( 2005 ) 865 .c. b. mocoy , s. h. lai , l. r. metsch , s. e. messiah , and e. wei , annals of epidemiology 14 ( 2004 ) 535 .j. j. c. lewis , c. ronsmans , a. ezeh , and s. gregson , aids 18 ( suppl .2 ) ( 2004 ) s35 .m. girvan , and m. e. j. newman , proc .usa 99 ( 2002 ) 782 .g. palla , i. dernyi , i. farkas , and t. vicsek , nature 435 ( 2005 ) 814 .liu , and b. -b .hu , europhys . lett .72 ( 2005 ) 315 .g. yan , z. -q .fu , j. ren , and w. -x .wang , arxiv : physics/0602137 .
HIV/AIDS epidemics, scale-free networks, mathematical modeling, demography. PACS: 89.75.-k, 87.23.Ge, 05.70.Ln
It is fascinating: when a liquid jet interacts with a ball, a kind of _juggling_ can be achieved. The ball can be trapped in air by its interaction with a fluid jet; it bobs up and down around a stable position. The experiment is very simple, and everyone should try it: it is a straightforward demonstration of the beautiful flow behaviour of fluids. The water jet impacts the bottom part of the ball, and the jet turns into a flowing fluid film that surrounds the ball. Owing to the change in thickness from the jet to the film, the mean film velocity is higher than that of the jet, resulting in a lower-pressure region which in turn makes the ball position stable. To successfully produce levitation, the liquid in the film needs to be evacuated (so that its weight does not break the force balance between the momentum flux of the jet and the weight of the ball). This is the reason why the liquid jet has to be slightly inclined. The inclination and the flowing film induce rotation of the ball. The collision of the film with itself (flowing around the ball) produces an outgoing jet which, combined with the ball rotation and surface ripples, creates a beautiful series of films, threads and drops that are continuously ejected. Some juggling tricks such as trap, catch, switch, rise up and swirl are shown in the video. We are eager to study this flow in more detail; to our knowledge, there has not been a methodical study of the flow shown here. Two sample videos are http://ecommons.library.cornell.edu/bitstream/1813/8237/2/juggling_hi_9oct.mp4[video 1, hr] and http://ecommons.library.cornell.edu/bitstream/1813/8237/4/juggling_low_9oct.mpg[video 2, lr].
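As a back-of-the-envelope sketch (our own estimate, not a result presented in the video), the levitation condition can be written as a balance between the vertical momentum flux delivered by the jet and the weight of the ball:

```latex
% rho = liquid density, A = jet cross-section, v = jet speed,
% m = ball mass, g = gravity, theta = inclination of the jet from the vertical.
\begin{equation}
  \rho A v^{2}\cos\theta \;\approx\; m g
  \qquad\Longrightarrow\qquad
  v \;\approx\; \sqrt{\frac{m g}{\rho A \cos\theta}} .
\end{equation}
% Illustrative numbers (assumed, not measured): for a table-tennis-sized ball of a few grams
% and a water jet a few millimetres in radius, this gives v of the order of 1 m/s.
```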
This fluid dynamics video is an entry for the Gallery of Fluid Motion at the 66th Annual Meeting of the Fluid Dynamics Division of the American Physical Society. We show the curious behaviour of a light ball interacting with a liquid jet. Under certain conditions, a ball can be suspended in a slightly inclined liquid jet. We studied this phenomenon using a high-speed camera. The visualizations show that the object can be `juggled' over a variety of flow conditions. A simple calculation shows that the ball remains at a stable position due to a Bernoulli-like effect. The phenomenon is very stable and easy to reproduce.
among the key insights in quantum information that follow from the statistical properties of quantum states in high - dimensional hilbert spaces is that the overwhelming majority of pure bipartite states are very close to being maximally entangled .more precisely , consider pure , normalized , quantum states , where and are hilbert spaces of finite dimensions and respectively , and for a given , let be its reduced density matrix in ; that is , where denotes the partial trace over the hilbert space .a standard measure of the entanglement of is the so - called _ entanglement entropy _ , defined as the von neumann entropy , so that when is separable and when is maximally entangled .a well known result , motivated by an initial estimate of lubkin , later conjectured by page , and finally proved by foong and kanno ( see also ) , is that the average entanglement entropy over all pure states , when , is given by which for gives , the maximal entanglement entropy , up to corrections of order ( here , and henceforth , random states are understood to be uniformly sampled on the hypersphere ; that is , according to the pushforward of the haar measure under the action of on a fixed normalized vector of ) .since then , a better understanding of the ubiquity of near - maximal entanglement in the bipartite setting has been a subject of considerable interest , fuelled in part by its relevance to several important applications in quantum information ( see e.g. , ) , such as random quantum circuits , superdense coding , random quantum channels equilibrium thermodynamics and thermalization , among others .one aspect of the problem that has earned particular attention is the statistics of the spectrum of the partial density matrix , the non - zero part of which corresponds to the so - called _ schmidt spectrum _ , the squares of the coefficients appearing in the schmidt ( i.e. , singular value ) decomposition of the state .when is sampled uniformly , as defined previously , the resulting ensemble of random reduced density matrices is an ensemble that is also known in the random matrix theory ( rmt ) literarure as the fixed trace wishart - laguerre ( ftwl ) ensemble with dyson index .the joint probability density function ( pdf ) for the unordered schmidt spectrum induced by the uniformly - sampled pure state ensemble was first obtained by lloyd and pagels and is given by where it is understood that , and ( henceforth we shall use the symbol for spectral probability densities and the standard for density matrices ) .this pdf contains all the necessary information to derive , at least in principle , the statistical properties of any function of the quantum state that is invariant under local unitary transformations .indeed , a fair amount of progress has been made in obtaining exact results for several quantities of interest , such as moments and correlations of traces of powers of and its entropy , the average eigenvalue pdf , the smallest eigenvalue pdf and the largest eigenvalue pdfs . 
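For concreteness, the two classical results referred to above can be recalled in their standard form (unordered Schmidt eigenvalues, dimensions m <= n, overall normalization constant left implicit):

```latex
% Joint density of the unordered Schmidt spectrum (Lloyd-Pagels), unitary case:
\begin{equation}
  P(\lambda_1,\dots,\lambda_m)
  \;\propto\;
  \delta\!\Big(1-\sum_{i=1}^{m}\lambda_i\Big)\,
  \prod_{i=1}^{m}\lambda_i^{\,n-m}
  \prod_{1\le i<j\le m}(\lambda_i-\lambda_j)^{2},
  \qquad \lambda_i \ge 0 .
\end{equation}
% Average entanglement entropy of a uniformly sampled pure state (Page's formula):
\begin{equation}
  \langle S \rangle \;=\; \sum_{k=n+1}^{mn}\frac{1}{k}\;-\;\frac{m-1}{2n}
  \;\simeq\; \ln m - \frac{m}{2n}, \qquad 1 \ll m \le n .
\end{equation}
```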
for general values of and exact formulas tend to be rather complicated and are therefore of limited practical use .however , considerable simplifications emerge in the asymptotic limit , with the ratio fixed , using well - known asymptotic techniques in rmt .in particular , it follows that the asymptotic eigenvalue density ( aed ) for the rescaled eigenvalues , satisfies a marenko - pastur law (see also ) , as in the unconstrained wishart ensemble , with parameters given by the ratio of dimensions : }(x)dx , , \ ] ] where , and }(x) ] .reference also provides useful asymptotic results for the distribution of the renyi entanglement entropies ( including the von neumann entropy ) , and the distribution of the largest schmidt eigenvalue . motivated by the general question of how random bipartite states concentrate around the maximally entangled state , in this paper we address a closely related problem :suppose two random bipartite states and are uniformly sampled independently from , and their corresponding partial density matrices and are computed in .for appropriately large dimensions , with , we expect .our question is then : how close are the states and from each other ?more specifically , we will examine the eigenvalue statistics for the difference matrix from which various distance measures can be calculated .in fact , the ensemble of matrices defines a unitarily invariant random matrix ensemble , a fact that automatically implies that and can not be `` too close '' , given the well - known phenomenon of eigenvalue repulsion that characterizes unitarily invariant ensembles , among others .moreover , unitary invariance allows us to use powerful tools to derive exact , though cumbersome , expressions for the joint pdf of the eigenvalues for finite , as well as a relatively simple formula for the aed and its moments in the asymptotic limit with a fixed ratio .thus , the purpose of this paper is twofold : first , we derive the exact expression for the joint eigenvalue pdf in the finite case , and in the asymptotic limit , obtain closed form expressions for the aed and its moments ; second , using these results , we derive the almost sure asymptotic values of the trace distance and the operator norm distance , both of which are especially relevant for applications to quantum information theory .the structure and summary of results of the paper is as follows : in section [ section_results ] we present our main results in the form of three new theorems ( theorems [ exact_theorem ] , [ disttheorem ] , [ momentsth ] ) and two corollaries ( corollaries [ corollary_op_norm ] , [ corollary ] ) , and the remaining sections are devoted to the proofs of these results .the first result is a closed - form formula for the joint eigenvalue pdf for the difference matrix ensemble , which follows from applying a powerful technique of christandl _ _ et al.__ , the so - called _ derivative principle _ , which we reproduce here as theorem [ derivativethm ] .the derivative principle provides a connection between the joint eigenvalue pdf and the joint pdf of the diagonal matrix elements of a unitarily invariant random matrix ensemble that is then exploited in our first main result , theorem [ exact_theorem ] , to obtain a closed form expression for the joint eigenvalue pdf in terms of associated laguerre polynomials , valid for arbitrary dimensions and with .the resulting formula is quite complicated , but may nevertheless prove useful for small and ; in particular , the specialization to the case and general 
yields a relatively tractable formula . section [ section_exact ] provides a proof of these results , including an alternative proof of the derivative principle that uses standard random matrix theory techniques .the next set of results are concerned with the asymptotics of the eigenvalue pdf of the difference matrix .theorem [ disttheorem ] gives the aed for for all fixed values of the ratio ( here , the constraint is relaxed ) , a result that is proved in section [ section_asymptotic ] using standard results from free probability theory .the aed shows an interesting transition at the critical value . for values of ,the aed has positive support in the region , with , whereas for values of , the aed has positive support in the two regions defined by , with , and a dirac point measure at the origin .our other main result , presented in theorem [ momentsth ] , is an expression for the absolute moments ( including moments of complex order ) of the aed , again for all values of , a result that is proved in section [ section_moments ] using carlson s theorem .the two corollaries to our main results involve the asymptotic almost sure behavior of two distance measures between the independent partial density matrices and , namely the operator norm distance ( corollary [ corollary_op_norm ] ) , which is obtained from the upper support point of the aed , and the trace norm distance ( corollary [ corollary ] ) , which follows from theorem [ momentsth ] when specialized to the first absolute moment .let and be two hilbert spaces of dimensions and respectively .now let and be two normalized random pure states in the tensor product hilbert space , which are uniformly and independently sampled , and define and as the corresponding reduced density matrices for the system and .finally , define the difference matrix as for this difference matrix ensemble , our main results are concerned with the exact joint eigenvalue probability pdf , where are the ( unordered ) eigenvalues of , and the aed , the single - eigenvalue marginal of in the asymptotic limit , with fixed ratio .we begin with the exact joint eigenvalue pdf for all , but with the provision that .the result is presented in the form of two theorems , of which the first is a recasting of a previous result and the second one is the original result . both theorems will be proved in section [ section_exact ] .theorem [ derivativethm ] establishes an extremely useful connection between the joint pdf of the eigenvalues of a unitarily - invariant random matrix ensemble and the joint pdf of the matrix diagonal elements of the same ensemble , which in general is considerably simpler to compute .this result is known in more general form from the theory of duistermaat - heckman ( dh ) measures which are measures on the dual lie algebra of a lie group with a hamiltonian action on a symplectic manifold , obtained from the push - forward of the liouville measure along the corresponding moment map .it concerns the connection between the so - called _ non - abelian _ dh - measure for the hamiltonian action of a compact connected lie group and the _ abelian _ dh - measure for its maximal torus .recently , christandl _ et al _ have used this connection , under the name of the _ derivative pinciple _, for the computation of the eigenvalue pdfs of reduced density matrices of multipartite - entangled states . 
asthis derivative principle is surprisingly not as well known as it should be in the context of random matrices , we recast it here and prove it in the following section using the standard language of random matrix theory : [ derivativethm ] let be a random matrix drawn from a unitarily invariant random matrix ensemble , the joint eigenvalue pdf for and the joint pdf of the diagonal elements of .then where is the vandermonde determinant and the differential operator .this theorem proves to be particularly useful to compute the joint eigenvalue pdf of the sum of independent random matrices drawn from unitarily invariant ensembles , given that the joint pdf of the diagonal elements is the convolution of the respective joint pdfs .this is precisely the case at hand for the ensemble of random matrices , where and are the partial density matrices of two independent random bipartite states .theorem [ exact_theorem ] gives a contour integral expression ( or what equivalently can be cast as a constant term identity ) for the joint pdf of the diagonal elements for our difference ensemble ( an alternative expression in terms of lauricella generalized hypergeometric functions is given at the end of section [ psi_subs ] ) : [ exact_theorem ] let be the ensemble of difference matrices as defined at the beginning of the section .then , the joint pdf of diagonal elements of is given by the contour integral where the contour encircles the origin , are the associated laguerre polynomials , the surface delta function and is the indicator function on the dimensional region in defined by the minkowski difference set , where is the standard probability simplex .theorems [ derivativethm ] and [ exact_theorem ] provide a systematic , though not necessarily practical , way of computing the joint eigenvalue pdf for the difference of two random partial density matrices , via equations and . in fig .[ d3n3 ] we show the resulting pdf for the case and .the pdfs are supported within a convex polytope in the hyperplane of , with vertices at , , where the are the standard unit vectors in .they are symmetric under reflection ( ) and permutation of the eigenvalues .the multi - lobe shape of the pdf ( see fig . [ d3n3 ] ) is a signature of the well - known eigenvalue repulsion phenomenon , and is a consequence of the vandermonde determinant in , which forces the pdf to vanish whenever two of the eigenvalues are the same .of for the case , , as a function of two independent eigenvalues and ( hence ) . ,width=336 ] armed with the joint eigenvalue pdf , the average density of eigenvalues can be obtained by integrating out over eigenvalues .unfortunately , there does not appear to be any particularly simple expressions for these marginal pdfs , except in the case , where the exact result for the average eigenvalue pdf for any can be expressed in terms of a gauss hypergeometric function ( see appendix b ) as : ,\ ] ] where the proportionality constant is in figs .[ n2m10 ] and [ n3m3 ] we show how our results for and agree with empirical distributions obtained from independent samplings of . from the empirical distributions shown in figs .[ n20m35 ] and [ n50m70 ] for higher values of ( with ) , it becomes evident that the average eigenvalue density will show exactly peaks , which as grows become progressively less pronounced , tending to a smooth single - peaked , finitely - supported , distribution in the asymptotic limit . 
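The empirical histograms referred to in the figures can be reproduced with a short Monte Carlo experiment. The sketch below is our own illustration and makes two simplifying assumptions: it samples real Gaussian coefficient matrices (the limiting eigenvalue density is insensitive to this choice, although finite-size details differ from the complex case treated above), and it rescales the eigenvalues by the reduced dimension m; it relies on the Apache Commons Math eigensolver rather than any code used for the figures.

```java
import java.util.Random;

import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.EigenDecomposition;
import org.apache.commons.math3.linear.RealMatrix;

public class DifferenceSpectrumDemo {

    /** Sample one reduced density matrix rho = G G^T / tr(G G^T), with G an m x n Gaussian matrix. */
    static RealMatrix sampleReducedDensity(int m, int n, Random rng) {
        double[][] g = new double[m][n];
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                g[i][j] = rng.nextGaussian();
            }
        }
        RealMatrix gm = new Array2DRowRealMatrix(g, false);
        RealMatrix w = gm.multiply(gm.transpose());      // unnormalized reduced density matrix
        return w.scalarMultiply(1.0 / w.getTrace());     // normalize to unit trace
    }

    public static void main(String[] args) {
        int m = 50, n = 70, samples = 200;
        Random rng = new Random(1234L);
        for (int s = 0; s < samples; s++) {
            RealMatrix diff = sampleReducedDensity(m, n, rng)
                    .subtract(sampleReducedDensity(m, n, rng));
            double[] eigenvalues = new EigenDecomposition(diff).getRealEigenvalues();
            for (double lambda : eigenvalues) {
                // Rescaling by m (assumed convention) makes the eigenvalues O(1); histogram externally.
                System.out.println(m * lambda);
            }
        }
    }
}
```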
0.4 for low dimensions ( red line ) .the blue bars represent the normalized histograms obtained numerically from random samples for figs .[ n2m10 ] and [ n3m3 ] . for figs .[ n20m35 ] and [ n50m70 ] the number of samples was .,title="fig : " ] 0.4 for low dimensions ( red line ) .the blue bars represent the normalized histograms obtained numerically from random samples for figs .[ n2m10 ] and [ n3m3 ] . for figs .[ n20m35 ] and [ n50m70 ] the number of samples was .,title="fig : " ] + 0.4 for low dimensions ( red line ) .the blue bars represent the normalized histograms obtained numerically from random samples for figs . [ n2m10 ] and [ n3m3 ] . for figs .[ n20m35 ] and [ n50m70 ] the number of samples was .,title="fig : " ] 0.4 for low dimensions ( red line ) .the blue bars represent the normalized histograms obtained numerically from random samples for figs .[ n2m10 ] and [ n3m3 ] . for figs .[ n20m35 ] and [ n50m70 ] the number of samples was .,title="fig : " ] considerable simplifications ensue in the asymptotic limit , with a fixed ratio , if we concentrate on the single - eigenvalue marginal of the joint eigenvalue pdf , to which the empirical eigenvalue density almost surely converges asymptotically . as we show in section [ section_asymptotic ] , unitary invariance of the independent matrices and implies that two ensembles satisfy the so - called freeness condition asymptotically , from which it follows that the aed of can be obtained from the free convolution of the aeds of and , which are given for all values of in terms of the marenko - pastur law , shown in equation .our main result involves the asymptotic density function of the rescaled eigenvalues , which we denote by : [ disttheorem ] let be the ensemble of difference matrices as defined at the beginning of the section , and define the constants where is real and positive and purely imaginary for and real and positive for .then , in the limit , with the ratio fixed , the asymptotic rescaled eigenvalue density is , almost surely , for , and for , where and note that in expressions and we can alternatively write for the regions of positive support - \frac{x^2+\frac{1}{3}(2-c)^2}{w[x , c]}\right),\ ] ] where fig .[ figthmas ] shows how the aed depends on the parameter .the most striking feature of this behavior is the fact that for a gap in the eigenvalue density arises for , with a point distribution appearing at with weight .the appearance of the point distribution for can be understood from the fact that if , the difference is of rank at most , and generically we expect the ranges of and to be linearly independent when , in which case the fraction of zero eigenvalues of should be .also , from the asymptoic behavior of random subspaces ( see collins ) , we expect that when ( ) , the ranges of and should become asymptotically orthogonal and thus the non - zero eigenvalues of to be approximately the union of the non - zero eigenvalues of and .the aed of should then be the mixture of the aeds of and , which follow the marenko - pastur law ( or a reflected version of it ) .thus , the existence of a gap for reflects the gap that already exists in the marenko - pastur law in a region in the neighborhood of the origin , and the closing of the gap for may be understood as a consequence of strong mixing due to the fact that when the ranges of and can not be linearly independent subspaces . an obvious variant of our problem is to consider differences of the density matrices with different weights , _i.e. 
_ , .the extension of the results of theorem [ disttheorem ] for this more general class of ensembles can also be obtained in closed form , as is shown in appendix a. 0.4 for high dimensions ( red line ) , compared to normalized histograms obtained numerically from random samples . in fig .[ n100m20 ] the values at 0 are not shown but constitute a fraction equal to of the total eigenvalues , as predicted ., title="fig : " ] 0.4 for high dimensions ( red line ) , compared to normalized histograms obtained numerically from random samples . in fig .[ n100m20 ] the values at 0 are not shown but constitute a fraction equal to of the total eigenvalues , as predicted ., title="fig : " ] + 0.4 for high dimensions ( red line ) , compared to normalized histograms obtained numerically from random samples . in fig .[ n100m20 ] the values at 0 are not shown but constitute a fraction equal to of the total eigenvalues , as predicted ., title="fig : " ] 0.4 for high dimensions ( red line ) , compared to normalized histograms obtained numerically from random samples . in fig .[ n100m20 ] the values at 0 are not shown but constitute a fraction equal to of the total eigenvalues , as predicted ., title="fig : " ] our final theorem concerns the absolute moments of the aed ( eqs . and ) .as shown in section [ section_moments ] , the general theorem for the absolute moments follows from carlson s theorem , which warrants an analytic extension of the closed - form expression for the even integer moments that is obtained from the laurent series of the cauchy transform of .our main result gives the absolute complex moments ( or equivalently the mellin transform for the density of ) : [ momentsth ] let be the aed in theorem [ disttheorem ] and with . then the complex absolute moments of , are where is the gauss hypergeometric function .the main application of our results is in quantifying the distance between the two random states and using distance measures derived from the spectrum of . in particular , the almost sure behavior of two distance measures , can be readily obtained as corollaries of theorems [ disttheorem ] and [ momentsth ] . from the almost sure convergence of the empirical eigenvalue distributions to the aed , the maximum absolute value of the eigenvalues of almost surely converges to the , where is the upper support point of the aed .hence , as a corollary to theorem [ disttheorem ] we have : [ corollary_op_norm ] under the conditions of theorem [ disttheorem ] , the operator norm of the difference , almost surely behaves as note that for small , the limiting value simplifies to .this may be compared to the leading behavior of , the operator norm of the difference between ( say ) the random state and its average , the totally mixed state . 
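the qualitative content of theorem [ disttheorem ] and of the two corollaries can be checked by direct sampling at moderately large dimension. in the sketch below the ratio c is read as (dimension of the traced-out factor) / (dimension of the system), which is our assumption and the interpretation consistent with the rank argument given above; no attempt is made to reproduce the closed-form limits, only to estimate the corresponding quantities.

```python
# sampling check of the qualitative content of theorem [disttheorem] and of the two
# corollaries. assumption: c = (dimension of the traced-out factor) / (system dimension).
import numpy as np

rng = np.random.default_rng(2)

def reduced(d, n, rng):
    g = rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))
    psi = g / np.linalg.norm(g)
    return psi @ psi.conj().T

d = 400
for c in (0.25, 0.75, 2.0):
    n = int(round(c * d))
    zero_frac, gap, op_norm, trace_dist = [], [], [], []
    for _ in range(10):
        eig = np.linalg.eigvalsh(reduced(d, n, rng) - reduced(d, n, rng))
        nonzero = np.abs(eig)[np.abs(eig) > 1e-12]
        zero_frac.append(np.mean(np.abs(eig) < 1e-12))
        gap.append(d * nonzero.min())               # smallest nonzero rescaled |eigenvalue|
        op_norm.append(np.abs(eig).max())           # operator norm distance
        trace_dist.append(0.5 * np.abs(eig).sum())  # trace distance
    print(f"c = {c}: zero fraction = {np.mean(zero_frac):.3f} "
          f"(rank bound: {max(0.0, 1 - 2 * c):.3f}), "
          f"smallest nonzero d*|eig| = {np.mean(gap):.2f}, "
          f"op-norm dist = {np.mean(op_norm):.3f}, trace dist = {np.mean(trace_dist):.3f}")
```

for c below one half both the zero-eigenvalue fraction and the spectral gap are clearly visible, and both disappear above the critical value, in line with the transition described above.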
using the upper support point of the marenko - pastur law ,this leading behavior can be shown to be , so that asymptotically for .similarly , for , we find that both and behave as .next , we turn to the trace distance , which is defined as where is the trace norm .a straightforward consequence of the almost sure convergence of the aed ( theorem [ disttheorem ] ) is therefore that in the limit ( with constant ) , the trace distance tends to a limiting value thus , the limiting trace distance follows as a corollary of theorem [ momentsth ] for , together with hypergeometric identities ( see subsection [ absmom1 ] ) : [ corollary ] under the conditions of theorem [ disttheorem ] , the trace distance between and , almost surely behaves as as a function of ( blue line ) vs numerical values ( red dots ) . ] as shown in fig .[ pnormseven ] , the trace distance between two random states asymptotically tends to the maximum value of as , which can be understood as resulting from non - zero eigenvalues of each of which is of magnitude .similarly , as , the trace distance goes to zero like . comparing this leading behavior with that of the distance to the maximally mixed state , , which can be obtained from the first moment of the marenko - pastur law, we find that , so that asymptotically for , which is the same relation obeyed by the operator norm distance .we then proceed with the proofs of theorems [ derivativethm ] and [ exact_theorem ] .as mentioned earlier , despite the fact that theorem [ derivativethm ] is a known result , it is not widely known within the rmt community .we have therefore chosen to prove it here employing random matrix theory techniques .consider an ensemble of random hermitian matrices that is known to have a unitarily - invariant pdf ; that is , for any unitary matrix , where is the standard volume element for hermitian matrices and and are the real and imaginary parts of ( with independent variables for and for ) .now let be the matrix fourier transform of , } \, p(z ) dz\ ] ] where henceforth we shall assume that is also a hermitian matrix . from the unitary invariance of , it follows that is a symmetric function of the eigenvalues of ; hence ,we shall write the characteristic function as , where . we then define as the ordinary fourier transform of : let us now show that is in fact the pdf for the diagonal elements of . for this , note the identity where and the matrix integral will be understood to be over the subspace of hermitian matrices with respect to the volume element .the pdf can then be expressed as the integral } \phi_z(k ) dk.\ ] ] now break up as , where and the off - diagonal part of .the marginal pdf for the diagonal elements is then given by where .exchanging the order of integration , we use the fact that where is the off - diagonal part of the matrix .hence , where is .however , if is diagonal , the are also its eigenvalues . hence is the fourier transform of and is therefore equal to .now , since is hermitian , we can parametrize it as , where is unitary and is the diagonal matrix of the eigenvalues of . a standard result from random matrix theory statesthe volume element can be written as where is shorthand for the normalized haar measure on the group of unitary matrices . from thisit follows that if is unitarily invariant , the eigenvalue pdf can be written as using eqs . 
and , and similarly applying to the measure , we can then express as } \right\rangle_u \phi_z(\vec{\kappa})\ , d^n\!\kappa\ , , \ ] ] where is the diagonal matrix of the eigenvalues of and } \right\rangle_u \equiv \int e^{i\operatorname{tr}\left[u \lambda_k u^\dagger \lambda_z \right ] } \ , du\ , .\ ] ] this is the well - known harish - chandra - itzykson - zber integral : } \right\rangle_u = i^{-n(n-1)/2}\left(\prod\limits_{p=1}^{n-1}p!\right)\dfrac{\det(\exp[i \kappa_j\lambda_k]_{1\leq j , k\leq n})}{\delta(\vec{\kappa})\delta(\vec{\lambda})}.\ ] ] inside the integral , we can use permutational symmetry of the integrand to replace {1\leq j , k\leq d}) ] , to obtain finally , replacing the arguments in by the partial derivative operators acting on , we obtain relation between and , since is the fourier transform of .next we turn to the derivation of in theorem [ exact_theorem ] .first , suppose , where and are independently drawn from unitarily invariant ensembles .then , the pdf of the diagonal elements of is simply the convolution of and . specializing this result to the problem of interest ,we take and , where and are sampled independently from the ftwl ensemble , with eigenvalue pdf where and is the indicator function on the probability simplex , in the ftwl ensemble , a random reduced density matrix can be expressed as where is an matrix with independent gaussian complex entries .it is then straightforward to show that the pdf of the diagonal elements is the symmetric dirichlet distribution next , recalling that , we have hence , the region of integration , as well as the region of support of , is set by the support conditions of the in the integrand .the integration region for integral is the one that lies in the intersection between the probability simplex and the shifted simplex we will call this region .since the diagonal elements of a density matrix lie in the probability simplex , the diagonal elements of the difference between two random density matrices must lie in the minkowski difference set , or explicitly , which is the -dimensional convex polytope on the hyperplane with vertices at the points for , where are the standard unit vectors in .the condition is precisely the condition such that and hence the support condition for . to further characterize the integration region ,let be the facet of with vertices and likewise be the facet of with vertices , with .if is a point in then the distances and of to and respectively , are given by where are the coordinates of ( with ) .this implies that if , then the facet is closer to than the facet , and conversely , if then the facet is closer to than the facet .hence , is the convex polytope bounded by the facets where let us then parameterize points in a facet by where if and if .when two facets and meet , the intersection is described by the condition using these constraints and solving for the coefficients , the vertices of the integration region can be shown to be given by where thus , the integration region is also a regular simplex , obtained from the standard simplex by a shift and uniform rescaling by the factor .note that the support condition implies that , and hence that on , can be written as a symmetric function of the ; namely , introducing the change of variables to undo these transformations , and combining eqs . and, we arrive at an expression for in terms of an integral on the standard probability simplex : where is defined as in equation [ surface_delta ] . 
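as a numerical aside, the symmetric dirichlet form of the diagonal-element pdf quoted above is easy to confirm by direct sampling: with rho = g g^dagger / tr(g g^dagger) and g a d x n complex gaussian matrix, each diagonal entry is a normalized gamma(n) variable, so the diagonal is jointly dirichlet(n, ..., n). the following sketch (ours) compares low-order moments against that prediction.

```python
# quick check that the diagonal elements of a random reduced density matrix are jointly
# dirichlet-distributed: with rho = g g^dagger / tr(g g^dagger), g a d x n complex gaussian
# matrix, each rho_ii is a normalized gamma(n) variable, hence dirichlet(n, ..., n) jointly.
import numpy as np

rng = np.random.default_rng(4)
d, n, samples = 3, 5, 50_000

diag = np.empty((samples, d))
for k in range(samples):
    g = rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))
    w = g @ g.conj().T
    diag[k] = np.real(np.diag(w)) / np.trace(w).real

# dirichlet(n, ..., n) with d components: E[x_i] = 1/d, Var[x_i] = (d-1)/(d^2 (n d + 1)),
# Cov[x_i, x_j] = -1/(d^2 (n d + 1)) for i != j.
print("mean  :", diag.mean(axis=0), "  expected", 1 / d)
print("var   :", diag.var(axis=0), "  expected", (d - 1) / (d**2 * (n * d + 1)))
print("cov01 :", np.cov(diag[:, 0], diag[:, 1])[0, 1], "  expected", -1 / (d**2 * (n * d + 1)))
```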
expanding the terms in the integrand, we can use the multinomial beta function to arrive at the expression where it is understood that the run over all values such that the arguments in the factorials are non - negative .we can now use the fact that and noting that the resulting sums under the integral are expressible in terms of the associated laguerre polynomials as we finally obtain expression .it is also worth noting that a closed - form expression for is possible in terms of the so - called lauricella generalized hypergeometric function of type , defined as where is the rising factorial .therefore , we can alternatively express as next proceed with the proof of theorem [ disttheorem ] for the aed .the approach we follow in this section again exploits the fact that the random matrix is the difference of two independent random matrices drawn from unitarily invariant ensembles . in the asymptotic limit , it is well - known that any two such matrices satisfy the so - called _ freeness condition _ underpinning voiculescu s theory of _ free probability _ for non - commuting variables . in a manner analogous to the case of the sum ( or difference ) of two independent commuting random variables ,the aed of can be shown to be given by the so - called _ free convolution _ of the aeds of and .first , let us briefly review the aspects of free probability that will be relevant to our analysis ; the interested reader is referred to nica and speicher s book on the topic for further details . as mentioned earlier , the ensemble of partial density matrices where is uniformly sampled from , is the unitarily invariant ftwl random matrix ensemble . in the asymptotic limit , with fixed , the normalized traces of integer powers of converge , almost surely , to the moments of a well - defined spectral measure , _ the asymptotic eigenvalue density _ ( aed ) : the aed for the ftwl ensemble was obtained in the context of random partial density matrices by page following the coulomb gas analogy . with the knowledge of , the theory of free probability allows us to calculate the aed for the sum ( or difference ) of two random matrices and sampled from the ftwl ensemble .free probability is a probability theory for non - commutative random variables satisfying a generalization of the ordinary notion of independence .specializing to the case of random matrices , we say that two random matrices and are said to be free if for all , where .a well - known theorem of voiculescu states that if and are sequences of matrices such that their asymptotic eigenvalue densities and exist , and if is a sequence of haar unitary random matrices , then and are asymptotically free as . from this theoremit follows that any two random matrices , each independently sampled from a unitarily - invariant ensemble , are asymptotically free .if and are free random variables with aeds and , the aed of the sum satisfies a generalized notion of convolution , known as _ free convolution _ , and denoted by . for a given aed , it is convenient to define its cauchy transform which is analytic on . 
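the operational meaning of the freeness condition quoted above can be seen numerically before any transform machinery is introduced: for the two independent, unitarily invariant matrices at hand, the alternating mixed moment of the centred variables should vanish as the dimension grows, while the corresponding "classical" product of second moments stays of order one. the sketch below (ours) works with the rescaled, traceless matrices d(rho - 1/d) so that all quantities are of order one.

```python
# operational check of asymptotic freeness for the two independent, unitarily invariant
# reduced density matrices. with A = d(rho1 - I/d), B = d(rho2 - I/d), the alternating
# mixed moment (1/d) tr(ABAB) should vanish as d grows (the freeness condition quoted
# above), while the "classical" product (1/d)tr(A^2) * (1/d)tr(B^2) stays of order one.
import numpy as np

rng = np.random.default_rng(5)

def centred_rescaled(d, n, rng):
    g = rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n))
    psi = g / np.linalg.norm(g)
    rho = psi @ psi.conj().T
    return d * (rho - np.eye(d) / d)

c = 1.0
for d in (50, 200, 800):
    n = int(c * d)
    vals = []
    for _ in range(20):
        a, b = centred_rescaled(d, n, rng), centred_rescaled(d, n, rng)
        mixed = np.trace(a @ b @ a @ b).real / d
        classical = (np.trace(a @ a).real / d) * (np.trace(b @ b).real / d)
        vals.append((mixed, classical))
    mixed_avg, classical_avg = np.mean(vals, axis=0)
    print(f"d = {d}: <(1/d)tr(ABAB)> = {mixed_avg:+.4f}   "
          f"<(1/d)tr(A^2)><(1/d)tr(B^2)> = {classical_avg:.4f}")
```

the decay of the alternating moment with d is the numerical face of asymptotic freeness; the free convolution used below is exact only in this limit.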
in free probability theory ,this function plays the role of a moment - generating ( or characteristic ) function .indeed , the moments can be recovered from the laurent expansion of for sufficiently large : and using the stiltjes inversion formula , the aed can be recovered from according to : closely related to the moments are the so - called _ free cumulants _ , which in analogy with ordinary cumulants are those combinations of the moments that are additive under the sum of two free random variables .the free cumulant generating function is the so - called transform , which is connected to the cauchy transform through the functional equations and the free cumulants are obtained from the power series of about the origin : for the sum of two free random variables , additivity of the free cumulants implies additivity of the respective transforms : thus , the aed of the sum of two free random variables can be obtained using once the cauchy transform is retrieved from by means of .we now specialize the previous discussion to the case of interest .for a random partial density matrix playing the role of either or , the aed was first computed by page ( see also the result from c. nadal et al . ) . using the well - known 2d coulomb gas analogy, he showed that for the rescaled aed is with .using the fact that the non - zero eigenvalues of the partial density matrices of a bipartite pure state are equal , it is not difficult to show that this result extends to the result of equation for all values of .as pointed out by nechita , this aed corresponds to the well - known marenko - pastur law that arise from the infinite free poisson process . in terms of the variable , the free cumulants for the marenko - pastur distribution are well - known and given by ( see e.g. , ) .using , the expression for the transform is the rescaled aed of is given by the reflected density with free cumulants , and hence transform .we therefore have , from , that the transform of the rescaled aed of is making use of relation , a cubic equation for can be obtained : the three roots for , indexed by , can be given in terms of trigonometric functions as +\dfrac{c-2}{3cz},\ ] ] where and and is the principal branch of the function with branch cut along the negative real axis . the actual function built piecewise from the roots in such a way that is analytic in the region , and decays as as . as mentioned previously , the solution depends on whether or , so each case must be discussed separately .as our interest is in the aed obtained via the stieltjes inversion formula , we only need to concentrate in obtaining the appropriate expression for . in the limit , the root decaying as is . to examine the behavior of this function in the real axis we can express the in this root as where is the sign function and is the ordinary inverse sine function with domain ] is an operator that extracts the coefficient of -th power of on the taylor series expansion of a function . 
expanding as a double sum we get and applying the ] , thus the transforms are : where as in section [ section_asymptotic ] .the transform for the aed of will be the sum of the transforms , explicitly following the same procedure as in section [ sectiondiff ] we obtain an equation for the cauchy transform equation is cubic on , analizing the possible roots with the same criteria of analyticity as in the symmetrical case we find the desired aed where and where the parameters , , and are functions of and and have cumbersome expressions which we will not write here .+ + the structure is very similar to the result obtained in with the difference that the limits $ ] are changed with .although it is a complicated expression for , the absolute maximum limit can be found in the case .this limit is equal to . in figs .[ compararion ] and [ comparation ] we compare our result with numerical simulations , note that we have only considered here the case . in this appendixwe derive average eigenvalue density for the case .the first step is to note that for , the two eigenvalues satisfy the condition , hence there is only one independent eigenvalue . using this constraint , eqs . and , with , the pdf for the independent eigenvaluecan be written as where expanding the second factor in the integral in a binomial expansion , and using the beta integral , can be expressed in terms of a hypergeometric function using the standard hypergeometric identity can further be simplified to substituting into equation , we finally obtain after simplification .satoshi adachi , mikito toda , and hiroto kubotani .random matrix theory of singular values of rectangular complex matrices i : exact formula of one - body distribution function in fixed - trace ensemble ., 324(11):22782358 , november 2009 .roland speicher .asymptotic eigenvalue distribution of random matrices and free stochastic analysis . in gerold alsmeyer and matthiaslwe , editors , _ random matrices and iterated random functions _ , volume 53 of _ springer proceedings in mathematics & statistics_. springer berlin heidelberg , berlin , heidelberg , 2013 .
we investigate the spectral statistics of the difference of two density matrices , each of which is independently obtained by partially tracing a random bipartite pure quantum state . we first show how a closed - form expression for the exact joint eigenvalue probability density function for arbitrary dimensions can be obtained from the joint probability density function of the diagonal elements of the difference matrix , which is straightforward to compute . subsequently , we use standard results from free probability theory to derive a relatively simple analytic expression for the asymptotic eigenvalue density ( aed ) of the difference matrix ensemble , and using carlson s theorem , we obtain an expression for its absolute moments . these results allow us to quantify the typical asymptotic distance between the two random mixed states using various distance measures ; in particular , we obtain the almost sure asymptotic behavior of the operator norm distance and the trace distance .
silicon is a promising lens material for millimeter wavelength observations because it can be machined ; it has a high index of refraction , which is optically advantageous ; and it has a high thermal conductivity , allowing for straightforward cooling of the lenses to cryogenic temperatures .however , the high index ( ) in the submillimeter region leads to a reflection at each silicon / vacuum interface of approximately ^ 2\approx30\%$ ] per surface .this is prohibitively large , especially for multi - lens cameras .nevertheless , the benefits of silicon have motivated the development of antireflection coatings , where the referenced ar solutions are in ir and thus easier than mm - wave bands due to the thickness of the ar layer .this paper details the development and testing of a simple antireflection coating that reduces reflection to per lens at the design wavelength while maintaining transmission at 300ghz . at a fixed wavelengththe ideal , normal - incidence antireflection coating for a substrate of index in vacuum has an index of refraction of and is thick . in our application, we are building lenses for the atacama cosmology telescope ( act ) camera at 145 , 217 , and 265ghz to measure the fine scale anisotropy of the cosmic microwave background . in the following ,we focus on the 150ghz band , for which the ideal antireflection coating has m and index .the coating is a machined piece of cirlex^^ polyimide glued to silicon with stycast^^ 1266 epoxy and lord ap-134 adhesion promoter . for the curved lens surface ,a piece of cirlex approximately 1 cm thick is machined to the curved shape and then held in a teflon gluing jig shaped to match the lens surface while the epoxy cures .the low - frequency ( ) dielectric constant and loss reported in the kapton polyimide data sheet suggest that polyimide and silicon could be combined in an ar configuration . to ensure accurate modeling and to test sample dependent effects, we measured the dielectric properties with fourier transform spectrometers ( fts ) , summarized in table [ matprop ] . * * .dielectric properties of the materials for ar coating.[matprop ] [ cols="<,<,>",options="header " , ] of the coated 4mm - thick silicon flat ( flat 7 ) , both modeled ( black ) and measured on the fts ( gray ) .the measurement is the ratio of a sample to a reference spectrum .the lower curve shows that the difference ( measurement minus model ) is within 5% of zero through the well - measured range .the high transmission near 133 and 400ghz is due to the ar coating being and thick .the slow reduction in with increasing frequency is due to increasing loss in the coating and glue .this sample was made before precise values of the index of cirlex and stycast 1266 were known .thus , the center of the passband window , 133ghz , is 15ghz below our target frequency . ] rather than trying to interpret the lens results , we measured the transmission of the two coated flats labeled flat 6 and 7 in table [ befafter ] . 
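for orientation, the single-surface reflection and the ideal coating parameters quoted above follow from one-line formulas; the sketch below evaluates them for a typical literature value of the silicon index (n of about 3.42, i.e. a dielectric constant near 11.7), which need not coincide with the measured value in table [ matprop ].

```python
# back-of-the-envelope numbers for the single-surface reflection and the ideal quarter-wave
# coating discussed above. the silicon index used here (n ~ 3.42, epsilon ~ 11.7) is a
# typical literature value, not the measured value from the paper's table.
n_si = 3.42
f = 150e9                   # design frequency [Hz]
lam = 3e8 / f               # free-space wavelength [m]

r_surface = ((n_si - 1) / (n_si + 1))**2
n_ar = n_si**0.5            # ideal single-layer AR index, sqrt(n)
t_ar = lam / (4 * n_ar)     # ideal AR thickness: quarter wave inside the coating

print(f"reflection per bare surface  : {r_surface:.1%}")          # ~30%
print(f"ideal AR coating index       : {n_ar:.2f}")               # ~1.85
print(f"ideal AR thickness at 150GHz : {t_ar*1e6:.0f} micron")    # ~270 micron
```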
figure [ data ] shows the transmission spectra for one of these samples along with a model .the measurement is the ratio of a sample to a reference spectrum , which are averages over two and six spectra , respectively .the model is _ not _ a fit to the coated transmission data but is determined instead by the cirlex , stycast , and silicon properties given in table [ matprop ] and by measurements of the component thicknesses .the one exception is the silicon loss .the coated flats have somewhat lower resistivity ( between 1300 and -cm , as measured by the vendor ) than the uncoated silicon samples ( all specified to exceed -cm ) ; both sets have poorly constrained . to handle the uncertain silicon loss ,we have treated the resistivity of the sample as an unknown and varied it to fit the measured transmission .a finished design of a silicon antireflection coating for any application requires a complete model of the system .incident angle range , frequency bandwidth , and polarization all affect the optimal coating thickness . it is helpful to begin the process with a few estimates and approximate guidelines , however , and we offer some here . first , using equation [ dielconst ] and recalling that power loss is one per radian of phase , we find that absorption loss in 10k-cm silicon should equal 1% per centimeter , scaling as . for normally incident light , the best ar - coating thickness can be estimated by requiring that the optical path through both materials ( coating and glue ) equal one - quarter wave .that is , comparing this rule against the calculated reflection , we find that it overestimates the optimal ar thickness by approximately m for a glue layer m thick .the difference increases quadratically with glue thickness , but the approximate expression is adequate for any reasonable size of the glue layer .m of glue and enough cirlex so that equals one - quarter of the vacuum wavelength .solid lines indicate normal incidence ; dashed lines are for incident angle . the lines labeled _ a5 _ and _ a20 _show the absorption loss .the _ r20 _lines give the reflection from the 20 mm lens ; the results are not visibly different for reflection from the 5 mm lens .absorption increases for oblique angles and at higher frequencies .we show the expected loss at room temperature in 5000-cm silicon .cryogenically , absorption loss should be reduced . ]figure [ fig.estimate ] shows the predicted absorption and reflection loss calculated for a pair of realistic coated lenses , both at and incident angles .the model assumes lenses made of 5000-cm silicon , 5 or 20 mm thick , and unpolarized light . over most of the frequency rangeit should be possible to achieve better than 2% reflection per lens and less than 10% absorption particularly if the lens is thin or silicon resistivity is greater than the -cm assumed here .it might be possible to use the stycast 1266 alone as an ar coating .its index of 1.68 is lower than the ideal 1.85 , and the loss is approximately double the loss of cirlex .however , cutting a single mold to shape the curing epoxy saves three machining steps when compared with the method for cutting cirlex coatings described in this article . 
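a thin-film model of the kind behind the curve in fig . [ data ] can be reproduced with the standard characteristic-matrix (transfer-matrix) method at normal incidence. the indices and layer thicknesses below are illustrative placeholders rather than the fitted values of the actual samples, and absorption is neglected, so this is a sketch of the calculation rather than the paper's model.

```python
# characteristic-matrix (transfer-matrix) sketch of normal-incidence transmission through a
# coated silicon flat: vacuum / cirlex / glue / silicon / glue / cirlex / vacuum.
# indices and thicknesses are illustrative placeholders; loss is neglected.
import numpy as np
import matplotlib.pyplot as plt

def layer_matrix(n, d, lam):
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(freq_hz, stack):
    lam = 3e8 / freq_hz
    m = np.eye(2, dtype=complex)
    for n, d in stack:
        m = m @ layer_matrix(n, d, lam)
    b, c = m @ np.array([1.0, 1.0])      # incident and exit media are vacuum (unit admittance)
    return abs(2.0 / (b + c))**2         # T = 4 / |B + C|^2 for lossless layers in vacuum

stack = [(1.84, 255e-6),   # cirlex coating   (assumed index / thickness)
         (1.68, 25e-6),    # stycast glue layer (assumed)
         (3.42, 4.0e-3),   # 4 mm silicon flat
         (1.68, 25e-6),
         (1.84, 255e-6)]

freqs = np.linspace(50e9, 450e9, 2000)
plt.plot(freqs / 1e9, [transmission(f, stack) for f in freqs])
plt.xlabel("frequency (GHz)")
plt.ylabel("transmission")
plt.show()
```

the fringe spacing is set by the 4 mm silicon substrate, and the envelope peaks where the coating stack is an odd multiple of a quarter wave, which is the behaviour seen in fig . [ data ].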
at 150 and 300ghz ,the optimal stycast thickness are 300 and 140 m , which offer transmission of 90% and 88% , respectively .this compares unfavorably with the 95% and 93% transmission offered by a cirlex - coated lens .the stycast - coated lens would reflect approximately 1.5% in both frequencies , three times the reflection expected from a cirlex - coated lens .still , a simpler stycast - only coating might suffice in some applications if the thickness and shape could be controlled well enough .we have developed and tested a technique for antireflection coating silicon lenses at cryogenic temperatures at millimeter wavelengths .flat samples show 1.5% reflection and 92% transmission at the design frequency .the remaining 6.5% is attributable to absorption loss , which will decrease upon cooling .the authors are very grateful to john ruhl for sharing his fourier transform spectrometer and to norm jarosik for operating a vector network analyzer .we thank the princeton university physics department machine shop , especially glenn atkinson , for developing techniques to machine and glue the cirlex coatings .sarah marriage tested different ways of gluing silicon to polyimide ; ted gudmundsen and adrian liu repeatedly cycled coated silicon samples between 77 and 300k .we are grateful to our colleagues on the act collaboration and to asad aboobaker , andrew bocarsly , mark devlin , simon dicker , phil farese , jeff klein , jeff mcmahon , amber miller , mike niemack , suzanne staggs , and zachary staniszewski for many helpful discussions .this work was supported by the u.s .national science foundation through awards ast-0408698 for the act project and phy-0355328 for the princeton gravity group .n. g. ugras , j. zmuidzinas , and h. g. leduc , `` quasioptical sis mixer with a silicon lens for submillimeter astronomy , '' in _ proceedings of the 5th international symposium space terahertz technology , _ 125 .
we have developed and tested an antireflection ( ar ) coating method for silicon lenses at cryogenic temperatures and millimeter wavelengths . our particular application is a measurement of the cosmic microwave background . the coating consists of machined pieces of cirlex glued to the silicon . the measured reflection from an ar coated flat piece is less than 1.5% at the design wavelength . the coating has been applied to flats and lenses and has survived multiple thermal cycles from 300 to 4k . we present the manufacturing method , the material properties , the tests performed , and estimates of the loss that can be achieved in practical lenses .
entanglement has been used in many different protocols of quantum information theory , from teleportation and key distribution to secret sharing . in all these protocols , entanglement is a resource which is completely consumed by measurements of the parties involved and should be generated anew for next rounds of protocol .it is true that generating and maintaining entanglement between several particles is very difficult . yet with the developments in realizing quantum repeaters ,creating and maintaining long distance entanglement between stationary quantum systems becomes more feasible in the ( possibly distant ) future .it is thus rewarding to imagine if this entanglement can used in a different way , that is as a carrier of information , which modulates and transmits quantum information , in the same way as carrier waves in classical communication systems carry modulated messages . in this new application, we can imagine that arbitrary quantum states are uploaded ( entangled ) to the carrier by the sender and downloaded ( disentangled ) from the carrier by the receiver(s ) , in such a way that at the end of the protocol , the carrier remains intact and ready for use in next rounds .the role of the quantum carrier and its entanglement with the messages will be to hide messages from adversaries , hence the term secure quantum carrier .+ this new way for using entanglement was first reported in for quantum key distribution and then was developed for a simple secret sharing scheme in . in this paperwe want to develop it further and provide quantum carriers for secret sharing schemes for general threshold schemes .needless to say , our aim is not to develop new quantum secret sharing schemes , but to develop a quantum carrier for distributing in a secure way the already known quantum secrets among the parties . our emphasis is thus on the very concept of secure quantum carrier and the way it can be used in quantum communication protocols . in the particular context of quantum secret sharing , as we will see it allows us to generate broader threshold schemes than those of . + in particular we have to stress the difference with the works in where it was shown that graph state formalism can act as a framework for unifying some of the secret sharing protocols , albeit not for general threshold structures . the idea of was to encode the secret in some local actions of the dealer on a vertex of a suitably chosen weighted graph state .local measurements of the players on different vertices of this graph , could then reveal the secret to authorized subsets of players . 
in this way, it was shown that threshold schemes of the type and can be implemented in a unified way for various forms of channels interconnecting the dealer and the players .therefore the works of belong to the same class as in , in which entanglement is fully consumed due to measurements of the players .+ it is to be noted that while the idea of a fixed quantum carrier has an appeal for communication , a price should be paid for its implementation : it requires a larger number of particles to be entangled at the beginning and end of the protocol , but at the end of each round a fixed amount of entanglement remains in the form of a carrier .nevertheless , it is worth to develop such a concept from theoretical side and hope that it will someday become close to reality .+ we remark that although we present our analysis for secure communication of basis states or classical information , the idea also works for sending arbitrary quantum states . in the simplest protocol discussed in the beginning of the paperwe explicitly show this , although we will not repeat it for other general schemes .+ the structure of this paper is as follows . in section [ general ]we put forward the basic requirements that a quantum carrier should satisfy , in section [ keycarrier ] we explain the basic method in the simplest possible setting , that is quantum key distribution between two parties .then in section [ bkcarrier ] we briefly explain the use of quantum carrier for the simplest secret sharing scheme , where one dealer , wants to share a secret between two different players who have equal right for retrieving the message collaboratively . for this reason ,this is called a secret shring scheme .this is then generalized to the scheme where a secret is to be shared between players and all the players can retrieve the message collaboratively .finally in section [ kncarrier ] we define the carrier for the threshold scheme where any of the players can retrieve the secret , although collaboration of all of the players is needed for the continuous running and security of the protocol .we end the paper with conclusion and outlook .suppose that a quantum carrier has been set up for a specific communication task , i.e. for a quantum key distribution between alice and bob or a secret sharing scheme , between alice as the dealer and bob and charlie as the players .this quantum carrier should have the following properties : * there should be simple and local uploading and downloading operators , so that the legitimate parties can upload and download messages to or from this carrier . *while in transit , the messages should be hidden from third parties so that no intercept - resend strategy can reveal the identity of the message .* eve should not be able to entangle herself to the quantum carrier without being detected by the legitimate parties .this property is to prevent eve from conducting more complex attacks .once such criteria are met , we say that a secure quantum carrier has been set up for this communication task . 
in the rest of this paper we present quantum carriers for various communication tasks .we should stress again that these requirements are purely from the theoretical point of view , the main difficulty will obviously be to maintain the carrier for a long enough time so that it can be used for passing many quantum states before the entanglement decays and becomes useless .the first task that we discuss is the simple communication between two parties , where alice wants to send a sequence of bits and , a classical message , to bob .alice encodes the classical bits and into states and ( the eigenbases of the operator ) .the quantum carrier is and refer to the hilbert spaces of alice and bob respectively .the hilbert space of the message is denoted by a number ( since one qubit is being transmitted ) .the uploading operator , used by alice , is a cnot operator which we denote by , the downloading operator is , i.e. with control port by bob and target port the message .+ consider now a classical bit which is encoded to the quantum state and is to be transferred from alice to bob .alice performs the local operation on the state , turning this state into where .while in transit the message is in the state and hence inaccessible to eve . at the destination, bob can download the message from the carrier by his local operation , which disentangles the message and leaves the carrier in its original form , ready for use in the next round .the fact that bob downloads exactly the same state which has been uploaded by alice is due to the perfect correlation of the states of alice and bob in the carrier .alice can also use this carrier for sending quantum states to bob .linearity of the uploading and downloading operations allows alice and bob to entangle and disentangle a quantum state to and from the carrier .+ to conduct a somewhat complex attacks on the communication , eve can entangle herself to the carrier and try to intercept - recend the message .to do this the only possibility for her entanglement is where and are two un - normalized states of eve s ancilla .any other form of entanglement , i.e. one in which a term like is also present in the above expansion , will destroy the perfect correlation between the sequence of bits transmitted between alice and bob . in case that the two parties are using the carrier for sending classical bits , alice and bobcan publicly compare a subsequence of bits to detect the presence of eve s entanglement . in case that they are using the carrier for sending quantum states ,alice can insert a random subsequence of basis states into the main stream of states and ask bob to publicly announce his results of measurements of these specific states .this strategy also works in other more complicated schemes presented later , namely the and the schemes .+ in order to prevent this type of entanglement , we now use a property of the carrier ( [ ( 1,1)carrier ] ) which turns out to be important in all the other forms of quantum carriers that we will introduce later on .this is the invariance property of the carrier ( [ ( 1,1)carrier ] ) under hadamard operations , that is at the end of each round , when the message is downloaded and the carrier is clean , both alice and bob act on their share of the carrier by hadamard operations . 
in the absence of eve, the carrier will remain the same , however in presence of eve , ( who supposedly acts on her ancilla by a unitary ) the _ contaminated _ ( entangled with the ancialla of eve ) carrier ( [ ( 1,1)carrier ] ) will turn out to be where and .the second term in the carrier will certainly introduce anti - correlations into the basis states communicated between alice and bob , unless and hence which means that eve can not entangle herself to the carrier .in this scheme , alice wants to share a secret with bob and charlie so that they can retrieve the message only by their collaboration .the first quantum protocol for this scheme was designed in where it was shown that measurements of a ghz state in random bases by the three parties can enable them to share a random secret key .the secure carrier for this protocol was first developed in .its characteristic feature is that two types of carriers , should be used which are turned into each other by the hadamard operations .the two carriers are used in the odd rounds 1 , 3 , 5 , and used in the even rounds , 2 , 4 , 6 , the two types of carriers are turned into each other by the local hadamard action of the players at the end of each round , this property is crucial in checking the security of the protocol and detection of a eve attempts who may entangle herself with the carrier and intercept the secret bits . + in the odd and even rounds ,the secret bit is encoded differently as where . while in the odd rounds , the receivers each receive a copy of the sent bit , in the even rounds they need each other collaboration for its retrieval .therefore alice can use the odd rounds to put random stray bits and put the message bits in the even rounds . + in our opinion this property of the protocol , that is , a rate of one - half in sending message bits is analogous to discarding one - half of the measured bits in measurement - based protocols .however the bonus here is that alice can send pre - determined non - random messages .+ note that in both even and odd rounds the carrier can be written as where we have dropped the subscripts even and odd to stress the uniformity .the running of the protocol , i.e. the uploading and downloading operations , are based on the following readily verifiable identities which we state for odd and even rounds separately without writing the subscripts `` odd '' and `` even '' explicitly : + for odd rounds : , [ identitiesodd ] and [ identitieseven ] ) show how the encoded secret can be downloaded by alice and downloaded by bob and charlie in different rounds . in the odd rounds , the uploading operator is simply , and in the even rounds it is . 
in both types of roundsthe downloading operator is .these string of operators , that is , uploading , carrying and downloading is depicted as follows : the above equation shows that any state can be encoded as and transferred by the same operations .so this protocol can also be used for quantum state sharing in a secure way .the problems of security of the carriers and the impossibility of eve s entanglement with them , has been analyzed in detail in .the main points are that i ) the secret state is transferred from alice to bob and charlie in a mixed state and hence carries no information to outsiders , and ii ) the carriers in even and odd rounds are turned into each other by local hadamard actions of alice , bob and charlie , a property which is possible only in the absence of any entanglement with eve .any entanglement will have a detectable trace on a substring of transferred states , which will be used to detect the presence of eve .the previous protocol can be generalized to the scheme , where all the players should retrieve the secret collaboratively . in this casethe encoding of a bit to quantum states for odd and even rounds is therefore is an even parity state , and is an odd parity state. then the protocol runs as in the case with the obvious generalization of the carrier and the uploading and downloading operators .in fact in both types of rounds the carrier can be written as where stands for the encoding in ( [ encodingnn ] ) and we have suppressed the subscripts `` even '' and `` odd '' for simplicity .the uploading operator will be for the odd rounds and for the even rounds .the downloading operator will be the same for both rounds and will be .+ to show that the protocol runs in exactly the same way as in the ( 2,2 ) scheme , we need to prove the basic properties of the encoded states and the carrier . to this end , we first note from ( [ encodingnn ] ) that the following relations hold , and [ identitieseven ] ) to the ( n , n ) case . to this endwe start from the simple properties to obtain the only other non - trivial relation which we should prove is the following relation for the even rounds which is necessary for the downloading operation , ( for the odd rounds , the involved states are product states and the relation is obvious ) : to show the validity of this relation we first use the following simple property of cnot operation , where the first bit is the control and the second bit is the target qubits : second we use these properties and ( [ encodingnn ] ) and the abbreviation to obtain this completes the description and validity of the uploading and downloading procedures for the ( n , n ) scheme. 
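two algebraic facts used above are easy to confirm numerically for small n: the ghz-type carrier and the uniform even-parity state are mapped into each other by local hadamards (the two carriers of the odd and even rounds), and the even/odd-parity encodings of the bit are invisible to any proper subset of players while the parity of all n outcomes reveals it. the sketch below checks the n = 3 case; the assumption that the carrier is a ghz-type state with one qubit for alice and one per player mirrors the (2,2) case and is ours, and only the states are checked, not the uploading and downloading circuit itself.

```python
# numerical check, for n = 3, of two ingredients of the (n,n) scheme described above:
# (i) local hadamards map the ghz-type carrier to the uniform even-parity superposition,
# (ii) the parity encodings of the bit s are invisible to any proper subset of players,
#      while the parity (xor) of all n outcomes equals s by construction.
import numpy as np
from functools import reduce
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def basis_state(bits):
    return reduce(np.kron, [np.eye(2)[b] for b in bits])

def parity_state(n, parity):
    """uniform superposition of all n-bit strings with the given parity."""
    strings = [b for b in product((0, 1), repeat=n) if sum(b) % 2 == parity]
    vec = sum(basis_state(b) for b in strings)
    return vec / np.linalg.norm(vec)

n = 3
# (i) carrier relation: H^(x m) maps (|0...0> + |1...1>)/sqrt(2) to the even-parity state
m = n + 1                                   # one carrier qubit for alice plus one per player (assumed)
ghz = (basis_state([0] * m) + basis_state([1] * m)) / np.sqrt(2)
assert np.allclose(reduce(np.kron, [H] * m) @ ghz, parity_state(m, 0))

# (ii) encoded states: s = parity of the n shares; any n-1 players see the maximally mixed state
for s in (0, 1):
    e = parity_state(n, s)
    rho = np.outer(e, e)
    reduced = np.trace(rho.reshape(2**(n - 1), 2, 2**(n - 1), 2), axis1=1, axis2=3)
    assert np.allclose(reduced, np.eye(2**(n - 1)) / 2**(n - 1))
print("carrier and parity-encoding checks passed for n =", n)
```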
+ in passing we note that the form of the carrier ( [ carriern ] ) for this secret sharing scheme is the same as in the simplest cryptographic protocol,([(1,1)carrier ] ) .we will see in the next section that the appropriate carrier for the threshold scheme where is an odd prime , is of the same form .we will explain the reason for this general structure in the last section , however before that , we explain in detail the carrier for the secret sharing scheme .there are situations where there are players and any subset of or more members can retrieve the secret , while subsets of smaller size can not .this is called a threshold structure in which all the players have equal weight .one can also imagine situations where different players have different weights .this leads to a general access structure , according to which the players form a set of say members and an access structure is a collection of subsets of .the subsets in ( and their unions ) are called authorized subsets and the members of each authorized subset should be able to retrieve the key by their collaboration , while the subsets which are not in , called adversaries , can not retrieve the secret .it is known that once a threshold scheme is solved , then other more general access structures will be possible .for example if and , then we can run a threshold scheme giving 2 shares to and one shares to and each . +the threshold scheme was first generalized to the quantum domain in , where quantum states could also be shared between parties so that any of the players could retrieve the quantum state collaboratively . to be in conformity with the no - cloning theorem, had to be smaller than .we will deal in detail with the case where is a prime number .other cases where are obtained by a simple modification of the scheme .for example a scheme like is implemented by running the scheme as usual , but with alice playing the role of the other receivers in addition to her usual role .the idea of was to adapt the polynomial code , first developed in , to the quantum domain .note that in , quantum mechanics was exploited only for message splitting and not for message distribution .later it was shown in that graph states can be used for combining the two parts of the problem in one scheme , for some threshold schemes , namely for , and schemes . herewe show that the idea of quantum carrier can be used to provide a method of secure distribution for all secrets of the types provided in .let us first see what a polynomial code is .consider a symbol .classically if we want to share this symbol as a secret between parties , called , , so that any members of the parties can retrieve this symbol and fewer than members can not , we can define a real polynomial of degree in the form and evaluate this polynomial on distinct points , and .we can then give the member of the set , the value .it is a simple fact that a polynomial of degree is completely determined by its values on distinct points .so any members can compare their values and determine the full functional form of the polynomial and hence the real number . to make the process simple and less prone to errors , we can substitute the real number field with the field ( where is prime ) . 
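the classical polynomial scheme just described takes only a few lines over gf(p). in the sketch below (ours) the secret is placed in the constant coefficient and recovered by lagrange interpolation at x = 0; this is the usual shamir convention and may differ from the coefficient convention fixed by the quantum construction that follows.

```python
# classical shamir-style sketch of the polynomial secret sharing described above: the secret
# sits in a degree-(k-1) polynomial over gf(p) and any k shares recover it by lagrange
# interpolation at x = 0. conventions (which coefficient carries the secret) are ours.
import random

def make_shares(secret, k, n, p):
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    return [(x, poly(x)) for x in range(1, n + 1)]          # n distinct nonzero points

def recover(shares, p):
    # lagrange interpolation at x = 0 over gf(p)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

p, k, n = 7, 3, 5
shares = make_shares(secret=4, k=k, n=n, p=p)
print("any k shares recover the secret:", recover(shares[:k], p))     # 4
print("a different k-subset agrees    :", recover(shares[-k:], p))    # 4
```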
for the points in can take simply .hence we can encode the symbol into a product state .let us now sum such a product state over all possible , and obtain the code in order to see how to find a suitable carrier for this code , and indeed in order to show that the carrier for this code falls within the same class of carriers considered so far , we have to prove further algebraic properties of this code . to do this, we cast it in the form of a calderbank - shor - steane ( css ) code .let be a prime number .with addition and multiplication modulo , the set will be a field .for any , is a a vector space over , i.e. is the set of all -tuples where .let be a linear code , i.e. a subspace of , spanned by linearly independent vectors .thus is isomorphic to consider the case where the dual code of , i.e. the code space spanned by all the vectors which are perpendicular to , contains and has one more dimension .let be spanned by the vectors .thus we have in the codes that we will introduce , the special vector is normalized so that we therefore have we will now define the following special chalderbank - steane - shor ( css ) code , whose codewords correspond to the classes of the quotient space : thus one dit is coded into the -qudit state which is to be distributed between the receivers , each component of the vector being given to one participant . + in the appendix , we will show the vectors with components satisfy all the properties listed in ( [ eiej ] ) .explicitly we have it is now very simple to see that the css code thus constructed is nothing but the polynomial code in ( [ encodes ] ) . to see this we note that the vector has the following expansion andhence the components will be therefore giving each components to one player , the code will be which is exactly the polynomial code ( [ poly ] ) .now that the css structure of the polynomial code is revealed , many of its properties can be proved in a simple way . in particularwe need the following property which plays an important role in the security of the carrier . +* * lemma:**_the set of all codes ( [ codewords ] ) , is invariant under the joint multi - local hadamard operation , i.e. where is a root of unity , .+ _ we use a well - known property of the css codes according to which , where is the number of cosets . to adapt this general relation to our case, we note that in our case , , and hence .moreover we make the following substitutions , and note that putting all this together proves the lemma .+ the quantum carrier is constructed as follows : the important property of this carrier is that it is invariant under the joint action of hadamard operators , performed by alice and all the other players .using ( [ hadamard ] ) proves this assertion : in order to see how alice uploads secrets onto the carrier and how the players download the secret from the carrier we need some algebraic properties of the code . +* definition : * for any vector define the following string of cnot operators performed by alice : also define the following multi - local operator for bob s : * theorem : * _ the operator , for as in ( [ ecomponents ] ) , uploads the message into the carrier by alice and the operator downloads the message from the carrier by bob s , leaving the carrier in its original form . _+ we fist show that for any state and any message this is seen by expansion of in components and noting that from ( [ proofa2 ] ) , we see that therefore alice uploads ( entangles ) the message to the carrier by the local operation . 
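for the smallest instance of the construction, p = 3 with three players (the (2,3) code worked out explicitly at the end of the paper), both the lemma and the access structure of the code can be verified directly. the sketch below takes the standard qutrit polynomial code |e_s> proportional to sum_b |b, b+s, b+2s> (mod 3), which should match the later example up to relabelling, and assumes that the qudit "hadamard" is the discrete fourier gate; both are our reading of the elided formulas.

```python
# direct checks, for p = 3, of (i) the hadamard (fourier) covariance lemma and (ii) the
# access structure of the polynomial code. the code |e_s> ~ sum_b |b, b+s, b+2s> (mod 3)
# and the fourier gate F_{jk} = w^{jk}/sqrt(p), w = exp(2*pi*i/p), are assumed conventions.
import numpy as np

p = 3
w = np.exp(2j * np.pi / p)
F = np.array([[w**(j * k) for k in range(p)] for j in range(p)]) / np.sqrt(p)

def ket(digits):
    v = np.zeros(p**len(digits), dtype=complex)
    v[int("".join(map(str, digits)), p)] = 1.0
    return v

def code_state(s):
    v = sum(ket([b % p, (b + s) % p, (b + 2 * s) % p]) for b in range(p))
    return v / np.linalg.norm(v)

E = [code_state(s) for s in range(p)]
F3 = np.kron(F, np.kron(F, F))

# (i) the code space is mapped onto itself by the local fourier gates
proj = sum(np.outer(e, e.conj()) for e in E)
for e in E:
    assert np.allclose(proj @ (F3 @ e), F3 @ e)

# (ii) access structure: one share alone carries no information about s ...
for s in range(p):
    rho = np.outer(E[s], E[s].conj()).reshape(p, p**2, p, p**2)
    share1 = np.trace(rho, axis1=1, axis2=3)          # reduced state of the first qutrit
    assert np.allclose(share1, np.eye(p) / p)
# ... while any two shares determine s: outcomes are values of f(x) = b + s*x at distinct
# points, so s = (f(x') - f(x)) / (x' - x) mod p, true by construction of code_state above.
print("fourier-covariance and single-share checks passed for p =", p)
```

the covariance under the local fourier gates is exactly what lets all parties apply their "hadamard" at the end of each round without disturbing the carrier, while the single-share check already anticipates why an eavesdropper who intercepts fewer than k shares in transit learns nothing.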
for the other part , we need the following to show this we note that from this last equation we find that which means that the players , can download the message from the carrier and put the carrier back to its original form .+ the basic steps of the quantum secret sharing are now clear .a carrier in the form of the state is shared between alice and all the receivers , .alice operates by his operator on her part of the carrier and the code state and thus entangles the code state to the carrier . at the destinationthe players act on the carrier and the code space by and download the state . from this code state ,no less than players can retrieve the secret symbol .the carrier is now ready for transferring the next code state .in this section we discuss the security of state transmission via the carrier and analyze two types of attacks performed by eve .the security of the retrieval procedure of the symbol from the encoded state need not concern us and has been discussed elsewhere .obviously the analysis of security depends on the resources available to eve .we consider two types of attacks in the following two subsections .this type of analysis applies to all the schemes mentioned up to now . in this type of attackwe assume that eve is not entangled with the carrier , but she has access to all the message qudits sent from alice to the players .after uploading the message , the full state is given by while in transit the data qudits are in the state which is an equal mixture of all the encoded states .therefore even if eve have access to all the data in transit and intercepts all the qudits sent to all the players , she can not acquire the secret or the secret state , since she only finds and equal mixture of all the encoded states .+ at the destination , the receivers , act by the inverse local operator and according to ( [ proofb3 ] ) , disentangle the code from the carrier .they can then retrieve the classical secret by collaboration of each other according to the access structure .once retrieved , we resort to the arguments of to show that this encoded state is secure against cheating of groups of unauthorized players . + therefore in the simplest intercept attack, eve does not acquire any information about the secret symbol .we now consider more general attacks .we now assume that in addition to access to the message channel , eve can entangle herself to the carrier .let us see if she can do appropriate action for intercepting the encoded state and not an equal mixture .consider the first round where the symbol is encoded to and sent by alice .the state of the carrier and the message after alice uploading operation will be where stands for alice , for all the players and for the message qudits , .eve can now set her ancilla qudits to , and then do the following operations : acts by which transforms the state to when alice and the players execute the first round of the protocol to the end and the players extract the state , eve acquires nothing from the symbol , however she has achieved in entangling herself with the carrier in the form in the second round , when the symbol is being sent and the full state is of the form eve performs the following sequence of operations : 1 ) acts by to produce the state 2 ) measures the ancillas to acquire , and 3 ) acts by to put back the full state in the form ( [ psi2 ] ) . 
when alice and the players finish the second round , they acquire the symbol , but eve also acquires the symbol , while she is still entangled with the carrier in the form ( [ psi1 ] ) and is ready to do the same attack for the next round . in this way she is able to retrieve the sequence of symbols this sequence enables her to find the whole message by checking different choices for the original symbol .this shows that if there is a possibility for eve s entanglement with the carrier , she is able to successfully intercept all the data .+ in order to prevent this , alice and the players act on their respective qudits of the carrier , by hadamard operations .as we have seen above , this operation leaves the pure form of the carrier invariant .let us see if this operation is able to detect an entanglement of eve , i.e. a contamination of the carrier .it is clear that if the carrier contains terms which are not of the form , then there will be mismatches between what alice uploads and what the players download .this mismatch can easily be detected by public announcements of some stray bits which are deliberately inserted into the stream of the symbols . in order to escape this detection ,the only admissible form of eve s entanglement with the carrier is where are a collection of un - normalized states of eve .the method of detection in this case is the same as in the simple case , discussed after equation ( [ abe ] ) .also in view of the discussion in subsection [ rollle ] , only the legitimate parties or even a subset of them are required to publicly announce the results of their measurements .+ in order to prevent this type of apparently undetectable entanglement , we note that the pure carrier is invariant under the action of hadamard operators , while this contaminated carrier is not . in order to retain the correlations, eve may operate on her ancilla by a suitable operator to change the above state into in order to retain the original form of correlations between alice and the players in the carrier , the operator must satisfy the following property for some states . putting one finds that is independent of and hence a rearrangement shows that acting on the left hand side by the inverse hadamard operation , one finds that , which means that all the states are equal to each other and hence the state can not be an entangled state .therefore eve can not entangle herself to the carrier without being detected .+ note that we have assumed that eve is an external agent and all the players have run the protocol as they should , i.e. have performed their hadamard operation on the carrier at the end of each round . in principle one can assume that a subgroup of players collaborate with eve , i.e. perform other operations than hadamard in order to ease undetectable entanglement of eve with the carrier . 
for example consider a scheme with players , and .the question is whether one of the players , say is capable to collaborate with eve to retrieve the secret symbol ?to be honest , we have not been able neither to devise a successful attack of this type nor a method for its prevention .it is part of the protocol that all the players should perform their cnot s on the state in order to download the state , but once this state is downloaded , then no less than players can collaborate to retrieve the symbol as proved in .one may then argue that this is not a genuine threshold scheme , since for downloading the state from the carrier , all the players should collaborate .+ the important point is that the cnot operation of all the players are needed only for cleaning of the carrier from remnants of messages , and in fact any players can retrieve the symbol from the state that they download from the carrier , but the running of the protocol for other rounds , needs collaboration of all the players . +this assertion can be proved as follows : let be any set of members who want to retrieve the message .denote by the joint cnot operations of the players belonging to the set , i.e. _ * k*:=_jkc_b_j , jdenote also by the joint cnot operations of the rest of players .also denote by , the local operation and classical communications that the players perform among themselves to recover the symbol from .we have shown that the sequence of operations when acting on the state in ( [ psi ] ) produces the symbol unambiguously and leaves the carrier clean of the remnants of the message , i.e. disentangle the state from the carrier .it is important to note that due to their local nature the two operations and commute , so that we have the identity _ * k**c*_*k**c*_*n - k*=*c*_*n - k*m_*k**c*_*k * = * c*_*n - k*(m_*k**c*_*k * ) .however if the operation on the left hand side of this relation , leaves the players in the set with an unambiguous symbol , we can conclude that the operations does the same thing , because the remaining operation , by its local nature , has no effect on the qudits retrieved by the set .the sole effect of is to disentangle completely the carrier from the message state and make it ready for the next round .such collaboration is of course necessary for the continuous running of the protocol like any other communication task .let us now study a simple example , in which we will also see in explicit terms the above argument .the simplest threshold scheme is the scheme for which and hence or more explicitly as note that for qudits , the operators and ( to be used later ) are defined as and , where .equation ( [ 35 ] ) shows what kind of encoding circuit alice has to use to encode a state to .the encoding circuit is shown in figure ( [ circuit ] ) . from ., width=302,height=113 ] moreover it is easily seen from ( [ 012 ] ) that the operator defined as acts as follows on the code states [ propertyz ] |=^-s| .these encoded states have the nice properties that which shows clearly that any two of the receivers can retrieve the classical secret by local measurements of the encoded state .the uploading and downloading operators for this scheme are shown in figure ( [ carrierfigure ] ) . + and ) for a simple threshold scheme ., width=377,height=151 ] let us now see explicitly in this example , an instance of the general discussion after equation ( [ sec ] ) . 
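as a concrete check of the operator algebra used in this example , the following minimal numerical sketch assumes the standard generalized - pauli convention for the shift and phase operators on a qudit ( x|j> = |j+1 mod d> and z|j> = w^j |j> , with w a primitive d - th root of unity ) and the discrete fourier transform as the qudit hadamard ; these conventions and the choice d = 3 are illustrative assumptions , and the sketch does not implement the uploading and downloading circuit itself .

```python
import numpy as np

d = 3                                    # qutrits, taken for concreteness
w = np.exp(2j * np.pi / d)               # primitive d-th root of unity

# assumed convention for the generalized Pauli operators:
#   X|j> = |j+1 mod d>,   Z|j> = w**j |j>
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(w ** np.arange(d))

# discrete Fourier transform, playing the role of the qudit "hadamard"
F = np.array([[w ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)

I = np.eye(d)
assert np.allclose(np.linalg.matrix_power(X, d), I)    # X**d = identity
assert np.allclose(np.linalg.matrix_power(Z, d), I)    # Z**d = identity
assert np.allclose(Z @ X, w * X @ Z)                   # Weyl commutation relation
assert np.allclose(F @ X @ F.conj().T, Z)              # Fourier gate maps X to Z
print("qudit operator-algebra checks passed for d =", d)
```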
in other words , we want to show that although the cnot action of all three receivers is necessary for disentangling the state from the carrier , it does not mean that full collaboration of the participants is necessary for recovering the message .that is let us show that even without the collaboration of , and can indeed disentangle and retrieve the message from the carrier .the collaboration of is only needed to clean the carrier from the message .+ assume that only two of the participants , say and enact their cnot s on the state .the resulting state will be thus measurements of qudits and by these two participants reveals the secret , without any need for collaboration of .+ on the other hand suppose that the player wants to retrieve the message symbols on his own . to this endhe adds to the quantum carrier an extra qudit , in the state and at the end of each round , when all the parties are supposed to act on the carrier by hadamard operators , acts by a suitable bi - local operator on his two qudits , so that in conjunction with the hadamard operators of , and , the quantum carrier transforms to latexmath:[\[\label{bob3a } this is the only operation he can do in order not to destroy the correlations between stray qudits which is checked randomly by alice and the participants .+ now when the other participants proceed as usual for entangling a code state to and from the quantum carrier , wants to proceed in a different way to reveal the symbol on his own .the state of the quantum carrier and the code state , which at the beginning of a round is , after alice cnot operations will develop as follows : + where we use the subscripts to denote the qudits which are respectively sent to and .+ it is now easily verified that the density matrix of the qudits and is given by which is independent of .therefore even when one of the participants entangles a qudit to the quantum carrier , refrains from cooperation with others in applying hadamard gates and/or inverse cnot operations , he can not obtain any information about the secret symbol .+ the collaboration of all the participants , is only necessary for disentangling completely the data from the carrier and making it ready for next use .this is certainly a feature that any communication protocol should have .we have developed the concept of quantum carrier to encompass more complex classical secret and quantum sharing schemes .we have described the procedure of uploading and downloading messages to and from the carrier in increasingly complex situations , i.e. for quantum key distributions , for , and threshold schemes . as described in the text , for each task a different quantum carrieris required , although it seems that they all have similar forms ( [ carrier2 ] ) .we have also shown that simple intercept - resend attacks can destroy the pattern of entanglement in the carrier which can be detected by legitimate parties . in the general secret sharing scheme , although collaboration of all parties is required for the continuous running of the protocol ( i.e. cleaning of the carrier from the remnants of the transmitted messages ) , any set of players can download and retrieve the message .+ we hope that together with the previous results , the concept of quantum carrier can attract the attention of other researchers who will develop it into more complex forms . an important question is whether there can be universal carriers between a set of players , which can be used for various cryptographic tasks on demand of the players , i.e. 
quantum key distribution between the sender and a particular receiver , or secret sharing between the sender and a particular set of players . ?another interesting general question is whether there exist general carriers which can be used to simultaneously send many messages to different receivers via a single quantum carrier , in the same way that frequency modulation is used for such a goal in classical communication .finally the question of general proof of security of these types of carrier - based protocols remain to be investigated .we would like to express our deep gratitude to the two referees of this paper , specially referee b , whose careful reading of the manuscript and many constructive suggestions were essential in improvement of the presentation of our results .we also thank r. annabestani , s. alipour , s. baghbanzadeh , k. gharavi , r. haghshenas , and a. mani for very valuable comments .m. m. is deeply indebted to m. r. koochakie for very stimulating discussions . finally v. k. would like to specially thank farid karimipour , for his kind hospitality in villa paradiso , north of iran , where the major parts of this manuscript was writtenin this appendix we want to prove that the vectors in ( [ ecomponents ] ) satisfy the properties ( [ codewords ] ) . that is if we define then consider the following identity : =p^m-1\equiv -1.\ ] ] expand the first term by using the binomial theorem to find =-1.\ ] ] interchange the order of the two summations and use the definition ( [ sk ] ) to obtain the recursion relation is valid for .the lower bound is obvious from the upper limit on the summation .the upper bound is due to the fact that for , the denominator itself vanishes modulo .equation ( [ recur ] ) leads for example to \nonumber \\s_3(p ) & = & -\frac{1}{4}\left[\left(\begin{array}{c } 4 \\ 1\end{array}\right)s_1(p)+ \left(\begin{array}{c } 4 \\ 2\end{array}\right)s_2(p)\right ] \\s_4(p ) & = & -\frac{1}{5}\left[\left(\begin{array}{c } 5 \\ 1\end{array}\right)s_1(p)+ \left(\begin{array}{c } 5 \\ 2\end{array}\right)s_2(p)+ \left(\begin{array}{c } 5 \\3\end{array}\right)s_3(p)\right]\nonumber \\ & \cdots & \nn\end{aligned}\ ] ] direct calculation gives which is zero mod , since is even .the recursion relations above then imply that for all .the case should be calculated directly , using the euler theorem which states that for every prime number , .the result is immediate , namely + now using the relation ( [ skresult ] ) it s easy to verify that the vectors in ( [ ecompacta ] ) satisfy the desired properties in ( [ eieja ] ) .m. hillery _et al_. , _ phys .a _ * 59 * ( 1999 ) 1829 .a. karlsson _et al_. , _ phys . rev . a _ * 59 * ( 1999 ) 162 .a. m. lance _et al_. , _ phys .. lett _ * 92 * ( 2004 ) 177903 .t. tyc and b. c. sanders , _ phys .a _ * 65 * ( 2002 ) 042310. a. m. lance _et al_. , _ phys .rev . a _ * 71 * ( 2005 ) 033814 ; a. m. lance _et al_. , _ new .journal of physics _ * 5 * ( 2003 ) .yu - ao chen _et al_. , _ phys .* 95 * ( 2005 ) 200502 .s. gaertner _et al_. , _ phys .* 98 * ( 2007 ) 020503 .s. bagherinezhad and v. karimipour , _ phys .a _ * 67 * ( 2003 ) 044302 ; v. karimipour , _ phys .a _ * 72 * ( 2005 ) 056301 ; v.karimipour , _ phys .a _ * 74 * ( 2006 ) 016302 .zhang and zx .man , _ phys .a _ * 72 * ( 2005 ) 022303 .y. li and zx .man , _ phys .a _ * 71 * ( 2005 ) 044301 . h. briegel _et al_. , _ phys .* 81 * ( 1991 ) 5932 - 5935 . z. s. yuan _et al_. , _ nature _ * 454 * ( 2008 ) 1098 - 1101 . k. f. reim _et al_. , _ phys .lett . 
_ * 107 * ( 2011 ) 053603 .y. s. zhang_et al_. , _ phys .a _ * 64 * ( 2001 ) 024302 .et al_. , _ phys .* 83 * ( 1999 ) 648 .p. k. sarvepalli and a. klappenecker , _ phys .a _ * 80 * ( 2009 ) 022321 .d. markham and b. c. sanders , _ phys .a _ * 78 * ( 2008 ) 042309 .et al_. , _ phys .a _ * 82 * ( 2010 ) 062315 .et al_. , _ proc . of the international school of physics `` enrico fermi '' on `` quantum computers , algorithms and chaos '' _ varenna , italy , july , 2005 .g. blakely , _afips 48_,1979 , pp .313 - 317 .d. aharonov and m. ben - or , _ proc .acm symp . on theory of computing_,1998 , pp .176 - 188 .a. r. calderbank and p.w .shor , _ phys .a _ * 54 * ( 1996 ) 1098 .a. steane , _ proc .lond . a 452 _ , 1996 , pp .2551 - 2577 .m. grassl , t. beth , _ proceedings x. international symposium on theoretical electrical engineering _ , magdeburg , 1999 ; m. grassl _et al_. , _ international journal of foundations of computer science ( ijfcs ) _ , 2003 ,757 - 775 .
we develop the concept of a quantum carrier and show that messages can be uploaded to and downloaded from this carrier and that , while in transit , these messages are hidden from external agents . we explain in detail the working of the quantum carrier for different communication tasks , including quantum key distribution , classical secret sharing and quantum state sharing among a set of players according to general threshold schemes . the security of the protocol is discussed and it is shown that only the legitimate subsets can retrieve the secret messages , while the collaboration of all the parties is needed for the continuous running of the protocol and for maintaining the carrier . secure quantum carriers for quantum state sharing + vahid karimipour + department of physics , sharif university of technology , + p.o. box 11155 - 9161 , + tehran , iran milad marvian + department of electrical engineering , sharif university of technology , + tehran , iran . keywords : quantum carrier ; entanglement ; secret sharing ; threshold schemes . +
interest in small robotic devices has soared in the last two decades , leading engineers to turn towards nature for inspiration and spurring the fields of biomimetics and bioinspired design .nature has evolved diverse solutions to animal locomotion in the forms of flapping flight , swimming , walking , slithering , jumping , and gliding .at least thirty independent animal lineages have evolved gliding flight, but only one animal glides without any appendages : the flying snake .three species of snakes in the genus _chrysopelea _ are known to glide. they inhabit lowland tropical forests in southeast and south asia and have a peculiar behavior : they jump from tree branches to start a glide to the ground or other vegetation , possibly as a way to escape a threat or to travel more efficiently .their gliding ability is surprisingly good , and one species of flying snake , _ chrysopelea paradisi _ ( the paradise tree snake ) , has also been observed to turn in mid - air. like all snakes , _chrysopelea paradisi _ has a cylindrical body with roughly circular cross - section .but when it glides , this snake reconfigures its body to assume a flatter profile . during the glide ,the snake undulates laterally and the parts of the body that are perpendicular to the direction of motion act as lift - generating ` wings'. as in conventional wings , the cross - sectional shape of the snake s body must play an important role .early studies indicated that it may even outperform other airfoil shapes in the reynolds number regime at which the snakes glide ( ,000). the first investigations into the aerodynamic characteristics of the flying snake s body profile were made by miklasz et al. they tested sections of tubing that approximated the snake s cross - sectional geometry as filled circular arcs , and encountered some unexpected aerodynamics .the lift measured on the sections increased with angle of attack ( aoa ) up to , then decreased gently without experiencing catastrophic stall , and the maximum lift coefficient ( ) appeared as a noticeable spike at the stall angle ( defined as the angle of attack at which lift is maximum ) .holden et al., followed with the first study of a model using an anatomically accurate cross - section .they observed a spike in the lift curve at aoa for flows with reynolds numbers 9000 and higher .the maximum lift coefficient was 1.9 , which is unexpectedly high , given the unconventional shape and reynolds number range. the counter - intuitively high lift observed in both studies suggests a lift - enhancement mechanism .holden et al .inferred that the extra lift on the body was due to suction by vortices on the dorsal side of the airfoil .however , the flow mechanism responsible for the enhanced lift by the flying snake s profile has yet to be identified . in the present study, we aim to answer the question : what is the mechanism of lift enhancement created by the flying snake s cross - sectional shape ? 
to address this , we ran two - dimensional simulations of flow over the anatomical cross - section of the snake .we computed the flow at various angles of attack at low reynolds numbers starting from and increasing .we found that a marked peak in the lift coefficient does appear at aoa for and above .hence , our simulations were able to capture some of the unique lift characteristics observed in the physical experiments of holden et al .aiming to explain the lift mechanism , we analyzed vorticity visualizations of the wake , time - averaged pressure fields , surface pressure distributions , swirling strength , and wake - vortex trajectories .although we performed two - dimensional simulations of flow over the cross - section of the snake , we recognize that three - dimensional effects are present at this reynolds - number regime .further 3-d and unsteady mechanisms due to the motion of the snake likely play a role in real gliding ( we discuss these effects in section [ discussion ] ) . despite the simplification of the gliding system , the two - dimensional simulations in this studyprovide insight into flows at reynolds numbers ( sometimes called ` ultra - low ' reynolds numbers in the aeronautics literature ) where studies are scarce compared to traditional applications in aeronautics ._ c. paradisi _ and other gliding snakes make use of unique kinematics , illustrated in figure [ fig : snakedive ] .the glide begins with a ballistic dive from a tree branch : the snake launches itself with a horizontal velocity and falls with a relatively straight posture . immediately after the jump ,it spreads its ribs apart and nearly doubles the width of its body , changing its cross - section from the cylindrical shape to a flattened shape ( see sketch in figure [ fig : snakeribs ] ) .the profile can be described as a rounded triangle with fore - aft symmetry and overhanging lips at the leading and trailing edges. figure [ fig : snakecs ] shows the anatomically accurate cross - section geometry of the paradise tree snake in the airborne configuration. the glide angle during the ballistic dive can reach a maximum of about degrees, while the snake gains speed , changes its posture to an -shape and begins undulating laterally .it thus uses its body as a continuously reconfiguring wing that generates lift .the speed of descent ( i.e. , the rate at which the snake is sinking vertically ) peaks and then decreases , while the trajectory transitions to the shallowing glide phase of the flight .equilibrium glides have rarely been observed under experimental conditions, and the glide angle usually keeps decreasing without attaining a constant value before the end of the descent . in field observations ,the snakes cover a horizontal distance of about 10 m , on average , when jumping from a height of about 9 m. glide angles as low as have been observed towards the end of the descent. during a glide , the snake moves its head side - to - side , sending traveling waves down the body , as illustrated in figure [ fig : snakemotion ] ( not to scale ) . with the body forming a wide -shape at an angle with respect to the glide trajectory , long sections of the flattened body that are perpendicular to the direction of motion generate lift .compared to terrestrial motion, the undulation frequency in air ( 12 hz ) is lower and the amplitude ( 1017% snout - vent length ) is higher. 
as the body moves forward along the glide path , the fore and aft edges of the body repeatedly swap , which is not a problem aerodynamically thanks to the fore - aft symmetry .all portions of the body move in the vertical axis as well , with the most prominent motion occurring in the posterior end .the kinematics of gliding in the paradise tree snake are complex and various factors may contribute to the generation of lift .it is difficult to study the unsteady aerodynamics of the snake by incorporating simultaneously all of the elements of its natural glide .but as with any airfoil , we expect the cross - sectional shape to play an important role in the aerodynamics .this cross - section is thick compared to conventional airfoils , and is better described as a lifting bluff body. miklasz et al. were the first to investigate the role of the profile , using wind - tunnel testing of stationary models resembling the snake s cross - section .the models consisted of segments perpendicular to the flow , with simple geometrical shapes meant to approximate a snake s straight portions of the body ( circular pipes cut to make a semi - circular , filled , half - filled , or empty cylindrical section ) .their experiments provided measurements of lift and drag at various angles of attack ( aoa ) in a flow with reynolds number 15,000. the maximum lift ( ) occurred at an angle of attack of and the drag remained approximately the same between and , thus causing a spike in the polar plot ( where the lift coefficient is plotted against the drag coefficient ) .near - maximal lift was also observed in a wide range of angles of attack ( ) . beyond aoa , in the post - stall region , the lift drops gradually while the drag increases rapidly .this region is characterized by flow separating at the leading edge , resulting in some of the flow being deflected upwards and consequently generating a wide wake .miklasz et al .also tested tandem models finding that the downstream section experienced higher lift when placed below the wake of the upstream model , at horizontal separations up to five chord lengths .a subsequent study by holden et al. was the first to use models with an anatomically accurate cross - section based on _c. paradisi_. they tested the models in a water tunnel at reynolds numbers in the range 300015,000 and used time - resolved digital particle image velocimetry ( trdpiv). plotting the experimental lift curves , they show a peak in lift coefficient at aoa , with a maximum between 1.1 and 1.9 , increasing with reynolds number . for reynolds numbers 9000 and above, the maximum also appeared as a noticeable spike about 30% higher than at any other angle of attack . in the case of drag , at reynolds 7000 and below grows gradually until aoa and then increases steeply in the post - stall region .but for reynolds 9000 and higher , the drag coefficients at and aoa are the same , similar to the result by miklasz et al .high values of lift coefficient were maintained in the range .the peak in lift at is an unexpected feature of the snake s cross - section and the value is considered high for an airfoil at this reynolds number. but gliding and flying animals have surprised researchers before with their natural abilities and performance. solved the navier - stokes equations using an immersed boundary method ( ibm ) .our implementation uses a finite - difference discretization of the system of equations in a projection - based formulation proposed by taira & colonius. 
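the time integration described next combines an explicit two - step adams - bashforth treatment of convection with a crank - nicolson treatment of diffusion . as an orientation , the sketch below applies one such semi - implicit step to a one - dimensional advection - diffusion surrogate ; the grid size , time step and the use of a plain central difference for the convective term are illustrative choices , and this is not the cuibm implementation .

```python
import numpy as np

# one AB2/CN step for u_t + c u_x = nu u_xx on a periodic 1-D grid
# (a surrogate for the convection/diffusion splitting described in the text)
N, L, c, nu, dt = 128, 1.0, 1.0, 1e-3, 1e-3
dx = L / N
x = np.arange(N) * dx
u_prev = np.sin(2 * np.pi * x)            # u at step n-1
u = np.sin(2 * np.pi * (x - c * dt))      # u at step n

def convection(u):
    # central difference for the convective term -c u_x
    return -c * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

# periodic second-difference (Laplacian) matrix
D2 = (np.roll(np.eye(N), 1, 0) - 2 * np.eye(N) + np.roll(np.eye(N), -1, 0)) / dx**2

# AB2 for convection, Crank-Nicolson for diffusion:
# (I - dt*nu/2 D2) u^{n+1} = u^n + dt*(3/2 N^n - 1/2 N^{n-1}) + dt*nu/2 D2 u^n
A = np.eye(N) - 0.5 * dt * nu * D2
rhs = u + dt * (1.5 * convection(u) - 0.5 * convection(u_prev)) + 0.5 * dt * nu * (D2 @ u)
u_next = np.linalg.solve(A, rhs)
print("max |u| after one step:", np.abs(u_next).max())
```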
for the convection terms , we used an explicit adams - bashforth time - stepping scheme , with a crank - nicolson scheme for the diffusion terms . the spatial derivatives for the convection terms were calculated using a second - order symmetric conservative scheme and the diffusion terms were calculated using central differences . the formal spatial order of accuracy of the method is second order everywhere except near the solid boundaries , where it is first order . the temporal order of accuracy is first order , by virtue of the projection method used to step forward in time . throughout this study , we performed direct numerical simulations of two - dimensional , unsteady , incompressible flow over the cross - section of the snake . we used an accurate geometry for the cross - section of the snake , as determined by socha and used in a previous study with physical models . we discretized the flow domain using a stretched cartesian grid , as shown in figure [ fig : grid ] ( only a subset of the grid points is drawn , for clarity ) . the cross - section of the snake was scaled such that the chord length ( defined , as in previous studies , as the maximum width of the profile ) was equal to one . the body was placed at the center of a domain that spanned the region $[-15,15]\times[-15,15]$ ; the grid is uniform in a region surrounding the body , and exponentially stretched in the remaining region with a stretching ratio of 1.01 . the total number of cells in the entire domain is nearly 3 million , with the majority located in the area near the body . the reynolds number in the simulations varied in the range 500 - 3000 , in increments of 500 , and the angles of attack were varied over a fixed range in equal steps . flows were impulsively started , and run for a sufficiently long period of time until periodic vortex shedding was obtained . we simulated 80 time units of the unsteady flow in each run . with a time step of $4\times10^{-4}$ , this required 200,000 time steps for each run . we computed the flow over an impulsively started circular cylinder with these same simulation parameters and grid , and the results matched well against past simulations . we also conducted a grid - independence study by computing the flow over the cross - section at one reynolds number and two angles of attack , using two different grids and calculating the average lift and drag coefficients in each case . the primary measure of grid refinement used is the width of the square cells near the body , which are the smallest cells in the grid . the grids were generated such that the aspect ratios of the cells near the domain boundary are nearly the same in both cases . as shown in table [ table : gridindependence ] , when the near - body cell width is reduced from 0.006 to 0.004 , we obtain a change of 0.07% in the lift coefficient at the first angle of attack , and a change of 2.3% in the lift coefficient at the second . this confirms that the grid used in our simulations produces sufficiently accurate solutions .

table [ table : gridindependence ] : grid - independence study .

first angle of attack :
  near - body cell width | avg . drag coefficient | change | avg . lift coefficient | change
  0.006                  | 0.964                  |        | 1.533                  |
  0.004                  | 0.967                  | 0.3%   | 1.532                  | 0.07%

second angle of attack :
  near - body cell width | avg . drag coefficient | change | avg . lift coefficient | change
  0.006                  | 1.280                  |        | 2.098                  |
  0.004                  | 1.316                  | 2.7%   | 2.147                  | 2.3%

the numerical solutions for the flow field were post - processed to obtain various physical quantities and time - dependent flow visualizations , which we used to analyze the aerodynamic characteristics of the snake s cross - section . here , we briefly describe the post - processing procedures :
lift and drag coefficients . the lift and drag forces per unit length were obtained by integrating the immersed - boundary - method force distribution along the body . forces were normalized using the fluid density , the freestream velocity and the chord length , which have a numerical value of 1 in the current study , to obtain the lift and drag coefficients . the lift and drag coefficients oscillate due to vortex shedding in the wake ( see figure [ fig : unsteadylift ] ) . to analyze the aerodynamic performance of the snake s cross - section , we calculated the mean lift and drag coefficients by taking the time - average of the unsteady force coefficients over a window of non - dimensional time that excludes the influence of the initial transients ( time is normalized using the freestream velocity and the chord length ) .

vorticity . in two - dimensional flows , only one component of vorticity exists : the component perpendicular to the plane of the flow . on our staggered cartesian grid , the velocity components are stored at the centers of the cell faces ; we calculated vorticity at the node points using a central - difference approximation of the velocity derivatives ( see figure [ fig : vortcalc ] ) .

pressure field . most of the force on a bluff body moving through a fluid arises from differences in surface pressure , with frictional forces playing a minor role . therefore , we analyzed the pressure field to supplement our observations of the lift and drag coefficients . the time - averaged pressure fields ( and surface pressure distributions ) were calculated by taking the mean of 125 equally spaced sampling frames for pressure over the averaging period . this sampling rate corresponds to about 12 frames per period of vortex shedding , a choice made on the basis of being sufficient to capture the features of the flow . the pressure contribution to the lift force is equal to the line integral of the pressure along the surface of the body . this surface distribution is obtained via bilinear interpolation of the pressure from grid points to locations on the body surface . however , the immersed boundary method can produce oscillations in the flow quantities at the surface itself . the width of the mesh cells near the body is 0.4% of the chord length , and our discrete delta function for interpolating the velocity field extends a finite distance from the surface in each cartesian direction . the boundary definition is therefore not sharp , and we avoided spurious artifacts by measuring the surface pressure in the fluid at a distance of 1% of the chord length normal to the body .

swirling strength in the vortical wake . the vortex dynamics in the wake can provide insight into the mechanism of lift generation . to this end , we attempted to identify the vortices in the flow and plot their evolution . several methods have been developed to objectively define and identify vortices in fluid flows . in the present work , we make use of the swirling strength , defined as the imaginary part of the complex eigenvalue of the velocity - gradient tensor . in regions of flow where no complex eigenvalues exist , there is no rotating flow and the swirling strength is zero .
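the two post - processing quantities just described , the out - of - plane vorticity and the swirling strength ( commonly denoted lambda_ci in the literature ) , can be computed from a velocity field sampled on a grid as in the sketch below ; the collocated layout , array names and the toy vortex used for the check are illustrative assumptions and do not reflect the staggered arrangement of the actual solver output .

```python
import numpy as np

def vorticity_and_swirl(u, v, dx, dy):
    """Out-of-plane vorticity and swirling strength on a uniform collocated
    grid; u and v are 2-D arrays indexed [j, i] ~ [y, x]."""
    dudx = np.gradient(u, dx, axis=1)
    dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1)
    dvdy = np.gradient(v, dy, axis=0)

    omega_z = dvdx - dudy                      # vorticity

    # the 2x2 velocity-gradient tensor has complex eigenvalues iff its
    # discriminant is negative; the swirling strength is their imaginary part
    trace = dudx + dvdy
    det = dudx * dvdy - dudy * dvdx
    disc = trace**2 - 4.0 * det
    swirl = np.where(disc < 0.0, 0.5 * np.sqrt(-np.minimum(disc, 0.0)), 0.0)
    return omega_z, swirl

# toy check on a Gaussian vortex
y, x = np.mgrid[-1:1:201j, -1:1:201j]
r2 = x**2 + y**2
u, v = -y * np.exp(-r2 / 0.1), x * np.exp(-r2 / 0.1)
w, s = vorticity_and_swirl(u, v, dx=0.01, dy=0.01)
print("peak vorticity %.2f, peak swirling strength %.2f" % (w.max(), s.max()))
```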
vortex trajectories . vortex trajectories were plotted by marking the locations of the centers of vortices at successive equally spaced instants of time . the centers of vortices are assumed to coincide with the minima of the instantaneous pressure field , and the markers were placed at these locations . the set of points obtained for each vortex represents its pathline . vortex centers can also be identified as maxima in the swirling strength field . in the flows we studied , both methods resulted in the same vortex locations . we previously validated and verified our immersed - boundary - method code ( called ` cuibm ` ) using analytical solutions , comparisons with published experimental results , and comparisons with other published numerical results. the verification tests in two dimensions with an analytical solution used couette flow between two cylinders , while the other benchmarks consist of lid - driven cavity flow and an impulsively started circular cylinder at different values of the reynolds number . comparisons with published numerical results also include the vortex shedding over a circular cylinder and flow over heaving and flapping airfoils . we also computed the temporal and spatial orders of convergence at several sampling times , using the couette - flow test . the barba research group has a consistent policy of making science codes freely available , in the interest of reproducibility . in line with this , we release the entire code that was used to produce the results of this study . the ` cuibm ` code is made available under the mit open - source license and we maintain a version - control repository. the code repository includes all the files needed for running the numerical experiments reported in this paper : input , configuration , and post - processing . to support our open - science and reproducibility goals , in addition to open - source sharing of the code and the running scripts , several of the plots themselves are also available and usable under a cc - by license , as indicated in the respective captions . the first result in this work is the characterization of the lift and drag of the snake profile . figure [ fig : clcd ] shows curves of lift and drag coefficient versus aoa for reynolds numbers between 500 and 3000 . the lift coefficient increases rapidly with aoa at all the reynolds numbers tested , starting from a negative value at the lowest angle of attack considered . beyond a moderate angle of attack , the lift coefficient increases more slowly , peaks , and then starts falling ( stall ) . for reynolds numbers of 2000 and above , the lift coefficient jumps to a markedly higher value at one particular aoa , the stall angle . the drag coefficient increases gradually and almost linearly over the lower range of aoa . except for the lowest value of the reynolds number , the slopes of the drag - versus - aoa curves increase as the snake profile approaches stall . in summary , the 2d model of the snake s cross - section exhibits enhanced lift at a specific aoa , just before stall . this aoa coincides with the observations in water - tunnel experiments by holden and colleagues.
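returning to the vortex - trajectory post - processing described above , vortex centers were identified as local minima of the instantaneous pressure field ; the sketch below shows one way to extract such minima from a single snapshot . the neighbourhood size and depth threshold are arbitrary illustrative choices , not the values used to produce the trajectory figures .

```python
import numpy as np
from scipy.ndimage import minimum_filter

def vortex_centers(p, x, y, size=9, depth=0.05):
    """Return (x, y) locations of local pressure minima deeper than `depth`
    below the field mean -- a simple proxy for vortex centers."""
    local_min = (p == minimum_filter(p, size=size))
    deep = p < p.mean() - depth
    j, i = np.nonzero(local_min & deep)
    return x[j, i], y[j, i]

# toy check: two pressure dips on a uniform grid
y, x = np.mgrid[0:2:101j, 0:4:201j]
p = -np.exp(-((x - 1)**2 + (y - 1)**2) / 0.02) - np.exp(-((x - 3)**2 + (y - 1)**2) / 0.02)
xs, ys = vortex_centers(p, x, y)
print(list(zip(np.round(xs, 2), np.round(ys, 2))))   # ~[(1.0, 1.0), (3.0, 1.0)]

# repeating the detection for equally spaced snapshots and joining the points
# traces the vortex pathlines shown in the trajectory figures
```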
the actual values of the lift coefficient , however , are larger in these simulations than in the experiments .+ + given that the lift coefficient spikes at aoa for the simulations with and higher , we generated visualizations of the vortical wakes at both and and searched for any differences .the results are presented in figure [ fig : vorticity ] , where the frames in the left column correspond to , and the frames in the right column are for , in both cases for aoa .( animations of these visualizations are available online as supplementary materials. ) the wake at is a classical von krmn vortex street at the lower values of aoa : alternate clockwise ( blue ) and counter - clockwise ( red ) vortices are shed to form a street , which is slightly deflected downwards . at aoa , the wake is different .the separation point of the vortices on the dorsal surface has moved towards the leading edge and larger vortices are formed in the region behind the body , with longer formation times .some vortices form dipole pairs that are deflected upwards , giving rise to a much wider wake ( figure [ subfig : vort1k40 ] ) .associated with this wake behavior , the drag curve shows an increase in slope and the lift coefficient drops ( figure [ fig : clcd ] ) .the vortices produced in the wake at are stronger and more compact , as expected .in addition , three different wake patterns appear across angles of attack : for aoa and below , for , and for aoa and above . at aoa , the dorsal vortex ( blue ) interacts with the trailing = edge vortex and is strained to the point that it is split in two ( figure [ subfig : vort2k30 ] ) .this gives rise to a wake pattern known as , for ` single ' and ` pair ' : a street of single vortices at the top and dipoles at the bottom ( or _ vice versa _ ) . 
at aoa , the dorsal vortex is stronger and it separates closer to the leading edge .the strain field of the trailing - edge vortex in this case is not strong enough to split the dorsal vortex , and the resulting wake is a classical von krmn vortex street .it differs from that of the lower reynolds number , , by showing almost no deflection and consisting of stronger vortices that are more tightly close together ( figure [ subfig : vort2k35 ] ) .finally , at angles of attack and higher , the wake is similar to the case , with large vortex pairs deflected upwards and smaller vortices deflected downwards .there are more evident shear - layer instabilities with higher - frequency vortices appearing in the unstable shear layers which were not seen at .+ + the visualization of time - averaged pressure fields is another tool for characterizing the vortex wakes .regions of intense vorticity are seen as areas of lower pressure , and thus the main path of the von krmn vortices becomes visible .figure [ fig : averagepressure ] shows the average pressure field for the flows at and 2000 .the slight downwards deflection of the flow at the cases with lower reynolds number and lower aoa is evident , and the wake at and aoa tracks a tight and straight path downstream .the wakes at aoa show two low - pressure tracks in the average pressure field : one corresponding to the large dipoles that are deflected upwards and the other corresponding to the smaller vortices deflected downwards .the paths of vortices in the flow can provide a picture of how the vortices interact with the body and each other to produce lift .figure [ fig : trajectories ] shows the trajectories of the centers of the vortices in the flow for angles of attack and at reynolds numbers 1000 and 2000 . at , the paths traced by the vortices at the two angles of attack are similar to each other . but at , there is a marked difference . at , the newly forming trailing - edge vortex stretches andsplits the vortex formed on the dorsal side .the trajectories of the split vortices are seen by the two tracks with blue circles in figure [ subfig : trajectories2k30 ] . 
at , we can see that the dorsal vortex is formed nearer to the fore of the cross - section and follows a trajectory that stays closer to the body compared to the case at .the dorsal vortex is also stronger and does not split when it interacts with the trailing - edge vortex .+ the majority of the force experienced by a bluff body in a fluid flow is due to differences in the pressure along the surface of the body .the time - averaged surface pressure distribution is therefore directly related to the average lift and drag experienced by the body .figure [ fig : surfpres ] shows the surface distribution of the average pressure acting on the body at and and at different angles of attacks before the stall regime .these have been plotted against the -coordinate of the corresponding surface points so that the area under the graph gives the pressure contribution to the lift on the body .as expected , the leading - edge suction increases with an increase in angle of attack , associated with greater lift production .but in addition , there is an extended region of low pressure over the rear part of the dorsal surface at aoa ( figure [ subfig : surfp2k ] ) .this is observed only at , and the sudden pressure drop in this region is associated with the spike in the lift coefficient at this angle of attack .+ + the swirling strength of the flow field identifies the regions of the flow that contain coherent vortical structures , as opposed to regions of vorticity that are dominated by shear. figure [ fig : qvalues ] shows the contours of swirling strength at , at instants when the unsteady lift coefficient is maximum ( first four frames ) and minimum ( last two frames ) .the corresponding surface pressure distribution at these instants is also shown in each frame .note that only the leading - edge suction changes up to aoa .but at , there is a decrease in the pressure along the whole dorsal surface , which is associated with a marked increase in the lift coefficient . at the instants of both maximum and minimum unsteady lift, strong vortices decrease pressure on a large portion of the dorsal surface .these features contribute to the observed enhanced time - averaged lift at .the swirling strength helps to identify regions of vorticity that correspond to rotating flow , rather than strain - dominated regions .but we can not identify the sign of vorticity , so we do not recognize primary vs. secondary vortices in these plots . to examine near - body features , we plotted the contours of vorticity in the region close to the body .figures [ fig : vconts1k30 ] and [ fig : vconts2k30 ] consist of three frames showing the vorticity plots of the flow at different points within one cycle of vortex shedding for flows at and , respectively , when the angle of attack is . because the case with and is of interest to us and we would like to examine it more carefully , figure [ fig : vconts2k35 ] shows six frames of vorticity contours of this flow from one cycle of shedding . 
these were plotted during the times when periodic vortex shedding had been established .a detailed description of the flow features and their effect on the force coefficients is saved for the following section .+ + + + + +the first question that we aimed to answer with this study was whether an enhanced lift at a specific angle of attack would also appear in two - dimensional simulations of the flow around the cross - section of the snake , as it does in the experiments .our results show a pronounced peak in lift at angle of attack for reynolds numbers 2000 and beyond .this reynolds number at which the observed switch occurs is lower than in the two previous experimental studies , but the angle of attack at which the peak appears is the same in the simulations and the experiments . the snake s cross - section acts like a lifting bluff body , hence the pressure component accounts for most of the force acting on the body , with viscosity contributions being small .figure [ fig : surfpres ] shows the time - averaged pressure distribution along the surface of the body for flows with reynolds numbers 1000 and 2000 .the plots show an increased suction all along the dorsal surface of the snake for the case when and , which accounts for the enhanced lift .to more fully explain the mechanism of lift enhancement , it is necessary to consider the unsteady flow field .we compared the development of the vorticity in the wake of the cross - section of the snake with that of a two - dimensional circular cylinder .koumoutsakos and leonard studied the incompressible flow field of an impulsively started two - dimensional circular cylinder .they analyzed how the evolution of vorticity affected the drag coefficient at various reynolds numbers between and , arriving at a description of the role of secondary vorticity and its interaction with the separating shear layer .here , we present a similar analysis for the flow over the snake s cross - section at reynolds numbers 1000 and 2000the lift curve of the latter exhibits the spike at and the former does not .figure [ fig : sketch ] shows a sketch of the near - wake region , indicating the terms that we will use to describe the flow .figure [ fig : vconts1k30 ] shows that at and aoa , the primary vortex generated on the dorsal side of the body induces a region of secondary vorticity .the boundary layer feeding the primary vortex at this reynolds number is thick and the secondary vorticity is relatively weak .the primary vortex generated at the trailing edge interacts with the dorsal primary vortex , straining it and weakening it .a similar flow pattern is observed at for this reynolds number ( not shown ) . at the higher reynolds number of 2000 ,the vortices are stronger and more compact , as expected .but the flow fields at angles of attack and are qualitatively different from each other . at ,the trailing - edge vortex is strong enough to weaken , stretch , and split the dorsal primary vortex into two ( see figure [ subfig : vort2k30 ] ) , resulting in a wake that consists of single vortices on top and pairs of vortices at the bottom of the vortex street .this wake pattern appears at this reynolds number for angles of attack and lower .but at , the stronger vortex on the dorsal side induces a stronger secondary vorticity ( figure [ subfig : vc2k35a ] ) .the vortices along the dorsal surface are associated with an enhanced suction all along the upper side of the body , as seen in figure [ subfig : surfp2k ] . 
as the secondary vorticity infiltrates the primary vortex , a new second vortex of negative sign is formed by the shear layer on the dorsal surface ( figures [ subfig : vc2k35b ] and [ subfig : vc2k35c ] ) .the shear layer on the dorsal side separates near the leading edge , as evidenced by the positive vorticity near the surface in figures [ subfig : vc2k35c ] and [ subfig : vc2k35d ] .the separated shear layer does not stall , but instead rolls up into the new vortex of negative sign due to the influence of the secondary vortex .this new second vortex forms a dipole with the secondary vorticity ( figure [ subfig : vc2k35c ] ) and the vortices are pushed towards the profile s surface .as the primary vortex convects away from the body , the secondary vorticity weakens ( figure [ subfig : vc2k35d ] ) and the second vortex formed due to the separated shear layer initiates a new primary vortex ( figure [ subfig : vc2k35e ] ) that remains close to the body and contributes to the enhanced suction on the dorsal surface .the trajectory plots in figure [ fig : trajectories ] show that the dorsal primary vortex for the case when and indeed forms and remains closer to the body surface as compared to the other cases .the swirling strength plots at ( figure [ fig : qvalues ] ) are also consistent with these interpretations . at the instant when lift is minimum, the flow remains attached at the leading edge for the case when , but is separated when .regions of high swirling strength correspond to regions of low pressure in the flow , as on the front half of the dorsal surface . at the instant of maximum instantaneous lift ,when swirling strength is higher at the locations of the secondary vorticity and of the new second vortex , associated with a lowering of pressure across the dorsal surface and not just at the leading edge .both the minima and maxima of the instantaneous lift increase sharply at ( see figure [ fig : unsteadylift ] ) .one phenomenon that occurs at this range of reynolds number for flows over bluff bodies is the instability of the shear layer . for circular cylinders , kelvin - helmholtz instability of the shear layersis observed for flows with reynolds numbers approximately 1200 and higher. this is a two - dimensional phenomenon , and causes an increase in the 2-d reynolds stresses , and subsequently increases base suction. for the snake s cross - section at and , the separated shear layer on the dorsal side is also subject to this instability and can form small - scale vortices that eventually merge into the primary vortices .this could explain why the vortices are stronger and remain closer to the body . at lower angles of attack, the boundary layer is attached to the body and the instability is not manifested , which implies a reduced value of base suction , and subsequently lift and drag . as mentioned at the beginning of this section, the enhanced lift occurs at the same angle of attack in previous experiments with the cross - section of the snake , but in a different range of reynolds number .this difference reflects the limitations of our study , which only computes two - dimensional flow .experiments with bluff bodies show that beyond a certain reynolds number ( for circular cylinders ) , three - dimensional instabilities produce steam - wise vortices in the wake. 
the formation and effect of these vortices are more pronounced for bluff bodies they increase the reynolds stresses in the wake of the cylinder , and wake vortices are formed further from the body surface , causing a decrease in the base suction when compared to two - dimensional simulations of the same flow . hence , 2-d simulations overestimate the unsteady lift and drag forces .this is also observed in our computed values of lift and drag for the snake model when compared to the experiments of holden et al. taking these facts into account , the simulations we report here can be considered a reduced model , one whose utility we could not assert _ a - priori_. it is perhaps a surprising result that a peak in lift is in fact obtained in the 2d simulations , as observed in the experiments .we surmise that the discrepancy in reynolds number range where the enhanced lift appears can be ascribed to 3d disturbances , which inhibit the mechanism responsible for the peak in lift , so that it can only manifest itself at higher reynolds number when the wake vortices are stronger and more compact .a flying snake in the field exhibits complex three - dimensional motions and a number of other factors could affect its aerodynamics , some of which we briefly list here .the finite size of the sections generating lift means that wingtip vortices will be generated , causing a decrease in lift and an increase in drag .the -shape of the snake in the air suggests that multiple segments of the snake body can generate lift simultaneously , and the wake of the section furthest upstream could have an effect on the downstream sections . a preliminary result by miklasz et al. with two tandem models whose cross - sections approximated the snake geometry found that the value of the lift - to - drag ratio of the downstream section could increase by as much as 54% , depending on its position in the wake .the undulatory motion of the snake in the air means that the body also moves laterally in addition to moving forward along the glide path .the sideways motion may generate spanwise flow , which has been known to stabilize leading - edge vortices and increase the lift on the body .another mechanism that we do not know if the snake makes use of is thrust generation due to heaving or pitching motions of its body . - structure interactions could also be present . without discounting the limitations of two - dimensional models , we have sought to characterize the wake mechanism that could explain the lift enhancement on the snake s cross - section in this work .the results give insight into the vortex structures that are involved in this mechanism and suggest new directions to interrogate the flow in three - dimensional studies .these constitute future work that we aim to carry out once the appropriate extensions to the code have been completed .we studied computationally the aerodynamics of a species of flying snake , _ chrysopelea paradisi _ , by simulating two - dimensional incompressible flow over the cross - sectional shape of the snake during gliding flight .we obtained lift and drag characteristics of this shape in the reynolds number range 5003000 . 
in flows with reynolds numbers 2000 and beyond , the lift curve showed a sharp increase at an angle of attack of , followed by a gradual stall .this behavior is similar to that observed previously in experimental studies of cylindrical sections with the same cross section , for reynolds numbers 9000 and above .our unsteady simulations reveal that in flows with reynolds number 2000 and above and at angle of attack , the 2-d flow separates at the leading edge but does not produce a stall .the free shear layer thus generated over the dorsal surface of the body can roll up and interact with secondary vorticity , resulting in vortices remaining closer to the surface and an associated increase in lift .differences between the experiments and simulations in the magnitude of the force coefficients and the value of the threshold reynolds number beyond which the phenomenon is observed may be attributed to the three - dimensional effects .the code , running scripts and data necessary to run the computations that produced the results of this paper can all be found online , and are shared openly .the ` cuibm ` code is open source under the mit license and can be obtained from its version - controlled repository at https://bitbucket.org / anushk / cuibm/. data sets , figure - plotting scripts , figures and supplementary animations have been deposited on http://figshare.com/. lab acknowledges partial support from nsf career award aci-1149784 .she thanks the support from nvidia via an academic partnership award ( 2012 ) and the cuda fellows program .this research was also partially supported by nsf grant 1152304 to ppv and jjs .10 robert dudley , greg byrnes , stephen p yanoviak , brendan borrell , rafe m brown , and jimmy a mcguire . gliding and the functional origins of flight : biomechanical novelty or necessity ?, 38:179201 , 2007 .holden , d. , j. j. socha , n. d. cardwell , and p. p. vlachos .aerodynamics of the flying snake _ chrysopelea paradisi _ : how a bluff body cross - sectional shape contributes to gliding performance . , 217(3 ) : 38294 , 2014 .p kunz and i kroo . analysis and design of airfoils for use at ultra - low reynolds numbers . in thomasj. mueller , editor , _ fixed and flapping wing aerodynamics for micro air vehicle applications _ , volume 195 of _ progr ._ , s. sunada , k. yasuda , and k. kawachi . comparison of wing characteristics at an ultralow reynolds number ., 39(2):331338 , 2002 .anush krishnan , john j. socha , pavlos p. vlachos , and l. a. barba .body cross - section of the flying snake chrysopelea paradisi .data set and figure on * figshare * under cc - by license , http://dx.doi.org/10.6084/m9.figshare.705877[10.6084/m9.figshare.705877 ] , may 2013 .joseph w. bahlman , sharon m. swartz , daniel k. riskin , and kenneth s. breuer .glide performance and aerodynamics of non - equilibrium glides in northern flying squirrels ( _ glaucomys sabrinus _ ) ., 10(80):10 pp . , 2013 .anush krishnan and l. a. barba .validation of the ` cuibm ` code for navier - stokes equations with immersed boundary methods . technical report on * figshare * , cc - by license , http://dx.doi.org/10.6084/m9.figshare.92789[doi:10.6084/m9.figshare.92789 ] , 6 july 2012 .anush krishnan , john j. socha , pavlos p. vlachos , and l. a. barba .lift and drag coefficient versus angle of attack for a flying snake cross - section .data set and figure on * figshare * under cc - by license , http://dx.doi.org/10.6084/m9.figshare.705883[10.6084/m9.figshare.705883 ] , may 2013 .anush krishnan and l. a. barba . 
flying snake wake visualizations with ` cuibm ` . video on * figshare * , cc - by license , http://dx.doi.org/10.6084/m9.figshare.157334[doi:10.6084/m9.figshare.157334 ] , february 2013 . anush krishnan , john j. socha , pavlos p. vlachos , and l. a. barba . time - averaged surface pressure on a flying - snake cross - section . data set and figure on * figshare * under cc - by license , http://dx.doi.org/10.6084/m9.figshare.705890[10.6084/m9.figshare.705890 ] , may 2013 .
flying snakes use a unique method of aerial locomotion : they jump from tree branches , flatten their bodies and undulate through the air to produce a glide . the shape of their body cross - section during the glide plays an important role in generating lift . this paper presents a computational investigation of the aerodynamics of the cross - sectional shape . two - dimensional simulations of incompressible flow past the anatomically correct cross - section of the species _ chrysopelea paradisi _ show that a significant enhancement in lift appears at a particular angle of attack , for reynolds numbers of 2000 and above . previous experiments on physical models also obtained an increased lift at the same angle of attack . the flow is inherently three - dimensional in physical experiments , due to fluid instabilities , and it is thus intriguing that the enhanced lift also appears in the two - dimensional simulations . the simulations point to the lift enhancement arising from the early separation of the boundary layer on the dorsal surface of the snake profile , without stall . the separated shear layer rolls up and interacts with secondary vorticity in the near - wake , inducing the primary vortex to remain closer to the body and thus causing enhanced suction , resulting in higher lift .
the inherent connection between noise and disturbance is one of the most fundamental features of quantum measurements . on the one hand, a measurement can not give any information without disturbing the object system .on the other hand , a noisier ( less informative ) measurement can be implemented with less disturbance than a sharper measurement . roughly speaking ,more noise means that measurement outcome distributions become broader , while disturbance is reflected in the measurement outcome statistics of subsequent measurements . in the most extreme case , the disturbance inherent in a measurementmakes all subsequent measurements useless as far as the original input state is concerned .various trade - off inequalities between noise ( or information ) and disturbance are known , all depending on different quantification of these notions , see e.g. .all these trade - off inequalities are revealing different aspects of the interplay between noise and disturbance in quantum measurements . in this work we present a relation between certain important forms of noise and disturbance which is qualitative in nature and not based on any specific quantifications of noise and disturbance .our result is a structural connection between observables and channels .more precisely , we show that a certain partial order in the set of equivalence classes of quantum observables ( positive operator valued measures ) corresponds to an inclusion of the related subsets of quantum channels ( trace preserving completely positive maps ) . as we will explain , this correspondence has a clear interpretation as a noise - disturbance relationship since it shows how the possible state transformations are limited to more noisy ones if the measurement is required to be more accurate . due to its simplicity and generality , we believe that our qualitative noise - disturbance relation can be seen as a common origin of many quantitative noise - disturbance inequalities . to give a preliminary idea on the coming developments , we recall two well - known special situations .( see e.g. for general results that cover these cases . ) first , let us consider a measurement in an orthonormal basis . if is an input state , then the measurement outcome probabilities are .the output state is a mixture , where are states that depend on the measurement device but not on the input state .hence , a measurement in an orthonormal basis is sharp but disturbs a lot .a completely different kind of measurement is such that we do nothing on the input state but we just throw a dice to produce measurement outcome probabilities .this measurement has maximum amount of noise , but it can be implemented without disturbing the input state at all .most of measurements belong to the intermediate area between the two previously described extreme cases .namely , they contain some additional noise and can be measured in a way that implies some disturbance .more noise should allow for a less disturbing measurement , and vice versa .it is exactly this kind of intuitive trade - off that we will turn into an exact theorem . in the rest of the paper a fixed hilbert space related to the input system .the dimension of can be either finite or countably infinite .we denote by the set of all bounded operators on .a quantum measurement produces measurement outcomes and conditional output states .the mapping from input states to measurement outcome statistics is called an observable , while the mapping from input states to unconditional output states ( i.e. 
average over conditional output states ) is called a channel .we will briefly recall some of the basic properties of observables and channels before proving our main results , theorem [ th : ideal ] and theorem [ thm : main ] .a quantum observable with finite or countably infinite number of outcomes is described by a mapping such that each is a positive operator ( i.e. for all ) and , where is the identity operator on .the labeling of measurement outcomes is not important for the questions that we will investigate , hence we assume that the outcome set of all our observables is .we denote by the set of all observables on .let us remark that it is possible that for some outcomes , hence e.g. observables with only a finite number of outcomes are included in by adding zero operators .for each observable , we denote by the set of all outcomes with . by a _ stochastic matrix _ we mean a real matrix ] . physically speaking, the equivalence class ] if and only if .( we use the same symbol for these two different relations , but this should not cause a confusion . ) it is easy to see that in the partially ordered set , there exists the least element but there is no greatest element .namely , an observable defined by , for is a representative of the least element since for every , the equality holds .the equivalence class ] , and a natural partial order is introduced by { \precsim}[\lambda_2] ] .thus ] is the probability of obtaining an outcome and the operator } ] is also an -channel .thus , a subset of is naturally introduced as |\ \lambda \mbox { is an } { \mathsf{a}}\mbox{-channel}\} ] , introduced in .the fact that belongs to for any observable relates to the possibility of performing a destructive measurement ; we can always measure , destroy the system and prepare a state .a less obvious and more interesting fact is that the partially ordered set contains the greatest element . to construct a channel belonging to the greatest element of ,let be a naimark dilation of ; is a hilbert space , is an isometry , and is a projection - valued measure ( pvm ) on satisfying for all .we define a channel by to see that is an -channel , we define an instrument by then and .although the construction of relies on the choice of the naimark dilation , the following arguments do not depend on this choice . from now on, we will always assume that a naimark dilation has been fixed for each observable , hence also is defined for each .[ th : ideal ] let be an observable .the set of all -channels consists of all channels that are below , i.e. , thus , has the greatest element ] is called a _ principal ideal _ , which is the minimal ideal containing ] ( big dot).,width=264 ] we have already seen that , hence we need to show that the inclusion holds in the other direction as well .let be an -channel .to prove that , we first fix a minimal stinespring dilation of .thus , is a hilbert space , is an isometry satisfying and the set is dense in .since is an -channel , we can apply the radon - nikodym theorem of cp - maps to conclude that there exists a unique observable on satisfying for all . 
for each , we define an operator by .then for any , we have since satisfies , by the polar decomposition theorem there exists an isometry satisfying and therefore we note that if , then the polar decomposition theorem states that is a partial isometry ( and not necessarily isometry ) .however , in our setting it is possible to extend the partial isometry to an isometric operator .this additional argument is given in the appendix .let be the naimark dilation of .the relationship implies that there exists an isometry satisfying again , the argument why is an isometry and not just a partial isometry is given in the appendix .inserting into gives finally , fix an arbitrary state on .we define ( { \mathbf{1}}-\sum_x \hat{{\mathsf{a}}}(x ) j_x j^*_x \hat{{\mathsf{a}}}(x ) ) .\end{aligned}\ ] ] then is a channel and \left ( \sum_x k^ * \hat{{\mathsf{a}}}(x ) k - \sum_x k^ * \hat{{\mathsf{a}}}(x ) j_x j_x^ * \hat{{\mathsf{a}}}(x ) k \right ) \\ & = & \lambda(c ) + \mbox{tr}[\rho c ] \left ( { \mathbf{1}}- \sum_x \sqrt{{\mathsf{a}}(x)}\sqrt{{\mathsf{a}}(x ) } \right ) = \lambda(c ) .\end{aligned}\ ] ] thus we obtain , implying that .let us emphasize that the existence of a least disturbing channel is generally guaranteed only if the output space is not fixed .this is a noteworthy difference to the analogous result on instruments . in that case ,a least disturbing instrument ( in the sense of conditional post processing ) exists even if we fix ; see e.g. theorem 7.2 in .suppose that and are two observables satisfying .this means that every -channel is also -channel , so even without any quantification of noise we can conclude that it is possible to measure with less or equal disturbance than generated in any measurement of . in other words , the unavoidable disturbance related to is smaller than or equal to the unavoidable disturbance related to .this qualitative description of disturbance will be the basis of the forthcoming noise - disturbance relation .the following preliminary observation is easily extracted from our earlier discussion and theorem [ th : ideal ] .[ lemma : min ] let and be two observables .then if and only if .we are now ready to proceed to our second main result .[ thm : main](qualitative noise - disturbance relation ) let and be two observables. then if and only if .this result is illustrated in fig .[ fig : main ] .it is already intuitively clear that if an observable is noisier than , then it should be possible to measure in a less disturbing way .the purpose of theorem [ thm : main ] is to sharpen and clarify certain aspects of this intuitive idea .first of all , theorem [ thm : main ] shows that the fundamental trade - off between noise and disturbance is a structural feature of quantum theory that can be expressed even without any quantifications of these notions .perhaps the more surprising part of theorem [ thm : main ] is that the inclusion implies the smearing relation . in particular ,if two observables and are compatible with exactly the same set of channels , i.e. , then and are equivalent and can thus differ only by some physically irrelevant ways .therefore , the set of all -channels characterizes the observable essentially . 
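To make the smearing relation and the "only if" direction of the qualitative noise-disturbance relation concrete, here is a minimal numerical sketch (Python with NumPy). All names are ours, the convention that the columns of the stochastic matrix sum to one is our reading of the garbled indexing above, and the Lüders-type instrument is our own choice of example rather than the construction used in the proofs: a sharp qubit observable B is smeared into A, the instrument for B is post-processed with the same stochastic matrix, and one checks that the induced total channel is unchanged while the outcome statistics reproduce A.

import numpy as np

# Sharp observable B: projective sigma_z measurement on a qubit.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Stochastic matrix nu (columns sum to one) and the smeared observable
# A(x) = sum_y nu[x, y] B(y).
eps = 0.1
nu = np.array([[1 - eps, eps],
               [eps, 1 - eps]])
A = [sum(nu[x, y] * P[y] for y in range(2)) for x in range(2)]
assert np.allclose(sum(A), np.eye(2))                            # effects sum to identity
assert all(np.all(np.linalg.eigvalsh(E) >= -1e-12) for E in A)   # positivity

# Lueders instrument for B and its post-processing into an instrument for A:
# I^A_x(rho) = sum_y nu[x, y] I^B_y(rho).
def I_B(y, rho):
    return P[y] @ rho @ P[y]

def I_A(x, rho):
    return sum(nu[x, y] * I_B(y, rho) for y in range(2))

rho = np.array([[0.7, 0.3],
                [0.3, 0.3]])        # some input state (positive, trace one)

# The two total channels coincide, so this way of measuring A disturbs exactly
# as little as the chosen measurement of B ...
assert np.allclose(sum(I_A(x, rho) for x in range(2)),
                   sum(I_B(y, rho) for y in range(2)))

# ... and its outcome probabilities are those of A: tr(I^A_x(rho)) = tr(rho A(x)).
for x in range(2):
    assert np.isclose(np.trace(I_A(x, rho)), np.trace(rho @ A[x]))

In this toy case the smeared observable A inherits every channel compatible with B, which is precisely the inclusion of channel sets stated in the theorem.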
in some situations, the smearing relation can be seen as too restrictive characterization of noise .for instance , we may try to use as an approximate version of even if does not hold .theorem [ thm : main ] then implies that the associated sets of channels are not anymore in an inclusion relation .this should not be understood in the sense that the smearing relation is the only reasonable way to characterize noise , but that it determines the setting where the related disturbances are indisputably ordered , no matter on the quantification .a consideration on some more specific class of measurements may well justify another kind of comparison of observables and channels .illustration of theorem [ thm : main ] : the smearing relation of two observables ( left ) holds if and only if the associated sets of channels are ordered by inclusion ( right).,width=302 ] the _ only if_-part : suppose that , hence there exists a stochastic matrix such that .let be a -channel , meaning that there exists an instrument such that we define an instrument by the formula .then it is easy to see and .therefore , is an -channel . since was an arbitrary -channel, we conclude that .the _ if_-part : by lemma [ lemma : min ] we have .a stinespring representation of is given by an isometry , where is a hilbert space with the dimension equal to the cardinality of and is an orthonormal basis of .since is compatible with , then it follows from the radon - nikodym theorem of cp - maps that there exists an observable acting on such that for all .( in case the stinespring representation is not minimal , the uniqueness of drops . ) thus we obtain for any , where we used . as is a stochastic matrix ,we conclude that . as a direct consequence of theorem [ th : ideal ] and theorem [ thm : main ] we record the following link between the preorderings on observables and channels .this is , again , one manifestation of the trade - off between noise and disturbance .let and be two observables . then if and only if their respective least disturbing channels and satisfy . finally , we note that our results can be applied to any measure of disturbance on the set of channels that satisfies the natural requirement for all channels and .namely , theorem [ th : ideal ] implies that any -channel satisfies .this enables us to derive a lower bound for the disturbance since has a quite simple form .for instance , a very natural disturbance measure was defined in as where the infimum is taken over all channels and is the completely bounded norm .the function quantifies the quality of the best available decoding channel for , and is easily shown to satisfy .it was proved in that is bounded by the distance between conjugate channel and completely depolarizing channels . by using this result, we can show the following .[ thm : ksw ] let and be two observables .* if , then there exists an -channel that can be decoded with better or equal quality than any -channel in the sense that for all -channels .* every -channel satisfies where is the operator norm on .the right hand side of is related to one of the functions characterizing sharpness and bias of quantum effects , namely , the quantity is the width of the spectrum of .it follows that the right hand side of is zero if and only if is a coin tossing observable , expressing the fact that no disturbance implies no information. in the other extreme case , the quantity takes the maximal value if and only if the spectrum of contains both and ( * ? ? ?2 ) . 
for instance , if contains a non - trivial projection ( i.e. and ) , then theorem [ thm : ksw ] gives for all -channels .this is a lower bound on the quality of the best available decoding channel for any -channel .* we choose and then the claim is a direct consequence of theorems [ th : ideal ] and [ thm : main ] . *let be a channel compatible with .as was explained above , we have thus , in the following we estimate and this will lead to a lower bound for .the channel has a stinespring representation , where ( may be infinity ) and is defined by where is an orthonormal basis of .its conjugate channel is + let us denote the completely depolarizing channel with respect to a state on by , i.e. , {\mathbf{1}} ] and a unitary channel defined in , there exists a channel such that if and only if .this is in line what we would expect ; the sharper the measurement , the smaller must the weight of the identity channel be . in this example , it is not too difficult to find the concrete form of a channel satisfying .namely , for all ] .this latter condition implies that ^{\perp}} ] . note that ^{\perp } \supseteq ker[{\mathsf{a}}(x)] ] and write it as .it satisfies } ] holds , we have and .thus we can define an isometry on the whole space .consequently we have obtained an isometry satisfying .second , we show that the operator in can be chosen to be an isometry .the relationship implies that there exists a partial isometry satisfying and =ker[{\mathsf{a}}(x)] ] .we denote by the restriction of to $ ] .then is an isometry satisfying .th acknowledges the financial support from the academy of finland ( grant no .tm thanks jsps for the financial support ( jsps kakenhi grant numbers 22740078 ) .
The inherent connection between noise and disturbance is one of the most fundamental features of quantum measurements. In the two well-known extreme cases, a measurement either causes no disturbance but is then totally noisy, or it is as accurate as possible but then disturbs the system so much that all subsequent measurements become redundant. Most measurements lie between these two extremes. We derive a structural connection between certain order relations defined on observables and on channels, and we explain how this connection captures the trade-off between noise and disturbance. A link to a quantitative noise-disturbance relation is also demonstrated.
wsns are currently being considered for many applications ; including industrial , security surveillance , medical , environmental and weather monitoring . due to limited battery lifetime at each sensor node ; minimizing transmitter to increase energy efficiency and network lifetime is useful .sensor nodes consist of three parts ; sensing unit , processing unit and transceiver .limited battery requires low power sensing , processing and communication system .energy efficiency is of paramount interest and optimal wsn should consume minimum amount of power . in wsns, sensor nodes are widely deployed in different environments to collect data .as sensor nodes usually operate on limited battery , so each sensor node communicate using a low power wireless link and link quality varies significantly due to environmental dynamics like temperature , humidity etc .therefore , while maintaining good link quality between sensor nodes we need to reduce energy consumption for data transmission to extend network lifetime , , .ieee802.15.4 is a standard used for low energy , low data rate applications like wsn .this standard operate at frequency 2.45 ghz with channels up to 16 and data rate 250 kbps . to efficiently compensate link quality changes due to temperature variations ,we propose a new scheme for control east , that improves network lifetime while achieving required reliability between sensor nodes .this scheme is based on combination of open - loop and closed - loop feedback processes in which we divide network into three regions on basis of threshold on for each region . in open - loop process , each node estimates link quality using its temperature sensor .estimated link quality degradation is then effectively compensated using closed - loop feedback process by applying propose scheme . in closed - loop feedback process , appropriate transmission control is obtained which assign substantially less power than those required in existing transmission power control schemes .rest of the paper is organized as follows : section ii briefs the related existing work and motivation for this work . in section iii , we provide the readers with our proposed scheme . in section iv, we model our proposed scheme .experimental results have been given in section v.to transmit data efficiently over wireless channels in wsns , existing schemes set some minimum transmission for maintaining reliability .these schemes either decrease interference among sensor nodes or increase unnecessary energy consumption . in order to adjust transmission , reference node periodically broadcasts a beacon message . when nodes hear a beacon message from a reference node , nodes transmit an ack message . through this interaction , reference nodeestimate connectivity between nodes . in local mean algorithm ( lma ) ,a reference node broadcasts lifemsg message .nodes transmit lifeackmsg after they receive lifemsg .reference nodes count number of lifeackmsgs and transmission to maintain appropriate connectivity .for example , if number of lifeackmsgs is less than nodeminthresh ; transmission is increased .in contrast , if number of lifeackmsgs is more than nodemaxthresh transmission ; is decreased . as a result , they provide improvement of network lifetime in a sufficiently connected network . 
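As a concrete illustration of the LMA rule summarized above, the following sketch (Python) adjusts the transmit power level from the number of LifeAckMsg replies. The step size, the 0-31 power-level range typical of IEEE 802.15.4 radios, and all variable names are our assumptions, not values taken from the paper.

def lma_update(tx_power, n_acks, node_min_thresh, node_max_thresh,
               step=1, p_min=0, p_max=31):
    """One LMA-style adjustment of the reference node's transmit power level.

    tx_power  : current transmit power level
    n_acks    : number of LifeAckMsg replies received for the last LifeMsg
    node_min_thresh / node_max_thresh : desired bounds on responding neighbours
    """
    if n_acks < node_min_thresh:        # too few neighbours reachable
        return min(tx_power + step, p_max)
    if n_acks > node_max_thresh:        # more connectivity than needed
        return max(tx_power - step, p_min)
    return tx_power                     # connectivity is acceptable, keep level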
however , lma only guarantees connectivity between nodes and can not estimate link quality .local information no topology / local information link - state topology ( lint / lilt ) and dynamic transmission power control ( dtpc ) use to estimate transmitter .nodes exceeding threshold are regarded as neighbor nodes with reliable links .transmission also controlled by packet reception ratio ( prr ) metric . as for the neighbor selection method ,three different methods have been used in the literature : connectivity based , prr based and based . in lint / lilt , a node maintains a list of neighbors whose values are higher than the threshold , and it adjusts the radio transmission if number of neighbors is outside the predetermined bound . in lma / lmn , a node determines its range by counting how many other nodes acknowledged to the beacon message it has sent .adaptive transmission power control ( atpc ) adjusts transmission dynamically according to spatial and temporal effects .this scheme tries to adapt link quality that changes over time by using closed - loop feedback .however , in large - scale wsns , it is difficult to support scalability due to serious overhead required to adjust transmission of each link .the result of applying atpc is that every node knows the proper transmission to use for each of its neighbors , and every node maintains good link qualities with its neighbors by dynamically adjusting the transmission through on - demand feedback packets .uniquely , atpc adopts a feedback - based and pairwise transmission control . by collecting the link quality history ,atpc builds a model for each neighbor of the node .this model represents an in - situ correlation between transmission and link qualities .with such a model , atpc tunes the transmission according to monitored link quality changes .the changes of transmission reflect changes in the surrounding environment .existing approaches estimate variety of link quality indicators by periodically broadcasting a beacon message . in addition, feedback process is repeated for adaptively controlling transmission . in adapting link quality for environmental changes , where temperature variation occur ,packet overhead for transmission control should be minimized . reducing number of control packetswhile maintaining reliability is an important technical issue .radio communication quality between low power sensor devices is affected by spatial and temporal factors .the spatial factors include the surrounding environment , such as terrain and the distance between the transmitter and the receiver .temporal factors include surrounding environmental changes in general , such as weather conditions ( temperature ) .to establish an effective transmission control mechanism , we need to understand the dynamics between link quality and values .wireless link quality refers to the radio channel communication performance between a pair of nodes .prr is the most direct metric for link quality .however , the prr value can only be obtained statistically over a long period of time . 
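The feedback-based, pairwise idea behind ATPC can be sketched as follows (Python/NumPy). This is only our simplified reading of the mechanism described above, not the original algorithm: a per-neighbour linear model between transmit power level and measured RSSI is refitted from feedback packets and then inverted to pick the next power level for a target RSSI.

import numpy as np

class AtpcLikeLink:
    """Per-neighbour predictive controller in the spirit of ATPC (a sketch,
    with our own names and update rule): RSSI is modelled as a linear
    function of the transmit power level, and the model is refitted from
    feedback before choosing the next level."""

    def __init__(self, target_rssi_dbm):
        self.target = target_rssi_dbm
        self.history = []                       # (power_level, measured_rssi)

    def record_feedback(self, power_level, rssi_dbm):
        self.history.append((power_level, rssi_dbm))

    def next_power_level(self, p_min=0, p_max=31):
        if len(self.history) < 2:
            return p_max                        # be conservative until fitted
        p, r = np.array(self.history[-20:], dtype=float).T
        if np.unique(p).size < 2:               # cannot fit a slope yet
            return p_max
        slope, intercept = np.polyfit(p, r, 1)  # RSSI ~ slope * P + intercept
        if slope <= 0:                          # unphysical fit, fall back
            return p_max
        wanted = (self.target - intercept) / slope
        return int(np.clip(round(wanted), p_min, p_max))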
can be used effectively as binary link quality metrics for transmission control .radio irregularity results in radio signal strength variation in different directions , but the signal strength at any point within the radio transmission range has a detectable correlation with transmission power in a short time period .there are three main reasons for the fluctuation in the .first , fading causes signal strength variation at any specific distance .second , the background noise impairs the channel quality seriously when the radio signal is not significantly stronger than the noise signal .third , the radio hardware does nt provide strictly stable functionality . since the variation is small , this relation can be approximated by a linear curve .the correlation between and transmission is approximately linear .correlation between transmission and is largely influenced by environments , and this correlation changes over time . both the shape and the degree of variation depend on the environment .this correlation also dynamically fluctuates when the surrounding environmental conditions change .the fluctuation is continuous , and the changing speed depends on many factors , among which the degree of environmental variation is one of the main factors .propose energy efficient transmission scheme east helps efficiently compensate link quality changes due to temperature variation . to reduce packet overhead for adaptive power control temperature measured by sensors is utilized to adjust transmission for all three regions based on .compared to single region in which large control packets overhead occur even due to small change in link quality .closed - loop feedback process is executed to minimize control packets overhead and required transmitter .in this section , we present energy efficient transmission scheme that maintains link quality during temperature variation in wireless environment .it utilizes open - loop process based on sensed temperature information according to temperature variation . closed - loop feedback process based on control packetsis further used to accurately adjust transmission . by adopting both open - loop and closed - loop feedback processes we divide network into three regions a , b , c for high , medium and low respectively . in order to assign minimum and reachable transmission to each link east is designed .east has two phases that is initial and run - time . in initial phase referencenode build a model for nodes in network . in run - time phasebased on previous model east adapt the link quality to dynamically maintain each link with respect to time . in a relatively stable network ,control overhead occurs only in measuring link quality in initial phase .but in a relatively unstable network because link quality is continuously changing initial phase is repeated and serious overhead occur .before we present block diagram for proposed scheme some variables are defined as follows ( 1)current nodes in a region ( 2 ) desired nodes in a region ( 3 ) error : e(t ) = ,(4 ) .fig1 shows system block diagram of proposed scheme .prr , ack , and used to determine connectivity .ack estimates connectivity but it can not determine link quality . prr estimates connectivity accurately but it causes significant overhead . in our scheme , we use for connectivity estimation , which measures connectivity with relatively low overhead . power controller adjusts transmission by utilizing both number of current nodes and temperature sensed at each node . 
since power controller is operated not merely by comparing number of current nodes with desired nodes but by using temperature - compensated , so that it can reach to desired rapidly .if temperature is changing then temperature compensation is executed on basis of relationship between temperature and .network connectivity maintained with low overhead by reducing feedback process between nodes which is achieved due to logical division of network .transmission power loss due to temperature variation formulated using relationship between and temperature experimented in bannister et al .. mathematical expression for due to temperature variation is as follows : =0.1996*(t[c^{o}]-25[c^{o}])\ ] ] to compensate estimated from eq.(1 ) we have to control output of radio transmitter accordingly .relationship between required transmitter and is formulated by eq.(2 ) using least square approximation : ^{2.91}\ ] ] based on eqs ( 1 , 2 ) , we obtain appropriate to compensate due to temperature variation . to compensate path loss due to distance between each sensor node in wsn, free space model helps to estimate actual required transmitter power .after addition of due to temperature variation in eq.(3 ) , we estimate actual required transmitter power between each sensor node . for free space path loss model we need number of nodes in a network ( n ) , distance between each node ( d ) , ( ) depends upon ( ) , spectral efficiency ( ) , frequency ( ) and receiver noise figure ( ) : =[\eta*(e_{b}/n_{0})*mktb*(4\pi d /\lambda ) ^2+rnf]+rssi_{loss}\ ] ] parameters for propose scheme are,(1 ) threshold for each region .( 2 ) desired nodes in each region , ( 3 ) transmission power level for each region .threshold is minimum value required to maintain link reliability .reference node broadcasts beacon message periodically to nodes and wait for acks . if acks are received from nodes then is estimated for logical division of network , number of nodes with high considered in region a , medium considered in region b , and with low in region c. if ( threshold ) and ( ) then threshold transmitter assigned if for similar case ( ) then similar transmitter assigned and if ( threshold ) then by default keep same transmitter .given below is an algorithm for east . fig2 shows complete flow chart for reference node .node senses temperature by using locally installed sensor and checks if temperature change detected .if there is any temperature change , compensation process is executed on the basis of eqs ( 1 , 2 ) .nodes send an ack message including temperature change information with a newly calculated .apply ing this temperature - aware compensation scheme we can reduce overhead caused by conventional scheme in changing temperature environments .let suppose we have 100 nodes in a network that are randomly deployed represented as ( ) .nodes are placed at different locations in a square area of 100 * 100 m and distance ( ) between them is from 1 to 100 m .for given environment temperature ( ) can have values in range -10 53 i n. 
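The compensation described by Eqs. (1)-(3) can be sketched numerically. The numeric coefficients of Eq. (2) are lost in the extracted text above, so the sketch below (Python; the function names, the receiver-sensitivity link-budget form, and the default noise figure are our assumptions) combines the temperature term of Eq. (1) with a standard free-space (Friis) path-loss budget rather than reproducing the exact least-squares fit.

import math

def rssi_loss_db(temp_c):
    # Eq. (1): empirical RSSI loss due to temperature, referenced to 25 C
    # (coefficient 0.1996 from Bannister et al., as quoted above).
    return 0.1996 * (temp_c - 25.0)

def required_tx_power_dbm(rx_sensitivity_dbm, distance_m, temp_c,
                          rx_noise_figure_db=5.0, freq_hz=2.45e9):
    # Minimum transmit power such that the received power still exceeds the
    # receiver sensitivity after free-space path loss (Friis), the receiver
    # noise figure and the temperature-induced RSSI loss.
    wavelength = 3e8 / freq_hz
    fspl_db = 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)
    return (rx_sensitivity_dbm + fspl_db + rx_noise_figure_db
            + rssi_loss_db(temp_c))

# Example: a neighbour 40 m away while the node sits at 45 C.
print(round(required_tx_power_dbm(-95.0, 40.0, 45.0), 1), "dBm")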
+ due to the temperature variation can be formulated using the relation between and the temperature experimented in bannister et al .equation for the for the temperature variation is as follows : + =0.1996*(t_{i}[c^{o}]-25[c^{o}])\ ] ] relation between and is formulated by using a least square approximation : + ^{2.91}\ ] ] maximum , minimum and average value of for all nodes in network can be formulated as : + after finding maximum and minimum values of we will define upper and lower limit of to divide network into three regions and also set counter to count number of nodes in each region .let suppose we have set counter zero initially and then define upper and lower bound and check condition , nodes that follow this condition are considered to be in region a i n. + count=0 ; + =count+1 + given that i n ; + and + similarly we define upper and lower limits for region b and c and also check nodes that follow given conditions are said to be in region b and c respectively .+ count=0 ; + =count+1 + given that i n ; + and + count=0 ; + =count+1 + given that i n ; + and + to apply our proposed scheme we need to define threshold on for each region for energy efficient communication between sensor nodes .threshold on for each region depends upon of all nodes in a particular region and number of nodes in that region .threshold on for each region is defined as : + is also an important metric to measure link reliability . here are and number of nodes not present in region due to mobility and ( - ) are .it is defined as number of nodes present in a region at particular time to number of desired nodes in a region .similarly we can define for regions b and c. for all three regions is defined as given below : + here , and are packet reception ratio for regions a , b , c respectively . for each region on basis of propose scheme for given conditions like threshold and is formulated as : + given that i n : + and + given that i n : + and or + estimation of for new is formulated as i n : + ^{2.91}\ ] ] is defined as the difference between assigned before applying propose scheme and after applying propose scheme : + network life time can be enhanced by maximizing .aim of proposed scheme is to save maximum power with link reliability .objective function formulation for is defined i n : + constraints to save maximum power are given below n : + here and are number of nodes above and below threshold in each region respectively .in this section , we describe simulation results of proposed technique for energy efficient transmission in wsns .simulation parameters are ; rounds 1200 , temperature -10 - 53 , distance ( 1 - 100)m , nodes 100 , regions a , b , c , 0.0029 , snr 0.20db , bandwidth 83.5mhz , frequency 2.45ghz , rnf 5db , t 300k , 8.3db . in fig3we have shown values of meteorological temperature for one round that each sensor node have sensed .let suppose we have 100 nodes in 100 * 100 square region and temperature can have values in range ( -10 - 53) for given meteorological condition of pakistan .reference node is placed at edge of this region .different values of temperature for each sensor node based on meteorological condition helps to estimate .fig4 shows due to temperature variation in any environment using the relationship between and temperature given by bannister et al .high means that sensor node placed in region where temperature is high so link not have good quality . 
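A minimal sketch of the logical division into regions A, B and C follows (Python/NumPy). The paper only states that upper and lower limits are derived from the maximum and minimum RSSI loss and that per-region thresholds depend on the losses and node counts of the region; the equal-width bands and the mean-based threshold used here are therefore our simplifying assumptions.

import numpy as np

def split_into_regions(rssi_loss):
    # Regions: A = highest loss, B = medium, C = lowest, using three
    # equal-width bands between the minimum and maximum observed loss.
    lo, hi = float(rssi_loss.min()), float(rssi_loss.max())
    edges = np.linspace(lo, hi, 4)
    regions = {"A": [], "B": [], "C": []}
    for i, loss in enumerate(rssi_loss):
        if loss >= edges[2]:
            regions["A"].append(i)
        elif loss >= edges[1]:
            regions["B"].append(i)
        else:
            regions["C"].append(i)
    return regions

def region_thresholds(rssi_loss, regions):
    # Per-region threshold taken as the mean loss of the nodes in the region.
    return {name: float(np.mean(rssi_loss[idx]))
            for name, idx in regions.items() if idx}

rng = np.random.default_rng(0)
temps = rng.uniform(-10, 53, size=100)       # one temperature sample per node
loss = 0.1996 * (temps - 25.0)               # Eq. (1)
regions = split_into_regions(loss)
print({name: len(idx) for name, idx in regions.items()})
print(region_thresholds(loss, regions))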
for temperature ( -10 - 53) value in range ( -6dbm ) - ( 5dbm ) ..estimated parameters [ cols="^,^",options="header " , ] [ tab : addlabel ] from fig4 it is also clear that link quality and have inverse relation , when temperature is high has high value means low quality link and vise versa . after estimating for each node in wsn we compute corresponding transmitter to compensate .fig5 shows range of on y - axis for given that is between ( 20- 47 ) and also variation of required for sensor node with changing temperature that is at low temperature required is low and for high temperature required is high . as we have earlier estimated for each sensor node on the basis of given meteorological temperature that helps to estimate required to compensate .that power level only helps to compensate due to temperature variations . to compensate path loss due to distance between each sensor node in wsns, free space model helps to estimate actual required transmitter power .after addition of required due to temperature variation and distance , we estimate actual required between each sensor node .fig6 shows required including both due to temperature variation and free space path loss for different nodes .we clearly see from figure that lies between ( -175 - 90) and most of times it is above -120 . for regions a , b , and c ] in fig7 , we have shown using classical approach for three regions and in fig8 , for the proposed technique ; east .we can clearly see the difference between assigned .to show for each region , we take the difference between the assigned using east and classical technique , as can be seen in the figures 9 , 10 , 11 . as we know thatin classical approach , there is no concept of sub regions , so , for the sake of comparison with the proposed technique ; east , we have shown for different regions using classical approach . after estimating for nodes of each region , we have estimated required for nodes of each region that we clearly see in fig7 , in region a, lies between ( 40 - 45 ) , for region b ( 30 - 35 ) and for region c ( 20 - 25 ) .it means that for region a required high than both other region that also shows that for that region temperature and is large . for regionb required is between both region a and c and for c region required is less than both other two regions .we have earlier seen in fig7 for each region assigned using classical approach . after applying proposed techniquewe see what required for each region .we can clearly see difference between as shown in fig8 , that required decreases for each region and for region a it decreases maximum .fig9,10,11 respectively shows required for region a , b and c after implanting proposed technique . up to 2.3 for region a , 1.7 for b and 1.5 for c. fig12 describes the effect of reference node mobility on for region a. reference node move around boundaries of square region and nodes in a region considered to be static .when reference node is at center location ( 50 , 50 ) of network maximum nodes around reference node have large than threshold so we need to reduce to meet threshold requirement that cause maximum .we can clearly see maximum 12dbm to 20dbm for center location .when reference node move from center to one of the corner ( 0 , 0 ) of square region remains constant approximately around 1db , fact is that number of nodes near reference node region having same mean constant temperature and they need approximately same near threshold . 
for reference node movement from ( 0 , 0 ) to ( 0 , 100 ) fluctuate between -5dbm - 6dbm and at two moments we observe maximum because number of nodes near reference node have to increase their to meet threshold is minimum .movement of reference node from ( 0 , 100 ) to ( 100 , 100 ) causes between -4dbm - 12dbm and only one time peak .similarly when reference node move from ( 100 , 100 ) to ( 100 , 0 ) remains in limits between -4dbm- 7dbm and only one time maximum . from this figureit is also clear that for region a reference node location at center gives maximum that enhances network lifetime .we can also see variation of with respect to time that basically depends upon nodes near reference node have what if nodes have less then threshold then we have to increase that decrease and if nodes have large then threshold then we need to decrease that enhances .it is also clear from result that peak maximum and minimum comes at same time .similarly we can see for similar pattern of reference node mobility considering regions b and c. for region b in fig13 when reference node at center location ( 50 , 50 ) remains between 14dbm-20dbm , from center to ( 0 , 0 ) remains between 0 - 1dbm .when reference node moves from ( 0 , 0 ) to one of the corner of square region ( 0 , 100 ) fluctuate between 0 - 4dbm .reference node movement from ( 0 , 100 ) to ( 100 , 100 ) cause -1dbm-5dbm .reference node movement from ( 100 , 100 ) to ( 100 , 0 ) -4dbm-5dbm .this figure also indicates that for region b is maximum when reference node at center location . for reference node mobility from center to ( 0 , 0 ) remains constant due to constant near reference node region . for other reference node movements remains approximately constant due to less variations in . compared to region a where goes to peak maximum and minimum value in regionb remains on average approximately constant and less variation occurs , fact is that nodes in region b have approximately same near threshold . for reference node mobility in region c around square as shown in fig14 .when reference node is at center ( 50 , 50 ) fluctuates between 8dbm-50dbm . from center to edge ( 0 , 0 ) reference node mobility cause around 0dbm .when reference node move from corner of square ( 0 , 0 ) to corner ( 0 , 100 ) -5dbm-12dbm . similarly from ( 0 , 100 ) to ( 100 , 100 ) remains between -10dbm-18dbm .finally when reference node location changes from ( 100 , 100 ) to ( 100 , 0 ) goes to maximum value 60dbm that shows that nodes near reference node have large than threshold at that moment .this figure also elaborates that on average maximum for reference node location at center . compared to region b in this region peak maximum and minimum exists reasonis that nodes in this region have large than threshold at that moment .in this paper , we presented a new proposed technique east .it shows that temperature is one of most important factors impacting link quality .relationship between and temperature has been analyzed for our transmission power control scheme .proposed scheme uses open - loop control to compensate for changes of link quality according to temperature variation . 
by combining both open - loop temperature - aware compensation and close - loop feedback control , we can significantly reduce overhead of transmission power control in wsn , we further extended our scheme by dividing network into three regions on basis of threshold and assign to each node in three regions on the basis of current number of nodes and desired number of nodes , which helps to adapt according to link quality variation and increase network lifetime .we have also evaluate the performance of propose scheme for reference node mobility around square region that shows up to 60dbm .but in case of static reference node goes maximum to 2dbm . in future , firstly , we are interested to work on internet protocol ( ip ) based solutions in wsns . secondly , as sensors are usually deployed in potentially adverse environments , so , we will address the security challenges using the intrusion detection systems because they provide a necessary layer for the protection .i. akyildiz , w. su , y. sankarasubramaniam , and e. cayirci , `` wireless sensor networks : a survey , '' computer networks , vol .4 , pp . 393422 , 2002 . k. srinivasan , p. dutta , a. tavakoli , and p. levis , `` an empirical study of low - power wireless , '' acm transactions on sensor networks ( tosn ) , vol .k. lin , m. chen , s. zeadally , and j. j. rodrigues , `` balancing energy consumption with mobile agents in wireless sensor networks , '' future generation computer systems , vol .2 , pp . 446456 , 2012 .k. lin , j. j. rodrigues , h. ge , n. xiong , and x. liang , `` energy efficiency qos assurance routing in wireless multimedia sensor networks , '' systems journal , ieee , vol . 5 , no .495505 , 2011 .m. kubisch , h. karl , a.wolisz , l. zhong , and j. rabaey , `` distributed algorithms for transmission power control in wireless sensor networks , '' in wireless communications and networking , 2003 .wcnc 2003 .2003 ieee , vol .1 , pp . 558563 , ieee , 2003 . j. jeong , d. culler , and j. oh , `` empirical analysis of transmission power control algorithms for wireless sensor networks , '' in networked sensing systems , 2007 .fourth international conference on , pp .2734 , ieee , 2007 .s. lin , j. zhang , g. zhou , l. gu , j. stankovic , and t. he , `` atpc : adaptive transmission power control for wireless sensor networks , '' in proceedings of the 4th international conference on embedded networked sensor systems , pp .223236 , acm , 2006 . m. meghji and d. habibi , `` transmission power control in multihop wireless sensor networks , '' in ubiquitous and future networks ( icufn ) , 2011 third international conference on , pp .2530 , ieee , 2011 .f. lavratti , a. ceratti , d. prestes , a. pinto , l. bolzani , f. vargas , c. montez , f. hernandez , e. gatti , and c. silva , `` a transmission power self - optimization technique for wireless sensor networks , '' isrn communications and networking , vol .2012 , p. 1 , 2012 .v. g. douros and g. c. polyzos , `` review of some fundamental approaches for power control in wireless networks , '' computer communications , vol . 34 , no . 13 , pp . 15801592 , 2011 .x. cui , x. zhang , and y. shang , `` energy - saving strategies of wireless sensor networks , '' in microwave , antenna , propagation and emc technologies for wireless communications , 2007 international symposium on , pp. 178181 , ieee , 2007 .k. bannister , g. giorgetti , and s. 
gupta , `` wireless sensor networking for hot applications : effects of temperature on signal strength , data collection and localization , '' in proceedings of the fifth workshop on embedded networked sensors ( hotemnets 08 ) , citeseer , 2008 . s. cheema , g. rasul , and d. kazmi , `` evaluation of projected minimum temperatures for northern pakistan , '' 2010 .j. j. p. c. rodrigues and p. a. c. s. neves , `` a survey on ip - based wireless sensor networks solutions , '' communication systems , vol .23 , no . 8 , pp . 963981 , 2010 .g. han , j. jiang , w. shen , l. shu , and j. j. p. c. rodrigues , `` idsep : a novel intrusion detection scheme based on energy prediction in cluster - based wireless sensor networks , '' iet information security .
One of the major challenges in the design of wireless sensor networks (WSNs) is to reduce the energy consumption of sensor nodes in order to prolong the lifetime of their finite-capacity batteries. In this paper, we propose an energy-efficient adaptive scheme for transmission (EAST) in WSNs that is compliant with the IEEE 802.15.4 standard. In this scheme, an open-loop process is used for temperature-aware link quality estimation and compensation, whereas a closed-loop feedback process divides the network into three logical regions to minimize the overhead of control packets. A threshold on the transmitter power loss and the current number of nodes in each region are used to adapt the transmit power level to link quality changes caused by temperature variation. The proposed scheme is evaluated with mobile sensor nodes and with a reference node that is either static or mobile. Simulation results show that the proposed scheme effectively adapts the transmission power to changing link quality, with less control packet overhead and energy consumption than the classical single-region approach in which the maximum transmitter power is assigned to compensate for temperature variation. Keywords: IEEE 802.15.4; link quality; transmitter power; temperature; WSNs; reference node; control packets.
Recently, hierarchical systems have been attracting the attention of scientists working on complex networks. In fact, many real networks are hierarchically organized, e.g. the WWW network, the actor network, or the semantic web. Dynamics on such networks can be qualitatively and quantitatively different from that on regular lattices (see ). The Ising model on a network with a hierarchical topology was studied by Komosa and Hołyst. The analyzed parameters were, among others, the magnetization, the magnetic susceptibility, the critical temperature and the correlations of magnetization between different hierarchies. It was shown that the critical temperature is a power function of the network size and of the ratio , where stands for a node degree. Opinion formation in hierarchical organizations was studied by Laguna et al. Agents belonging to various authority strata try to influence the opinions of others. The probability that the opinion of an agent of a certain authority prevails in the community depends on the size distribution of the authority strata. Phase diagrams can be obtained, where each phase corresponds to a distinct dominant stratum (or a sequence of strata, with decreasing probability of prevailing). Fashion phenomena on hierarchical networks were studied by Galam and Vignes. Interactions were imposed between social groups at different levels of the hierarchy. A renormalization group approach was used to find the optimal investment level of the producer and to assess the influence of counterfeits on the probability of success of a new product. One of the fundamental topics in social dynamics is conflict situations, and many different sociophysics approaches or _prisoner's dilemma_-type games have been proposed. Recently, a simple model of community isolation was introduced by Sienkiewicz and Hołyst. The model can describe issues as varied as strategy on battlefields or the formation of cultures. The idea behind the model is similar to the game of Go, and it takes into account the natural leaning of people to avoid being surrounded by members of another (potentially hostile) community.

In this paper we extend the model of community isolation, studied previously for chains, hypercubic, random and scale-free networks, to the hierarchical networks proposed in . The model of hierarchical networks was proposed by Ravasz and Barabási and modified by Suchecki and Hołyst. Such networks possess three parameters determining their structure:
* the degree of hierarchy ,
* the distribution , where , determining the number of nodes at each level of hierarchy (in particular, the size of the cliques at the lowest level of hierarchy is ),
* the parameter determining the density of edges .

The function used in the analytical approximation should be an increasing function of which, while not being too complicated, gives a reasonable approximation over the widest possible ranges of and . It turned out that in the case of the P1 model, choosing results in good agreement of the function with the simulation data; for the PD model, is a good choice.

[Figures: results for various networks of the P1 model and of the PD model; symbols correspond to data from computer simulations, lines show the analytical approximations (left panels: linear scale, right panels: log-log scale).]
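Before turning to the results, a toy generator in the spirit of the modular hierarchical networks described above may help fix ideas. The sketch below (Python) is not the exact Ravasz-Barabási/Suchecki-Hołyst construction: it simply keeps full cliques of size m at the lowest level and connects pairs of nodes whose lowest common block lies at level h > 1 with a probability that decays with h, governed by a single density parameter p; this decay rule and all parameter values are our assumptions.

import itertools
import random

def modular_hierarchical_graph(m, levels, p, rng):
    # m**levels nodes; nodes in the same lowest-level block form a clique,
    # and a pair whose lowest common block sits at level h > 1 is linked
    # independently with probability p**(h - 1).
    n = m ** levels
    edges = set()
    for u, v in itertools.combinations(range(n), 2):
        h = next(k for k in range(1, levels + 1) if u // m**k == v // m**k)
        prob = 1.0 if h == 1 else p ** (h - 1)
        if rng.random() < prob:
            edges.add((u, v))
    return edges

g = modular_hierarchical_graph(m=4, levels=3, p=0.3, rng=random.Random(0))
print(len(g), "edges among", 4 ** 3, "nodes")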
As was previously mentioned, in the case the network consists of isolated cliques of nodes. In order to find the distribution of the critical time (the time when the first blocked cluster appears), one has to consider the probability that at time there are no blocked nodes yet. This means that at time the only completely filled cliques are those filled with members of a single community, which leads to the formula where . The cumulative critical time distribution can then be obtained immediately, as well as the critical time distribution in the approximation of continuous time: . The mean critical time can also be calculated analytically: where (Euler beta function) and (incomplete Euler beta function). For networks of hierarchy , for networks with hierarchy , and for networks with an arbitrary degree of hierarchy , a recursive formula for the cumulative critical time distribution can be expressed as . The mean critical time can then be obtained by numerical integration of .

In all cases the function , defined as the average number of blocked nodes at time , can be approximated with high accuracy by a power function ; the exponent depends on the parameters of the network. For -dimensional hypercubic networks (including the one-dimensional ones, i.e. chains) . For modular hierarchical networks, depends mainly on the and parameters, i.e. on the sizes of the basic cliques at the lowest hierarchy level and on the density of inter-clique connections. The dependence on the degree of hierarchy (and on the network size) is weak, which can be explained by the fact that increasing the degree of hierarchy is a process similar to rescaling the system; therefore, for , the parameter can be found analytically: . The result is in agreement with the simulated data. Increasing the density of connections (the parameter ) leads to an increase of up to approximately for .

There is an important distinction in the way was approximated for hypercubic and for hierarchical networks. For hypercubic networks, the number of isolated nodes was calculated using the following approximation: all blocked nodes were blocked alone, i.e. they do not neighbor other blocked nodes (of the same community). Although this approximation might seem coarse, the resulting analytical predictions turned out to be in quite good agreement with the simulated data. For modular hierarchical networks, such an approximation would not be reasonable.
because of the fact that at the lowest level of hierarchy such networks consist of cliques of nodes , the most probable are situations when nodes are simultaneously blocked .the second analyzed parameter was critical time , i.e. the moment , when the first isolated cluster appears .it is a random variable .the critical time distribution was studied , as well as mean critical time .more precisely , _ a critical density _ ( or _ a critical relative time _ ) was often shown so networks with different parameters could be easily compared . for it was possible to find the analytical formula for both and .the distribution is a polynomial of degree ( see eq . [ eqn : tcdist_p0 ] ) and the average is a scaled difference of two euler beta functions ( see eq .[ eqn : avgtc_p0 ] ) . the average decreases with and for a fixed it reaches a minimum for ( see fig .[ fig : mean_tc ] ) .for the distribution reaches a constant , non - zero value for $ ] ( for , ) , which means that processes when blocked clusters firstly appear at the very end of the evolution are not unlikely .the values of can be compared with those obtained for hypercubic networks .similar trends can be observed in hypercubic and hierarchical networks : decreases with the network size and increases with the average degree .however , for modular hierarchical networks the dependence of on the average degree ( which equals for and rises with ) is very weak in comparison to hypercubic networks .typical values of for hierarchical networks correspond to the ones obtained for two- or three - dimensional networks , even for .
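As a closing illustration of the isolated-cliques case discussed above, the following Monte Carlo sketch (Python) estimates the mean critical density directly. The clique size, the number of cliques, the random assignment of the two communities, and our reading of the blocking rule (a completely filled clique containing members of both communities has a surrounded, hence blocked, minority cluster) are all assumptions made for illustration and are not taken from the paper.

import random

def critical_time_isolated_cliques(n_cliques, m, rng):
    # Disjoint cliques of m nodes; members of two communities are placed one
    # by one on randomly chosen empty nodes, each new member belonging to a
    # randomly chosen community.  The critical time is the step at which the
    # first completely filled, mixed clique appears.
    sites = [c for c in range(n_cliques) for _ in range(m)]
    rng.shuffle(sites)
    counts = [[0, 0] for _ in range(n_cliques)]
    for t, clique in enumerate(sites, start=1):
        counts[clique][rng.randrange(2)] += 1
        a, b = counts[clique]
        if a + b == m and a > 0 and b > 0:       # filled and mixed -> blocked
            return t
    return n_cliques * m                         # no blocked cluster appeared

rng = random.Random(0)
n_cliques, m = 64, 4
samples = [critical_time_isolated_cliques(n_cliques, m, rng) for _ in range(2000)]
print("mean critical density:", sum(samples) / len(samples) / (n_cliques * m))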
The model of community isolation is extended to the case in which individuals are randomly placed at the nodes of hierarchical modular networks. It is shown that the average number of blocked nodes (individuals) grows in time as a power function, with an exponent that depends on the network parameters. The distribution of the time at which the first isolated cluster appears is unimodal and non-Gaussian. The developed analytical approach is in good agreement with the simulation data.
multifractional brownian motion ( mbm ) is considered as one of the most natural extensions of fractional brownian motion ( fbm ) . nowadays applications of mbm are numerous and growing .similar to fbm , mbm has been used in such diverse areas as geology , image analysis , signal processing , traffic networks and mathematical finance .for instance , we refer to lvy - vhel ( 1995 ) , bertrand et al .( 2012 ) and bianchi et al .( 2013 ) . in this brief introduction, we focus on applications to mathematical finance , which we know best . since generally neither fbm nor mbm are semi - martingales , rogers ( 1997 ) pointed out that there would be arbitrage in a market where stocks are modelled by fbm . however , cheridito ( 2003 ) showed that , if one relaxes the definition of arbitrage , fbm is an excellent candidate to model long - term memory in stock markets .bayraktar et al .( 2006 ) obtained fbm as the limit of the stock price in an agent - based model where investors display inertia .moreover , unlike stock prices , several processes , like stochastic volatility , exchange rates , or short interest rates do not need to be semi - martingales for a mathematical model to be arbitrage - free in a strict sense .for each of these processes , there is empirical evidence of long - term memory .we refer to corlay et al .( 2014 ) for stochastic volatility , xiao et al .( 2010 ) for exchange rates , and ohashi ( 2009 ) for interest rates . making the hurst parameter time - dependentallows to model different regimes of the stochastic process of interest .for example , in times of financial crisis , asset volatility rises significantly .likewise , empirical evidence shows that there has been periods of different volatility in either exchange rates or interest rates .this phenomena motivates one to introduce mbm into finance , since unlike fbm , the local regularity of volatilities driven by mbm allows to change via different periods .let denote an mbm with hurst function .we consider a general model with , ) ] through its harmonizable representation ( see benassi et al .1997 ) : for ] , where : : is an mbm defined in ( [ mbm ] ) .assume that its hurst functional parameter belongs to ) ] and almost everywhere . : : suppose that a discrete trajectory of : is observed for some large enough . our major goal is to evaluate the functional parameter . as in coeurjolly ( 2005 , 2006 ) , we introduce a pointwise estimation method , namely , the function is estimated pointwisely for any .once the time is fixed , the problem becomes a parametric estimation problem .peng ( 2011a ) studied this problem when is some stationary increment process ( with ) , where the optimal convergence rate is obtained by using the observations .however , when varies via time , the estimation of only relies on the sample size of the observed data in the neighborhood of and this neighborhood s radius convergence speed .hence the convergence speed of the corresponding estimator would be reasonably slower than ( see e.g. coeurjolly ( 2005 , 2006 ) for a particular case when is mbm ) . 
in this work , heuristically speaking , since the sample size of the neighborhood data of for estimating each is about , it is then believed that a good estimator should have its convergence rate near .subject to this statistical setting , we try to get the `` optimal '' rate of convergence estimator of by using wavelet basis .let the integer and let us pick any mother wavelet ) ] .we remark that the latter inequality still holds when and }h(t)<3/4 ] '' , `` {\mathbb p} ] '' to respectively denote convergence -almost surely , convergence in probability and convergence in law .it is also useful to briefly introduce the steps which lead to the construction of the estimators : * step 1 : * : : identification of starting from the observations . in order to estimate , it suffices to make an identification of given in ( [ d ] ) .such an identification can be naturally obtained by discretization of integrals to the sum as by lvy s modulus of continuity theorem for mbm ( see e.g. theorem 1.7 in benassi et al .1997 ) , ) ] with , then , the following important relations hold ( the proofs are given in the appendix ) : which show that is a good estimator of .next we set and show that satisfies please see the appendix for the proof of ( [ widev ] ) .* step 2 : * : : identification of starting from .we use similar computations appeared in peng ( 2011a ) .the main result is the following ( see proposition [ varvx ] for a more explicit formula ) : where is a constant which does not depend on ; denotes the cardinal of .it is observed that therefore subject to feasible choices of the sequence , the following convergence can accordingly take place in probability or almost surely : where is some constant not depending on .* step 3 : * : : identification of starting from . + under assumption * ( a1 ) * ( resp . * ( a1)-(a2 ) * ) , one has the following relation of equivalence ( see ( [ as1 ] ) , ( [ taylor2 ] ) , ( [ diffv_p ] ) and ( [ diffv_as ] ) ) : for , in probability ( resp . almost surely ) . as a consequence , is a consistent estimator of .we remark that the speed of convergence relies on the choice of .more details on the choice of will be discussed in theorem [ vydn ] .in this part we provide a sharp estimation of the covariance structure of wavelet coefficients of . for , define the wavelet coefficient of by for all - 1 \right\}, ] being the integer part function .+ the following proposition provides a fine identification of s covariance structure .[ prop1 ] let satisfy that , there exist two constants such that . ] .let - 1\} ] .+ if and }|\frac{at - bs}{ak - bk'}|\leq1 a , b\rightarrow0 ] is defined as and where ) ] , with .we let satisfy assumption * ( a1 ) * , then ( b ) : : let satisfy assumptions * ( a1)-(a2 ) * , then for any arbitrarily small .( c ) : : under assumptions * ( a2)-(a4 ) * and , {dist}\mathcal n\big(0,\tilde{c}(t_0)\big).\ ] ] * proof . * in order to prove ( [ vydn2 ] ) and ( [ vydn3 ] ) , we rely on the following relation : under assumptions * ( a1)-(a2 ) * , this is because , by using markov s inequality , cauchy - schwarz inequality , ( [ widev ] ) , ( [ ivy3 ] ) and the dominated convergence theorem , similarly to ( [ ediffv ] ) , we also obtain for any arbitrarily small , the fact that implies , then by borel - cantelli s lemma , therefore , ( [ vydn2 ] ) ( resp . ( [ vydn3 ] ) ) follows from the following 2 decompositions : and equations ( [ vyd2 ] ) , ( [ diffv_p ] ) ( resp . ( [ vyd3 ] ) , ( [ diffv_as ] ) ) . 
for showing ( [ cltyn1 ] ) , we only need to show {\mathbb p}0.\ ] ] by using the same idea we took to prove ( [ diffh1 ] ) , we just need to verify {\mathbb p}0.\ ] ] this is true since , according to ( [ ediffv ] ) and the fact that ( by assumption * ( a4 ) * ) , the left - hand side of the above term can be bounded in probability by the fact that entails .consequently , {\mathbb p}0,\ ] ] hence ( [ cltyn1 ] ) holds . in theorem [ vydn ] ,the choices of and depend on the target parameter .this is unacceptable from practical point of view . to overcome this inconvenience, we make assumptions * ( a1 ) , ( a4 ) * stronger so that the values of and do nt rely on : suppose the lower bound }h(t) ] are known . * ( a1 ) * : : with ; * ( a4 ) * : : with and . without great effort we could see * ( a1 ) * implies * ( a1 ) * and * ( a4 ) * implies * ( a4 ) * for all .then from theorem [ vyd ] and theorem [ vydn ] , we easily derive the following results : [ cor2 ] ( a ) : : let satisfy assumption * ( a1 ) * , then ( b ) : : if satisfies assumption * ( a1 ) * , then it also satisfies * ( a2)*. as a result , for arbitrarily small . ( c ) : : under assumptions * ( a2 ) , ( a3 ) , ( a4 ) * , {dist}\mathcal n\big(0,\tilde{c}(t_0)\big).\ ] ] [ cor1 ] ( a ) : : set ] , where * } ] ; * ) ] with .let then , ( a ) : : under assumption , {\mathbb p}\theta(t_0)^2.\ ] ] ( b ) : : under assumptions and * ( a2 ) * , {a.s.}\theta(t_0)^2.\ ] ] * proof of proposition [ prop : application ] .* we only show ( [ theta1 ] ) holds , since the way to obtain ( [ theta1 ] ) is similar . under assumptions * ( a1 ) * and * ( a2 ) * , on one hand , it follows from ( [ diffv_as ] ) and ( [ ivy3 ] ) that {a.s.}2c_2(t_0).\ ] ] on the other hand , by using ( [ c2 ] ) and the fact that , we see since the functions and are both continuous over , therefore by combining ( [ ptheta1 ] ) , ( [ diffhath ] ) and ( [ vydn3 ] ) , we obtain proposition [ prop : application ] . and surgailis ( 2013 ) established a pseudo - generalized least squares version of the localized increment ratio estimator and quadratic estimator of , denoted by and respectively .this work is so far the most achieved one on estimation of the mbm s pointwise hlder exponent . in this sectionwe compare our model and approach to bardet and surgailis ( 2013 ) s and illustrate some simulation results .the main differences between our statistical setting and bardet and surgailis ( 2013 ) s are : 1 .bardet and surgailis ( 2013 ) considers observation of a multifractional gaussian process which has asymptotic self - similarity and `` tangent to '' an fbm with hurst parameter and scaled by at each time .this setting covers ours when .however , our model is different when is some other function . 2 .the estimators and are obtained in terms of observations of discrete sample path of the process .we have established estimators based on observations of discrete sample path of and wavelet coefficients of the process ( see in theorem [ vyd ] ) , respectively .the latter result is the main contribution of this paper . for more details on wavelet - based statisticswe refer to delbeke and van ( 1995 ) , abry et al .( 2002 ) , abry and conalvs ( 1997 ) .3 . 
the estimators and apply to all ) ] and + : _ * initialize all the parameters : * _ + : ; ; + : _ * establish a set of indices corresponding to the neighbors of : * _ + : ] , with and .we also choose to be haar wavelet s mother wavelet function : }(x) ] and + : _ * initialize all the parameters : * _ + : ; ; ] ; + : _ * estimate the wavelet coefficients : * _ + : * for in , do : * + : ; + : * end for * + : _ * estimate `` partial '' sum of squares of the wavelet coefficients : * _ + : ; + : _ * compute estimate of : * _ + : ; + : * output : * + line 3 shows the procedure to set the parameters , , and . in line 8 , each item can be approximated by , when is large .hence from lines 7 - 9 of the algorithm we see that the algorithm requires at least operations .this fact together with the discussion on corollary [ cor1 ] reveals that the choice of has no impact on the computational cost , but only on the precision of the estimation .next we provide the empirical mean , the standard deviation and the quantile - quantile plots ( qq plots ) through a simulation study , where all the codes in matlab are available from authors upon request .we let , then choose 4 different types of and different hurst functional parameters , as follows : since for the above three , we then choose , , .we assume a single discrete trajectory : is available , and denote to be the estimate of : ( ) for short . by generating each estimator times , we present the corresponding empirical mean ( ) and standard deviation ( ) in the following table ..mean and std of the estimates with and . [ cols= " < , < , < , < , < , < , < " , ] note that in the function _variair(eta , n , tot ) _ , the mbm was previously generated using the choleski decomposition of the covariance matrix , so that the sample size is limited to . however here we have generated the mbm using wood chan circulant matrix , some krigging and a prequantification ( see chan and wood 1998 ; barrire 2007 ) , the sample size can be thus taken as .the empirical comparison shows no significant difference between the performances of and , except that estimator has less variance .below we compare the probability distributions of versus , by displaying qq plots ( see fig .1 ) . the first qq plots show whether and come from the same distribution for , , and ; the last one illustrates the asymptotically normal behavior of our estimator for and . versus for and versus standard normal for .] the qq plots show that the probability distribution of our estimator for each are close to that of ir2 estimator when the observed signal process is mbm and it is asymptotically normally distributed . through the above simulation study we conclude that there is no significant difference among ir2 estimator provided in bardet and surgailis ( 2013 ) and our wavelet - based estimator . 
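to make the wavelet-based estimator compared above concrete, the following is a minimal numerical sketch of a localized haar-wavelet estimator of the pointwise hölder exponent. it follows the same steps as the algorithm listed earlier (fix a neighbourhood of t0, compute haar wavelet coefficients by riemann sums, form a localized sum of squared coefficients across scales, and read off the exponent), but the exact normalizations, the neighbourhood sequence of the paper and its bias corrections are not reproduced here; the scaling assumption E[d_{j,k}^2] ≈ c·2^{-j(2h(t0)+1)}, the fixed window and all parameter values below are illustrative. the demonstration path is standard brownian motion, i.e. h(t) ≡ 1/2, so the estimate should come out near 0.5.

```python
import numpy as np

def haar_coefficients(x, t, j):
    """Approximate d_{j,k} = 2^{j/2} * int x(s) psi(2^j s - k) ds by Riemann sums,
    with psi the Haar mother wavelet (+1 on [0, 1/2), -1 on [1/2, 1))."""
    dt = t[1] - t[0]
    d = {}
    for k in range(2 ** j):
        lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
        left = (t >= lo) & (t < mid)
        right = (t >= mid) & (t < hi)
        d[k] = 2 ** (j / 2) * (x[left].sum() - x[right].sum()) * dt
    return d

def estimate_holder(x, t, t0, scales=(5, 6, 7, 8), window=0.15):
    """Localized log2-regression of wavelet energies across scales, assuming
    E[d_{j,k}^2] ~ c * 2**(-j*(2*H(t0)+1)) for coefficients supported near t0."""
    js, log_s = [], []
    for j in scales:
        d = haar_coefficients(x, t, j)
        local = [d[k] ** 2 for k in d if abs((k + 0.5) / 2 ** j - t0) <= window]
        if len(local) >= 2:
            js.append(j)
            log_s.append(np.log2(np.mean(local)))
    slope, _ = np.polyfit(js, log_s, 1)   # slope is approximately -(2*H(t0) + 1)
    return -(slope + 1.0) / 2.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 2 ** 14
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    # standard Brownian motion: a (multi)fractional process with H(t) = 0.5 everywhere
    x = np.cumsum(rng.normal(scale=np.sqrt(1.0 / n), size=n))
    print("estimated H(0.5):", round(estimate_holder(x, t, t0=0.5), 3))
```

as in the monte carlo study above, the quality of such an estimate is governed by the sample size and by how many coefficients are retained near t0, which is the usual bias/variance tradeoff behind the choice of the neighbourhood size.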
andno significant difference is observed among wavelet - based estimators corresponding to different .we also state that the bias and variance of are generally greater than ir2 estimator when the sample size is relatively small .this is because , the wavelet - based method generally provides estimators of slower convergence rate than ir2 estimator .first by using triangle inequality , we get recall that for ] is a gaussian process with continuous trajectories , by applying dudley s theorem and borell s inequality ( more precisely , with the same arguments for the proof of on page 1445 - 1446 in rosenbaum ( 2008 ) .see also ledoux and talagrand ( 2010 ) ) , we can show that this means all of s moments are finite .hence , using the mean value theorem , we get }|\phi'(s)| ] , and this together with the fact that for ] . therefore ( [ dnd ] ) has been proven . it follows from ( [ jensen ] ) , cauchy - schwarz inequality , ( [ card ] ) and the fact that that roughly speaking ( and it can be proven without efforts ) , since the trajectory is at least as smooth as , then for , there exists a constant ( only depending on ) such that , then it results from ( [ vnv1 ] ) , ( [ vnv2 ] ) , ( [ dnd ] ) and ( [ card ] ) that in view of the equivalence relation between and as and , ( [ widev ] ) finally results from ( [ vnv3 ] ) . by using the fact that , are zero - mean gaussian random variables , fubini s theorem , the isometry property of mbm s harmonizable presentation and a change of variables , we get since the pointwise hlder exponent of in the neighborhood of behave locally asymptotically like those of fractional brownian motions with hurst parameters : on and on , we can thus consider a taylor expansion of respectively on and . to be more explicit ,let s fix and define , since belongs to ) ] , we can thus take in ( [ computei ] ) and obtain we finally obtain the function case ( ii ) : : + by definition , equals since ) ] , can be expressed as let respectively , and in ( [ general ] ) and notice that depends on , we get + then similarly to ( [ computei ] ) , using a order taylor expansion of respectively on and on , and also use the fact that for , ( since ) , we obtain + + by using symmetric property , case ( v ) : : . + since the computations are quite similar as in the previous cases , we present the results without proof . now denote by it remains to show that for and , - 1}\sum_{k'=0}^{[b^{-1}]-1}\big(t(k , k',q , a , b)\big)^2=\mathcal o((ab)^{1/2}\log a\log b),~\mbox{as }.\ ] ] according to ( [ tkk ] ) , it suffices to prove for any , - 1}\sum_{k'=0}^{[b^{-1}]-1}(ab)^{-h(ak)-h(bk')-1}\big(\mathcal{i}_{l , l'}(k , k',a , b)\big)^2=\mathcal o((ab)^{1/2}\log a\log b).\ ] ] to prove ( [ boundi ] ) holds we only take as an example , since the computations of the other items are similar . 
recall that remember that , the fact that yields , and ] as for ] , ) ] as , for $ ] , then it follows from the taylor expansion and the fact that that where is some value in and where are some values satisfying and .observe that , by a change of variable , then ( [ varv21 ] ) together with ( [ varv5 ] ) , ( [ varv6 ] ) , ( [ varv7 ] ) entails that observe that , since , }^{+\infty}l^{4h(t_0)-4q}+\frac{1}{card(\nu_{t_0,2^j})}\sum_{|l|\in\nu_{t_0,2^j},l\neq0}|l|^{4h(t_0)-4q+1}\big).\nonumber\\\end{aligned}\ ] ] on one hand , we recall that if is a monotonic decreasing function and the series is convergent , then for , it yields }^{+\infty}l^{4h(t_0)-4q}&=&\mathcal o\big(\int_{[2^j\epsilon_j]}^{+\infty}x^{4h(t_0)-4q}{\,\mathrm{d } } x+ ( 2^j\epsilon_j)^{4h(t_0)-4q}\big)\nonumber\\ & = & \mathcal o\big((2^j\epsilon_j)^{4h(t_0)-4q+1}\big).\end{aligned}\ ] ] on the other hand , since , then it is easy to see finally , it follows by ( [ varv8 ] ) and ( [ varv10 ] ) that we thus can conclude where is a constant only depending on .proposition [ varvx ] has been proven . ayache a , shieh n r , xiao y ( 2011 ) multiparameter multifractional brownian motion : local nondeterminism and joint continuity of the local times .h. poincar probab .47 ( 4 ) : 1029 - 1054 barrire o ( 2007 ) synthse et estimation de mouvements browniens multifractionnaires et autres processus rgularit prescrite : dfinition du processus autorgul multifractionnaire et applications .dissertation , ecole centrale de nantes delbeke l , van a w ( 1995 ) a wavelet based estimator for the parameter of self - similarity of fractional brownian motion .proceedings of the 3rd international conference on approximation and optimization in the caribbean , 14 pp .( electronic ) , benemrita univ . autn .
we propose a wavelet-based approach to construct consistent estimators of the pointwise hölder exponent of a multifractional brownian motion, in the case where this underlying process is not directly observed. the relative merits of our estimator are discussed, and we introduce an application to the problem of estimating the functional parameter of a nonlinear model. **keywords:** pointwise hölder exponent; multifractional process; wavelet coefficients; parametric estimation
epidemic spreading phenomena are ubiquitous in nature and society .examples include the spreading of infectious diseases within a population , the spreading of computer viruses on the internet , and the propagation of information in society .understanding and modelling the dynamics of such events can have significant practical impact on health care , technology and the economy .various spreading mechanisms have been studied .the two most common mechanisms are _ local spreading _, where infected nodes only infect a limited subset of target nodes ; and _ global spreading _ , where nodes are fully - mixed such that an infected node can infect any other node . in reality , many epidemics use _ hybrid spreading _ , which involves a combination of two or more spreading mechanisms .for example the computer worms conficker and code - red can send probing packets to targeted computers in the local network or to any randomly chosen computers on the internet .early relevant studies investigated epidemics spreading in populations whose nodes mix at both local and global levels ( `` two levels of mixing '' ) .these early studies did not incorporate the structure of the local spreading network , assuming both local and global spreading are fully - mixed . since the introduction of network based epidemic analysis ,hybrid epidemics have been studied in structured populations , in structured households , and by considering networked epidemic spreading with `` two levels of mixing '' . a number of studies have also considered epidemics in metapopulations , which consist of a number of weakly connected subpopulations. the studies of epidemics in clustered networks are also relevant .much prior work on hybrid epidemics has focused on the impact of a network s structure on spreading .most previous studies were about what we call the _ non - critically _ hybrid epidemics where a combination of multiple mechanisms is not a necessary condition for an epidemic outbreak . in this case , using a fixed total spreading effort , a hybrid epidemic will always be less infectious than an epidemic using only the more infectious one of the two spreading mechanisms . however , many real examples of hybrid epidemics suggest the existence of _ critically _ hybrid epidemics where a mixture of spreading mechanisms may be more infectious than using only one mechanism . in this paperwe investigate whether , and if so when , hybrid epidemics spread more widely than single - mechanism epidemics .we propose a mathematical framework for studying hybrid epidemics and focus on exploring the optimum balance between local and global spreading in order to maximize outbreak size .we demonstrate that hybrid epidemics can cause larger outbreaks in a metapopulation than a single spreading mechanism .our results suggest that it is possible to combine two spreading mechanisms , each with a limited potential to cause an epidemic , to produce a highly effective spreading process .furthermore , we can identify an optimal tradeoff between local and global mechanisms that enables a hybrid epidemic to cause the largest outbreak . 
manipulating the balance between local and global spreading may provide a way to improve strategies for disseminating information , but also a way to estimate the largest outbreak of a hybrid epidemic which can pose serious threats to internet security .here we introduce a model for hybrid epidemics in a _ metapopulation _ , which consists of a number of subpopulations .each subpopulation is a collection of densely or strongly connected nodes , whereas nodes from different subpopulations are weakly connected . as illustrated in figure[fig - l - g ] , our model considers two spreading mechanisms : 1 ) local spreading where an infected node can infect nodes in its subpopulation and 2 ) global spreading , where an infected node can infect all nodes in the metapopulation . in our modeleach subpopulation for local spreading can be either fully - mixed or a network . for mathematical convenience , we describe each subpopulation as a network and represent a fully - mixed subpopulation as a fully connected network .note that our definition of metapopulation is different from the classical metapopulation defined in ecology where subpopulations are connected via flows of agents .hybrid epidemic spreading in a metapopulation . at each time step, an infected node has a fixed total spreading effort which must be allocated between local spreading and global spreading .the proportion of spreading effort spent in local spreading is and that in global spreading is .local spreading occurs between infected and susceptible nodes that are connected in individual subpopulations ; global spreading happens between an infected node and any susceptible node in the metapopulation.,scaledwidth=80.0% ] our model considers hybrid epidemics in which at each time step , an infected node has a fixed total spreading effort which must be allocated between the two spreading mechanisms . let the hybrid tradeoff , , represent the proportion of spreading effort spent in local spreading .the proportion of global spreading effort is .a tunable enables us to investigate the interaction and the joint impact of the two spreading mechanisms on epidemic dynamics , ranging from a completely local spreading scenario ( with ) to a completely global spreading scenario ( with ) .for example computer worms like conficker and code - red can conduct both local and global probes but the average total number of probes in a time unit is fixed .we consider the hybrid epidemic spreading in terms of the susceptible - infected - recovered ( sir ) model , where each node is in one of three states : _ susceptible _ ( s ) , _ infected _( i ) , and _ recovered _ ( r ) . at each time step , each infected node spreads both locally and globally ; it infects 1 ) each directly connected nodes in the same subpopulation with rate and 2 ) each susceptible node in the metapopulation with rate . is the local infection rate when all spreading effort is local ( ) .and is the global infection rate when all spreading effort is global ( ) .each infected node recovers at a rate , and then remains permanently in the recovered state .a node can infect other nodes and then recover in the same time step .before we analyse hybrid spreading in a metapopulation , we study a relatively simple case where the epidemic process takes place in a _ single population_. 
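before turning to the analysis, the dynamics just defined can be illustrated with a minimal stochastic simulation. the sketch below runs the discrete-time hybrid sir process on a single population: per time step, an infected node transmits to each susceptible neighbour with probability alpha*beta_l (local spreading), to each susceptible node of the population with probability (1-alpha)*beta_g (global spreading), may transmit and recover in the same step, and recovers with probability gamma. the network model, the parameter values and the use of the networkx library are illustrative assumptions rather than the exact set-up used for the figures.

```python
import numpy as np
import networkx as nx

S, I, R = 0, 1, 2

def hybrid_sir(G, alpha, beta_l, beta_g, gamma, seeds, rng, max_steps=10_000):
    """One realization of discrete-time hybrid SIR spreading on the graph G.
    Local spreading uses the edges of G; global spreading may reach any node."""
    state = {v: S for v in G}
    for v in seeds:
        state[v] = I
    n = G.number_of_nodes()
    for _ in range(max_steps):
        infected = [v for v in G if state[v] == I]
        if not infected:
            break
        susceptible = [v for v in G if state[v] == S]
        newly_infected, recovering = set(), set()
        for v in infected:
            # local spreading along the edges of the (sub)population network
            for u in G.neighbors(v):
                if state[u] == S and rng.random() < alpha * beta_l:
                    newly_infected.add(u)
            # global spreading towards any susceptible node in the population
            hits = rng.random(len(susceptible)) < (1 - alpha) * beta_g
            newly_infected.update(u for u, h in zip(susceptible, hits) if h)
            # a node may transmit and then recover within the same time step
            if rng.random() < gamma:
                recovering.add(v)
        for u in newly_infected:
            state[u] = I
        for v in recovering:
            state[v] = R
    return sum(1 for v in G if state[v] == R) / n   # final outbreak size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    G = nx.erdos_renyi_graph(1000, 5 / 999, seed=0)   # random network, mean degree about 5
    seeds = [int(v) for v in rng.choice(1000, size=5, replace=False)]
    for alpha in (0.0, 0.5, 1.0):
        sizes = [hybrid_sir(G, alpha, beta_l=0.3, beta_g=2e-4, gamma=1.0,
                            seeds=seeds, rng=rng) for _ in range(20)]
        print(f"alpha = {alpha:.1f}   mean final outbreak size = {np.mean(sizes):.3f}")
```

averaging many such runs over a grid of alpha values reproduces the qualitative dependence of the final outbreak size on the hybrid tradeoff discussed in the remainder of this section.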
that is , there is only one population , where local spreading is via direct connections on a network structure and global spreading can reach any node in the population .here we extend the system in for the analysis .the system in was proposed to analyse single - mechanism based epidemics for the continuous time case . herewe extend the system to analyse 1 ) hybrid epidemics , and 2 ) for the discrete time case .we calculate the probability that a random test node is in each state : susceptible , infected , and recovered .we denote as the probability that a node has degree ( i.e. number of neighbours ) .the generating function of degree distribution is defined as .let represent the probability that a random neighbour of has neighbours .we assume the network is _ uncorrelated _ : the degrees of the two end nodes of each link are not correlated ( i.e. independent from each other ) . in an uncorrelated network .let be the probability that a random neighbour has not infected through local spreading .let be the probability that a random node has not infected through global spreading .suppose has neighbours , the probability that it is susceptible is where is the total number of nodes in the population .then by averaging over all degrees , we have , the probability can be broken into three parts : is susceptible at , ; is infected at but has not infected through local spreading , ; is recovered at and has not infected through local spreading , .neighbour can not be infected by and itself , then . in a time step , neighbour 1 )infects with rate through local spreading and 2 ) recovers without infecting through local spreading at rate , i.e. after every time step : increases by and increases by .the increase rate of here , , is different from that ( ) in the original system in .because the original system was designed for the continuous time case , and in the discrete time case in this paper , neighbour can infect and recovers at the same time step . given that and are both approximately 0 in the beginning ( ), we have .then for global spreading , the probability can also be broken into three parts : is susceptible at , ; is infected at but has not infected through global spreading , ; is recovered at but has not infected through global spreading , . using a similar derivation process , we have and , and when the epidemic stops spreading , and . by setting in equation([eq - phii ] ) we get substituting equation([eq - vartheta - n-2 ] ) and into equation([eq - varphii ] ) , we have by setting and substituting equation([eq - vartheta ] ) in equation([eq - phii ] ) we have then - stationary value of is a fixed point of . has a known fixed point of which represents no epidemic outbreak .we test the stability of this fixed point . 
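numerically, a stationary point of this kind of system is found by direct fixed-point iteration. as an aside before the stability analysis below, the sketch here illustrates the iteration in the completely local limit (alpha = 1), where the calculation reduces to newman's percolation form with a per-edge transmissibility T: iterate u = 1 - T + T*G1(u) and read off the final size R_inf = 1 - G0(u). this is only an illustration of the technique under that special case and standard configuration-model assumptions; the coupled local/global equations of this section, whose exact form is not reproduced here, are iterated in the same way.

```python
import numpy as np

def transmissibility(beta, gamma):
    """Per-edge transmissibility of the discrete-time SIR model: an infected node
    stays infected for tau >= 1 steps, P(tau) = gamma*(1-gamma)**(tau-1), and may
    transmit with per-step probability beta in every step, including the step in
    which it recovers.  Closed form of T = sum_tau P(tau)*(1 - (1-beta)**tau)."""
    return 1.0 - gamma * (1.0 - beta) / (1.0 - (1.0 - gamma) * (1.0 - beta))

def final_size_local(T, g0, g1, tol=1e-12, max_iter=100_000):
    """Iterate u = 1 - T + T*g1(u) to its smallest fixed point and return
    the final outbreak size R_inf = 1 - g0(u)."""
    u = 0.0   # start below the epidemic fixed point so the iteration climbs to it
    for _ in range(max_iter):
        u_new = 1.0 - T + T * g1(u)
        if abs(u_new - u) < tol:
            break
        u = u_new
    return 1.0 - g0(u)

if __name__ == "__main__":
    mean_k = 5.0                                   # Poisson (Erdos-Renyi-like) degrees
    g0 = lambda x: np.exp(mean_k * (x - 1.0))      # G0 of a Poisson degree distribution
    g1 = g0                                        # for Poisson degrees G1 = G0
    for beta_l in (0.1, 0.2, 0.3, 0.4):
        T = transmissibility(beta_l, gamma=1.0)    # gamma = 1: infectious for one step
        print(f"beta_l = {beta_l:.1f}   T = {T:.2f}   "
              f"R_inf = {final_size_local(T, g0, g1):.3f}")
```

note that with gamma = 1 the transmissibility reduces to beta itself, which is the case used in the simulations reported below.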
by substituting equation([eq - phii ] ) and equation([eq - vartheta ] ) into , setting and take the leading order ( taylor series ) , we have and where , , and .then where is a constant .when is negative , gradually decreases and approaches 0 as increases ; while when is positive , gradually increases and approaches with the increase of .that is the fixed point turns from stable to unstable when changes from negative to positive .a more rigorous analysis would need to consider the fact that when a small amount of disease is introduced , the fixed point at is moved slightly .however , the stability analysis we do here is sufficient to determine whether epidemics are possible for arbitrarily small initial infections .further details are in .the threshold condition for an epidemic outbreak is then : this epidemic threshold represents an condition which , when _ not _ satisfied , results in an epidemic that vanishes exponentially fast .there are two special cases . * for completely local spreading ( ) , the threshold reduces to >1 ] .we set the global infection rate and the recovery rate ( i.e.an infected node only spread the epidemic in one time step ) . for epidemics on the fully connected network , the local infection rate . and for epidemics on the random and scale - free networks , .figure[fig - threshold ] shows that the final outbreak size predicted by equation([eq - rinf0 ] ) is in close agreement with simulation results .the hybrid epidemics on the random network and the scale - free network exhibit similar outbreak sizes for large values of .it is also evident that the hybrid epidemic is characterised by a phase change , where the threshold is well predicted by equation([eq : threshold ] ) .theoretical predictions and simulation results for hybrid epidemics in a single - population . the final outbreak size is shown as a function of the hybrid tradeoff .three network topologies are considered : ( 1 ) a fully connected network ( i.e. fully mixed ) ; ( 2 ) a random network with an average degree of 5 ; ( 3 ) a scale - free network with a power - law degree distribution which is generated by the configuration model with the minimum degree .the population has 1000 nodes .the global infection rate and recovery rate are the same for epidemics on these three types of networks . 
the local infection rate is for epidemics on the fully connected network ; and it is for epidemics on the random and scale - free networks .initially 5 random nodes are infected .simulation results are shown as points and theoretical predictions of equation([eq - rinf0 ] ) are dashed curves .the simulation results are averaged over 1000 runs with bars showing the standard deviation .the epidemic threshold values of are predicted by equation([eq : threshold ] ) and marked as vertical lines.,scaledwidth=80.0% ]we now extend the above theoretical results for a single - population to analyse hybrid spreading in a metapopulation which consists of a number of subpopulations .local infection happens only between nodes in the same subpopulation whereas global infection occurs both within and between subpopulations .we define a subpopulation as susceptible if it contains only susceptible nodes .a subpopulation is infected if it has at least one infected node .a subpopulation is recovered if it has at least one recovered node and all other nodes are susceptible .only global spreading enables infection between subpopulations , whereas spreading within a subpopulation can occur via both local and global spreading .the final outbreak size at the population level , is defined as the proportion of subpopulations that are recovered when the epidemic stops spreading .we define that a subpopulation a directly infects another subpopulation b if an infected node in a infects a susceptible node in b. we define the population reproduction number , , as the average number of other subpopulations that an infected subpopulation directly infects before it recovers . note that our definition of is similar to in but the definition of a metapopulation in is different . in the simulations and theoretical analysis , we approximate as the population reproduction number of the initially infected subpopulation , i.e. the average number of other subpopulations that directly infects .this approximation becomes exact when the metapopulation has infinite number of subpopulations each with the same network structure .a metapopulation includes many subpopulations . in order for an epidemic to spread in a metapopulation, an infected subpopulation should infect at least one other subpopulation before it recovers , i.e. the threshold condition of the hybrid epidemic at the population - level is . we conduct epidemic simulations on a metapopulation containing 500 subpopulations each with 100 nodes .two topologies for local spreading in each subpopulation are considered : random network and scale - free network .figure[fig - ob - rn0-a ] shows simulation results of the final outbreak sizes and and the population reproduction number ( right y axis ) as a function of the hybrid tradeoff .epidemic parameter values are included in figure[fig - ob - rn0-a ] s legend .for both the random and scale - free networks , all three functions show a bell shape curve regarding .it is clear that the epidemic will not cause any significant infection if it uses only local spreading ( ) or only global spreading ( ) . for the random network , the maximal outbreak at the node level obtained around the optimal hybrid tradeoff .that is , if 50% of the infection events occur via local spreading ( and the rest via global spreading ) , the epidemic will ultimately infect 34% of all nodes in the metapopulation . 
at the population level, the total percentage of recovered subpopulations follows a very similar trend to , and the maximum epidemic size in terms of subpopulations occurs at the same optimal .the population reproduction number follows a similar trend to the final outbreak sizes and .the threshold defines the range of for which the final outbreak sizes are significantly larger than zero .fig_mpop_er2 ( 0,55)(a ) fig_mpop_pl2 ( 0,55)(b ) it is important to appreciate that although the maximal is uniquely defined by the optimal , other values can be obtained by _ two _ different values , on either side of the optimal , potentially representing different epidemic dynamics . as the hybrid epidemic for random and scale - free networks exhibit similar properties , for simplicity we only show results for the random network in the following . the population reproduction number is a fundamental characteristic of hybrid epidemics in a metapopulation .we consider a metapopulation with subpopulations , which are denoted as where .each subpopulation has nodes connected to a same structured local spreading network . is the subpopulation where the epidemic starts from .we assume the infection inside the initially infected subpopulation is all caused by infected nodes inside .that is , we neglect the effects of global spreading of other subpopulations on .this is an acceptable assumption when the metapopulation has a larger number of subpopulations . under these conditions ,hybrid spreading within is the same as spreading in a single - population , which has been analysed in previous sections .to predict , we first analyse the expected number of nodes outside that will be infected by .we then estimate the number of other subpopulations that these infected nodes should belong to .let represent the probability that a random test node in other subpopulations are susceptible at time .using the same parameters defined in the analysis about hybrid epidemics in a single population , we have where is the number of node in .when recovers at time , the _ fraction _ of nodes in other subpopulations that have been infected by ( infected nodes in ) ( via global spreading ) is where we have used equation([eq - vartheta ] ) .then the _ number _ of such infected nodes is where is the total number of nodes in other subpopulations and can be numerically calculated as by fixed - point iteration of equation([eq - theta ] ) . as the nodes are infected randomly via the global spreading , the probability that an infected node does not belong to a particular subpopulation is ; and the probability that none of these infected nodes belongs to the subpopulation is .so the probability that at least one infected node belongs to the subpopulation is .thus the population reproduction number , which is the number of other subpopulations that these infected nodes should belong to , is : figure[fig - a - rn0 ] compares the predicted against simulation results as a function of the hybrid tradeoff . is characterised by a bell - shaped curve .it peaks at the optimal hybrid tradeoff where the population reproduction number achieves its maximal value .this optimal point is of particular interest as it represents the optimal trade - off between the two spreading mechanisms , where the hybrid epidemic is most infectious and therefore has the most extensive outbreak .fig_mpop_er_rp we next investigated the maximum epidemic outbreak in the context of varying infectivity and recovery rates . 
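before that, the counting argument behind the population reproduction number is simple enough to check directly: if the seed subpopulation infects n_g nodes via global spreading, and these fall uniformly at random among the m - 1 other subpopulations, the expected number of distinct subpopulations reached is r* = (m-1)*[1 - (1 - 1/(m-1))^{n_g}]. in the sketch below, n_g is an assumed input (in the theory it comes from the single-population calculation as a function of the hybrid tradeoff), and a small monte carlo run confirms the counting step.

```python
import numpy as np

def population_reproduction_number(n_g, m):
    """Expected number of distinct other subpopulations reached when n_g globally
    infected nodes fall uniformly at random among the m - 1 other subpopulations."""
    return (m - 1) * (1.0 - (1.0 - 1.0 / (m - 1)) ** n_g)

def monte_carlo_check(n_g, m, runs=20_000, seed=0):
    """Empirical counterpart of the counting argument: scatter n_g infections over
    the m - 1 subpopulations and count how many receive at least one of them."""
    rng = np.random.default_rng(seed)
    hits = rng.integers(0, m - 1, size=(runs, n_g))
    return float(np.mean([len(set(row)) for row in hits]))

if __name__ == "__main__":
    m = 500                       # subpopulations in the metapopulation (illustrative)
    for n_g in (1, 5, 20, 100):   # globally infected nodes produced by the seed subpopulation
        print(f"n_g = {n_g:3d}   formula R* = {population_reproduction_number(n_g, m):7.3f}"
              f"   monte carlo = {monte_carlo_check(n_g, m):7.3f}")
```

feeding in the n_g produced by different values of the hybrid tradeoff is what generates the bell-shaped r* curves shown in the figures.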
for a given set of epidemic variables , we calculate the theoretical prediction of as a function of using equation([eq : r_n0 m ] ) , and then we obtain the optimal and the maximal . for ease of analysis ,we fix the global infection rate at a small value of and then focus on the local infection rate and the recovery rate .fig_opta_er ( 0,75)*(a ) * fig_optrp_er ( 0,75)*(b ) * + fig_opta_er_bdg ( 0,55)*(c ) * fig_opta_er_barp ( 0,75)*(d ) * figure[fig - mamrc]a shows the optimal hybrid tradeoff as a function of and . for a given , a larger results in a smaller .intuitively this can be understood as when the efficiency of local spread increases , less effort needs to be devoted to this spreading mechanism , and more can be allocated to global spreading . on the other hand , for a given , a larger results in an increase in .when the recovery rate is higher , nodes remain infectious for shorter times . in this case , in order to achieve the maximum epidemic outbreak , more local infection is favoured , since this will allow an infected subpopulation to remain infected for longer , and hence increase the probability of infecting other subpopulations before it recovers .a plot of versus is shown in figure[fig - mamrc]c .the fitting on a log - log scale in the inset indicates the two quantities have a power - law relationship , i.e. is determined by .this means the optimal hybrid tradeoff can be predicted when is known .figure[fig - mamrc]b shows the maximal as a function of and , where the is obtained when the corresponding value of in figure[fig - mamrc]a is used . is very sensitive to the recovery rate . as approaches zero , the value of increases dramatically ( note that uses a log - scale colour - map ) regardless of value of is in agreement with the intuition that a low recovery rate will favour any type of epidemic spreading . for a fixed , increases with .an increased infection rate of local spreading will obviously increase the reproductive number , if other parameters are kept constant , but the effect is much smaller than that of changing the recovery rate , because global spreading maintains the reproductive number when local spreading falls to low values .figure[fig - mamrc]a shows a clear phase shift between areas where an epidemic occurs ( the coloured area ) and areas where it does not ( the white area towards the top - left corner ) .accordingly , the corresponding in figure[fig - mamrc]b in the area where no epidemic occurs is very small .the boundary between the epidemic and non - epidemic phase space is defined by the line .this is the threshold for completely local spreading in a single - population : and for the network topology used .since the global infection rate is fixed at a small value , no major spreading will occur either within or between subpopulations below this threshold .figure[fig - mamrc]d plots as a function of and on a log - log scale while fixing . 
for given values of ,the corresponding optimal are shown as points .we can see that points always fall in the area of the maximal for the given .each point represents a local optimum .the global optimum , the largest possible value of , is obtained towards the bottom - right corner , where the local infection rate is high but the epidemic spends most effort on global spreading .infection across subpopulations can only be achieved by global spreading .since global spreading has a low infection rate , the epidemic should spend most of its time ( or resource ) on global spreading .there will be much less time spent on local spreading but its infection rate is high anyway .[ [ section ] ]hybrid spreading , the propagation of infectious agents using two or more alternative mechanisms , is a common feature of many real world epidemics . widespread epidemics ( e.g. computer worms ) typically spread efficiently by local spreading through connections within a subpopulation , but also use global spreading to probe distant targets usually with much lower infectivity . in many cases ,the amount of resources ( e.g. time , energy or money ) which an infectious agent can devote to each mode of propagation is limited .this study focuses on the tradeoff between local and global spreading , and the effect of this tradeoff on the outbreak of an epidemic .we develop a theoretical framework for investigating the relationships between , the relative weight given to each spreading mechanisms , and the other epidemic properties .these properties include epidemic infectivity , subpopulation structure , epidemic threshold , and population reproduction number .the predictions of the theoretical model agree well with stochastic simulation results , both in single populations and in metapopulations .our analysis shows that epidemics spreading in a metapopulation may be critically hybrid epidemics where a combination of the two spreading mechanisms is essential for an outbreak and neither completely local spreading nor completely global spreading can allow epidemics to propagate successfully .our study reveals that , in metapopulations , there exists an optimal tradeoff between global and local spreading , and provides a way to calculate this optimum given information on other epidemic parameters .these results are supported by our recent study on measurement data of the internet worm conficker .the above results are of practical relevance when the total amount of time or capacity that is allocated to spreading is limited by some resource constraint .for example , the total probing frequency of computer worms is often capped at a low rate to prevent them from being detected by anti - virus software .furthermore , other epidemic parameters , such as local or global infection rates are difficult to change because they derive from inherent properties of the infectious agent .for example it would be difficult to increase the global infection rate of an internet worm .the tradeoff between different types of spreading therefore becomes a key parameter in terms of design strategy , which can be manipulated to maximise outbreak size .the consideration of hybrid spreading mechanisms also has some interesting implications for strategies for protecting against the spread of epidemics .it is clear from both theoretical considerations and simulations that epidemics can spread with extremely low global infection rates ( far below individual recovery rates ) , provided there is efficient local infection .such conditions are common 
for both cyber epidemics ( as computers within infected local networks tend to be more vulnerable to infection ) and in infectious disease epidemics , where contacts between family or community members are often much closer and more frequent than the overall population .protection strategies which target local networks collectively ( for example intensive local vaccination around individual disease incidents , as was used in the final stages of smallpox eradication ) may therefore be a key element of future strategies to control future mixed spreading epidemics . in conclusion ,our study highlights the importance of the tradeoff between local and global spreading , and manipulation of this tradeoff may provide a way to improve strategies for spreading , but also a way to estimate the worst outcome ( i.e. largest outbreak ) of hybrid epidemics which can pose serious threats to internet security .here we use newman s method to obtain the threshold condition for the local spreading .firstly we need to calculate the `` transmissibility '' which is the average probability that an epidemic is transmitted between two connected nodes , of which one is infected and the other is susceptible . according to , for the discrete time case can be calculated as where is the time steps that an infected node remains infected , and respectively are the probability distribution of and . for the model in this paper, is a constant and , in which is the probability that an infected node has not recovered until steps after infection , and is the probability that the node recovers at the step after infection . also for the model in this paper , each infected nodeat least remains infected for time step .so that for our model can be obtained as according to the epidemic threshold for completely local spreading is i.e. 
>1 $ ] .this is the same as the epidemic threshold for completely local spreading obtained in this paper .note that treating each edge as having this value of independently will lead to the correct epidemic threshold and final size calculation , but there are further discussions on its correctness in calculating the infection probabilities .random networks used in all simulations have a poisson degree distribution and they are generated by the erds - rnyi ( er ) model with the average degree of 5 .scale - free networks used in all simulations have a power - law degree distribution and they are generated by the configuration model with the minimum degree .figure[fig - threshold ] - simulations in a single - population : size of single - population : 1,000 nodes ; single - population topology : fully connected network , random network and scale - free network ; local infection rate : ( except for fully connected network ) ; global infection rate : ; recovery rate : ; initial condition : all nodes are susceptible except 5 randomly - chosen nodes are infected ; number of simulation runs averaged for each data point : 1,000 .figure[fig - ob - rn0-a ] - simulations in a metapopulation : size of metapopulation : 500 subpopulations each with 100 nodes ; subpopulatin topology : random networks and scale - free networks ; local infection rate : ; global infection rate : ; recovery rate : ; initial condition : all nodes are susceptible except 3 randomly - chosen nodes are infected ; number of simulation runs averaged for each data point : 1,000 .figure[fig - a - rn0 ] and [ fig - mamrc ] - theoretical predictions about hybrid epidemics : same as figure[fig - ob - rn0-a ] except only the random network topology is considered .10 url # 1`#1`urlprefix[2]#2 [ 2][]#2 _ _ ( , ) . & ._ _ * * , ( ) . & ._ _ * * , ( ) . ._ _ * * , ( ) . , , & ._ _ * * , ( ) . , & ._ _ * * , ( ) . ._ _ * * , ( ) . ._ _ * * , ( ) . & ._ _ * * , ( ) . , & ._ _ * * , ( ) . & ._ _ * * , ( ) . , & ._ _ * * , ( ) . , & ._ _ * * , ( ) ._ _ * * , ( ) . & ._ _ * * , ( ) ., & . _ _ * * , ( ) ._ _ * * , ( ) . , , &_ _ * * , ( ) . , , , & ._ _ * * , ( ) . ._ _ * * , ( ) ._ et al . _ . __ * * , ( ) ._ _ * * , ( ) . & ._ _ * * , ( ) . ._ _ * * , ( ) . , & ._ _ * * , ( ) . , & ._ _ * * , ( ) . ._ _ * * , ( ) . & ._ _ * * , ( ) . ._ _ * * , ( ) . & _ _ ( ) . , & ._ _ ( ) . ., & . _ _ * * , ( ) ._ _ * * , ( ) . ._ _ * * , ( ) . ._ _ * * , ( ) . & ._ _ * * , ( ) . ._ _ * * , ( ) .we thank prof . valerie isham of ucl for her helpful comments .c.z . was supported by the engineering and physical sciences research council of uk ( no .ep / g037264/1 ) , the china scholarship council ( file no .2010611089 ) , and the national natural science foundation of china ( project no .60970034 , 61170287 , 61232016 ) .i.j.c . acknowledges support from the epsrc irc in early warning sensing systems for infectious diseases ( grant reference ep / k031953/1 ) .j.c.m . was supported in part by the rapidd program of the science and technology directorate , department of homeland security and the fogarty international center , national institutes of health .the content is solely the responsibility of the authors and does not necessarily represent the official views of the national institute of general medical sciences or the national institutes of health .the funders had no role in study design , data collection and analysis , decision to publish , or preparation of the manuscript .c.z . , s.z . , i.j.c . , and b.m.c .designed the study .c.z and j.c.m . 
conducted the mathematical modelling and derivation .c.z . performed the computational analysis and simulations .s.z . and b.m.cwrote the manuscript with contributions from c.z . and j.c.m . and i.j.c .* competing financial interests : * the authors declare no competing financial interests .
epidemic spreading phenomena are ubiquitous in nature and society. examples include the spreading of diseases, information, and computer viruses. epidemics can spread by _local spreading_, where infected nodes can only infect a limited set of direct target nodes, and _global spreading_, where an infected node can infect every other node. in reality, many epidemics spread using a hybrid mixture of both types of spreading. in this study we develop a theoretical framework for studying hybrid epidemics, and examine the optimum balance between spreading mechanisms in terms of achieving the maximum outbreak size. we show the existence of _critically_ hybrid epidemics where neither spreading mechanism alone can cause a noticeable spread but a combination of the two spreading mechanisms would produce an enormous outbreak. our results provide new strategies for maximising beneficial epidemics and estimating the worst outcome of damaging hybrid epidemics.
in , katz and vafa showed how to geometrically engineer matter representations in terms of the local singularity structure of type iia , m - theory , and f - theory compactifications . in that framework matter and gauge theory bothhave purely geometrical origins : , and gauge theories arise from the existence of co - dimension four singular curves of certain types in the compactification manifold ; and massless matter representations arise from isolated points ( in type iia or m - theory ) or curves ( in f - theory ) along the singular surface over which the type of singularity is enhanced by one rank . despite the extraordinary generality of this framework ,it has not been widely used phenomenologically .this is largely because the description of the isolated enhancements of singularities giving rise to various matter representations is inherently local : although the geometry near any particular enhancement could be described concretely , the framework had nothing to say about numbers , types , and relative locations of different matter fields .this global data was either to be determined by duality to a concrete , global string theory model , or suggested via the _ a posteriori _ success of a given set of relative positions ( as in e.g. ) . another way to relate the number and relative positions of ( enhanced singularities giving rise to ) matter fieldswas given in : in that paper , we described for example how a local description of the geometry giving rise to a massless of could be smoothly deformed into a local description of a and a of which live at distinct points related by a single deformation parameter .a cartoon of what was described in that paper is shown in figure [ sm_res_of_su5 ] . in this paperwe describe pedagogically how to extend that idea to engineer analogies to and grand unified models , the resolution naturally starts as a theory with three s of related by an family symmetry . ] .although in we were able to analyze explicit unfoldings of and singularities sufficiently well by sight , this will not be possible for our present examples .all of the examples in this paper involve the unfolding of isolated singularities ; and although algebraic descriptions of these are known and classified , it would be unnecessarily cumbersome and unenlightening to analyze them explicitly as we did in .therefore , in we describe a much more powerful and elegant language in which to study these resolutions . in section [ so10 ]we describe in detail how the unfolding of a of into the standard model is derived in the language of section [ resolvingen ] .this is achieved in two stages : in the first stage , we unfold the into of ; we then unfold the resulting model into a single ` family ' of the standard model . at the end , all the relative positions of the singularities of the family are set by the non - zero values of two complex structure moduli , thereby greatly reducing the arbitrariness of their relative positions .the next most obvious example would be a description of how a of geometrically unfolds into the standard model .however , there are two reasons to leave this example to the reader : first , it is a most natural extension of the results of section [ so10 ] ; secondly , it is a consequence of the grand unified model which we describe in section [ e6xsu2 ] . although not given as an example in , it is not hard to see . 
]that a single isolated singularity at the intersection of a co - dimension four surfaces of types and gives rise to matter in the representation .it is easy to see how this would unfold into the matter content of three families one coming from each of the s . that three families emerge from a general consequence of group theory and can be understood from the fact that is a maximal subgroup of into which the adjoint of partially branches into an triplet of s . as in the preceding paper , this work is presented concretely in the language of calabi - yau compactifications of type iia string theory , which can also be naturally extended to f - theory models . here , we engineer the explicit local geometry of ( non - compact ) calabi - yau three - folds which are -fibrations over . if type iia string theory is compactified on this three - fold , a four - dimensional theory with various massless hypermultiplets will result .but if , for example , the base of this three - fold were fibred as an bundle over , the resulting total space would be a calabi - yau four - fold to result in a calabi - yau four - fold . ] upon which f - theory would compactify to an theory with chiral multiplets . however , because the manifold over which the singular s are fibred in m - theory is a real , three - dimensional space , our fibrations over do not have a direct application to m - theory .it would of course be desirable to have a similar description of geometric unfolding explicitly in the language of -manifolds so that this picture could be realized concretely in m - theory as well .this is particularly important in light of the recent advances in m - theory phenomenology ( e.g. ) . by extension of the work of berglund and brandhuber in , such a generalization should be relatively straight - forward , but we will not attempt to do this here .recall that a gauge theory in type iia string theory can arise from compactification to six dimensions over a singular surface ( similar statements apply to m - theory and f - theory ) .the complex structures of the singular compactification manifolds giving rise to , , and gauge theory are given in table [ orbifolds]where the surfaces are labelled conveniently by the name of the resulting gauge theory were first identified by fleix klein in 1884 .the reader may also be amused that the full resolutions of these surfaces were almost completely classified up to a few computational errors by bramble in 1918 . ] .lcr gauge group & & polynomial + ( ) & & + ( ) & & + & & + & & + & & + we can generalize this discussion by considering a complex , one - dimensional space over which a smooth family of singular surfaces are fibred . if almost everywhere over the -fibres have singularities of a single type , then compactification of type iia string theory over the total space will give rise to gauge theory in four - dimensions of the type corresponding to the typical fibre . masslesscharged matter will arise if over isolated points in the type of fibre is enhanced by one rank .the geometry about a single such isolated point where the singularity is enhanced was described in detail by katz and vafa in .the representation of matter living at these ` more - singular ' points was also given in : suppose that and that the rank of is one less than ; then , if there is an isolated -type singularity over a surface of -type singularities , the resulting massless representation is given by those parts of the decomposition of the adjoint of into which are charged under the . 
because the question of how to ( smoothly ) deform the surfaces of table [ orbifolds ] into ones of lower rank has intrinsic mathematical interest , it is not too surprising that all possible two - dimensional deformations have been classified .our discussion below will make use of the notation and results presented in . in our present work , we are interested in deformations of singularities into ones of lower rank . unlike singularities ,the resolutions of which are easy enough to read off by sight , the algebraic complexity of singularities is formidable . to appreciate what is meant by this , consider the resolution of . from table[ orbifolds ] we know that an singularity is locally isomorphic to the surface in .its full resolution in terms of the seven deformation parameters is given by where the are order symmetric polynomials in the components of which are tabulated over several pages of the appendix of . a nave way to determinethe type of singularity found by resolving `` in the direction '' would be to expand equation ( [ e7res ] ) completely using the explicit functions , find each of its singular points , and expand locally about each until an isomorphism with a singularity of lower rank in table [ orbifolds ] was clear .this is the way , for example , that demonstrated that the resolution of in the direction gives rise to for .all of the results in this paper could be verified in this way .luckily , however , katz and morrison described a much more powerful and direct way to analyze the deformations of singularities .we would like a pragmatic answer to the following question : _ what is the type of fibre found by resolving an singularity in the direction ? _ that there is an easy answer to this question makes our work much simpler . although an adequate treatment would take us well beyond the scope of our present discussion , the answer given in is at least very easy to make use of : _ for each of the equations in table [ roots ] satisfied by the components of , the singularity has the corresponding root ._ given the list of roots , it is then a straight - forward exercise to construct the dynkin diagram corresponding to the singularity , and , one can think of the vectors as an orthonormal basis in minkowski space which is equipped with a mostly - plus metric .then roots are vectors in this space of norm .each ( positive ) root gives rise to a node in the resulting dynkin diagram , and two nodes are connected by a line if their inner product is and disconnected if they are orthogonal . ] . in an admittedly bad notation, we consider each of the deformation parameters to be functions of , the local coordinate on the base space . a ( non - abelian ) gauge theory will be present if there are roots implied by table [ roots ] which are preserved for generic values of . and charged massless matter will exist if at isolated points an additional root is added or , in terms of dynkin diagrams , if an additional node is added . at each isolated pointwe can therefore identify the resolution and thereby determine the resulting representation .ccc equation&&root + && + & & + & & + & & +a necessary starting point to describe the unfolding of a of into the standard model is a description of the initial geometry as was done in . we will briefly review that construction in the language described above before we unfold it , first into an model , and later all the way into .let be a local complex coordinate on the space over which is fibred the resolution of parameterized by . 
to be clear , for each value of , the vector describes an explicit surface in given in reference analogous to that of equation ( [ e7res ] ) above . considering the rules of table [ roots ], we see that for an arbitrary value of the root lattice of the fibre is where we have displayed the roots suggestively so as to reproduce the dynkin diagram . at , however , is restored .so we have an isolated fibre over the point , while for any the fibre is .this gives rise to gauge theory with a single massless located at the origin in the -plane .we would like to unfold the manifold described above into one with gauge theory .it is not hard to guess in what ` directions ' we may deform the the geometry so that the fibre over a generic point is .let denote a parameter independent of which adjusts the whole geometry over the region which is coordinatized by .then let the fibre over be given by the resolution of in the direction .obviously when the situation is the same as above and results in a single massless of .however , when the situation is different : for generic values of it is easy to see that the simple roots are which means that the generic fibre over is just so the resulting gauge theory is . to find what matter representations exist, we must determine over which locations the rank of the fibre is enhanced .this means we are seeking special values of ( determined by ) at which an additional equation in table [ roots ] is satisfied . for each of these points, we can draw the resulting dynkin diagram to determine the fibre over that point , thereby determining the representation which arises there .rcclocation&fibre& + && + && + && + it is not hard to exhaustively find all these ` more singular ' points .they are give in .notice that we have included the -charge assignments that result ; these are normalized as in the appendix of . to complete our task and unfold the of all the way to the standard model, we must deform the fibres by another ` global ' parameter , which we will denote .it is not hard to guess a direction over which the generic fibre will be : try for example .again , we notice that for a general location and generic fixed values , the singularity has the root structure which is visibly .like above , it is a straight - forward exercise to determine all the locations over which the singularity is enhanced , and the resulting representation which arises .these points including their resulting representations ( with -charges as normalized in ) are listed in table [ so10unfolding2 ] .the entire unfolding is reproduced graphically in .rccclocation&fibre&&name + &&& + &&& + &&& + &&& + &&& + &&& +after having completed the unfolding of a of into the standard model , it is natural to ask if this idea can be extended to relate all the singularities of the standard model as perhaps the unfolding of a single isolated singularity of higher - rank .the answer is in fact yes andthere is a sense in which precisely three families arise if the notion of ` geometric unification ' is saturated . 
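the bookkeeping used throughout this section, namely collecting the simple roots that survive a given deformation, joining two of them in the dynkin diagram when their inner product is -1, and reading off the residual gauge symmetry from the shape of the diagram, is mechanical enough to automate. the sketch below does this for simply-laced roots written in a standard euclidean basis; the demonstration vectors are the usual simple roots of so(10) and of its su(5) chain, chosen for illustration rather than taken from the deformation tables above, and the conventions of katz and morrison (minkowski-signature root lattices for the e-series) are not reproduced.

```python
import numpy as np
from itertools import combinations

def adjacency(roots):
    """Nodes are the simple roots (norm^2 = 2); two nodes are joined exactly when
    their inner product is -1, the rule used to draw the Dynkin diagrams above."""
    adj = {i: set() for i in range(len(roots))}
    for i, j in combinations(range(len(roots)), 2):
        if np.dot(roots[i], roots[j]) == -1:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def arm_length(adj, start, first):
    """Number of nodes on the arm leaving `start` through `first` (diagram = tree)."""
    length, prev, cur = 1, start, first
    while len(adj[cur]) == 2:
        prev, cur = cur, (adj[cur] - {prev}).pop()
        length += 1
    return length

def ade_type(roots):
    """Classify the connected, simply-laced Dynkin diagram spanned by the roots."""
    adj = adjacency(roots)
    n = len(roots)
    degrees = [len(adj[i]) for i in range(n)]
    if max(degrees) <= 2:
        return f"A{n}"                         # a simple chain of nodes
    if max(degrees) == 3 and degrees.count(3) == 1:
        centre = degrees.index(3)
        arms = sorted(arm_length(adj, centre, v) for v in adj[centre])
        if arms[:2] == [1, 1]:
            return f"D{n}"
        if arms in ([1, 2, 2], [1, 2, 3], [1, 2, 4]):
            return f"E{n}"
    return "not of finite ADE type"

if __name__ == "__main__":
    e = np.eye(5)
    so10 = [e[0] - e[1], e[1] - e[2], e[2] - e[3], e[3] - e[4], e[3] + e[4]]
    print("simple roots of so(10)   ->", ade_type(so10))        # D5
    print("drop the root e4 + e5    ->", ade_type(so10[:-1]))   # A4, i.e. su(5)
```

applying the same classification to the set of roots that survive for a given deformation vector is the practical content of the rules of table [ roots ] used above.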
because a of arises from the resolution , it can only be unfolded out an exceptional singularity .clearly the highest level of unification one can achieve along this line would be to start with a resolution where is a rank - seven subgroup of which contains .the possible ` top - level ' gauge groups are then , , and .we choose to study as our example because it will naturally include a description of the unfolding of of into the standard model , which is interesting in its own right , and because it follows quite directly from our work in section [ so10 ] . the initial geometry which we will deform into the standard model is given as follows .let be a complex coordinate on the base space over which is fibred the resolution of .clearly , when we recover ; when we see that the roots of the fibre are which is visibly .following the general rule to determine the representation resulting from a given resolution , we find that at lives massless matter charged in the representation of . to avoid pedantic redundancy , in figure [ fullres ] we have summarized in great detail the entire unfolding into .an outline of the steps involved in deriving this unfolding is given presently .first , the unfolding of the gauge theory into gauge theory is obtained by defining the fibre over to be given by the resolution of in the direction for some .this clearly kills the node of the fibre in .there are five locations at which the singularity is enhanced by one rank , giving rise to three s and two singlets as shown in the left - most section of .the rest of the unfolding is a natural application of the work in section [ so10 ] .let us now set the fibre over to be given by the resolution of for arbitrary complex deformation parameters . from section [ so10 ]we see immediately that the generic fibre is .a thorough scanning for possible solutions to equations in table [ roots ] shows that there are isolated points on the complex -plane over which the singularity is enhanced .these correspond to the ` breaking ' of each of into of , while the singlets remain singlets .this is seen in the second vertical strip ( from the left ) in figure [ fullres ] .again , following our discussion above , it is easy to guess possibilities for the next two resolution directions . first , we set the fibre over to be given by which will result gauge theory with matter content corresponding to the ` canonical ' decomposition of three s of with two singlets . andfinally , the full resolution of the grand unified model into can be given by letting the fibre over be given by the resolution of for ( generic ) arbitrary fixed complex structure moduli .let us clarify what we have done .for a given set of fixed , nonzero complex structure moduli , the resolution given above describes the explicit , local geometry of a non - compact calabi - yau three - fold , which is a -fibration over .if type iia string theory is compactified on this three - fold , the resulting four - dimensional theory will have gauge theory with hypermultiplets at isolated points as given in figure [ fullres ] which reproduce the spectrum of three families of the standard model with an extended higgs sector and some exotics . 
alternatively , if one takes this ( non - compact ) calabi - yau three - fold and fibres it over as described in section [ intro ] so that the total space is calabi - yau , then f - theory on this space will give rise to supersymmetry with gauge theory and chiral multiplets in the representations given in figure [ fullres ] .and although it does not follow directly from our construction above , considering the close similarities between two- and three - dimensional resolutions of the singular surfaces we have every reason to suspect an analogous geometry can be engineered for m - theory in terms of hyper - khler quotients by extension of the results in .we are currently working on building this geometry in m - theory , and we expect to report on this work soon . given these four complex structure moduli , all the relative positions of the 35 disparate singularities giving rise to all three families of the ( extended ) standard model are then known theory from type iia , we are unable to distinguish the from the in the splitting of the s of . in figure[ fullres ] , a consistent choice was made and although we do not justify this claim here , it is the choice that will be correct for the m - theory generalization of this work . ] . beyond the usual three families of the standard model, the manifold also gives rise to two higgs doublets for each family , six higgs colour triplets , three right - handed neutrinos and five other standard model singlets .we should point out that this matter content ( and their -charge assignments ) is a consequence of group theory and algebraic geometry alone it is simply what is found when unfolding all the way to the standard model . and given the relative positions and local geometry of the singularities together with the -structure , one can in principle compute the full superpotential coming from instantons wrapping different singularities . because these are fixed by the values of the complex structure moduli, there is a ( complex ) four - dimensional landscape of different , explicit embeddings at the compactification scale .although this large landscape may appear to have too much freedom , we remind the reader that in the traditional understanding of geometrical engineering there would be hundreds of parameters describing the ( independent ) relative locations of each of the isolated singularities .there are a few things to notice about the form of the superpotential that will emerge .first , because of the -charge assignments , each term in the superpotential must combine exactly one term arising from each of the s .this greatly limits the form of the superpotential . andin particular , it implies that neither mass nor flavour eigenstates will arise from any single is , the ` families ' in the colloquial sense are necessarily linear combinations of fields resulting from different s . also notice that in general the terms in the superpotential will be proportional to where is the volume form of some cycle wrapping singularities ( the details of which depends on whether we are talking about type iia , m - theory , or f - theory realizations ) , and are in principle calculable in terms of the deformation moldui . 
andbecause these coefficients are exponentially related to the volumes of cycles , we expect the high - scale lagrangian will be generically hierarchical .this structure could be important for solving problems in phenomenology for example the problem in the higgs potential , the higgs doublet - triplet splitting problem , or avoiding proton decay .we are in the process of studying the phenomenology of models on this landscape . at first glance, the -structure combined with high - scale hierarchies could possibly be complex enough to be able to avoid some of the typical problems of -like grand unified models .we should point out that if there were no high - scale hierarchies , however , then the allowed terms in the superpotential would generically give rise to low - energy lepton and baryon number violation , similar to any ` generic ' model i.e .one which includes all types of terms allowed by the -mandated -structure .we could always impose additional symmetries and add fields by hand to solve these problems , but this would not be very compelling . however , if viable models already exist in the landscape which do not require additional fields or symmetries , these would be compelling even if we do not yet understand how they are selected .one of the most important phenomenological questions about these models is the fate of the additional symmetries .although we suspect that one can determine which of the symmetries are dynamical below the compactification scale by studying the normalizability of their corresponding vector multiplets , we do not presently have have a complete understanding of this situation .of course , if any additional s survive to low energy they could have very interesting or damning phenomenological consequences .an important point to bear in mind when considering geometrically engineered models is that there generically exist moduli which can deform the geometry into one which gives rise to a theory with less gauge symmetry .for example , if you are given a geometrically - engineered grand unified model , then our results show explicitly that the model can be locally deformed into an model , and this can be deformed further into the standard model ; the original theory is seen to be a single point in a ( complex ) two - dimensional landscape of theories . andbecause larger symmetries always lie in lower dimensional surfaces of moduli space , it is very relevant to ask what physics prevents this unfolding from taking place .indeed , this question applies to the standard model as well our analysis could easily go further to unfold away .we are not presently able to answer why this does not happen may provide an alternative to tuning in the usual higgs sector . it would be interesting to understand in greater detail the relationship between unfolding and the higgs mechanism . ] ; although this observation suggests that perhaps theories with less symmetry , like , could be much more natural than grand unified theories .more generally , it is not presently understood what physics controls the values of the geometric moduli which deform the manifold the parameters which deform the complex structure , for example .we do not yet have a general mechanism which would fix these parameters ; we simply observe that any non - zero values of the moduli will give rise to a geometrically engineered manifold with gauge theory ` peppered ' with all the necessary singularities of the three families of the standard model together with the usual -like exotics . 
andimportantly , for any point in the complex four - dimensional ` landscape , ' the relative locations of all the relevant singularities are known and hence in principle so is the superpotential .this relationship between moduli - fixing and gauge symmetry breaking could be a novel feature of geometrically - unfolded models .it may allow one to apply the results in , for example , to single out theories on the landscape . however, a prerequisite to this type of analysis would be an identification of which moduli should be identified with the ones which deform the geometry as described here .although the motivation in this paper and in appears to be a top - down realization of grand unification , there is a sense in which we are really engineering from the bottom - up . specifically ,because the local geometry we have described is non - compact , the resulting theory is decoupled from quantum gravity , and the parameters along the landscape of deformations are continuous .this is not unlike the situation in .but what we lose in global constraints we perhaps gain by concrete local structure .not only do we have a framework which naturally predicts three families with a rather detailed phenomenological structure , but we have done so in a way that preserves all the information about the local geometry . and because this framework realizes the ` physics from pure geometry ' paradigm in a potentially powerful way , it could prove important to concrete phenomenological constructions in m - theory , for example . of course we envision these local geometries to be embedded within compact calabi - yau manifolds .it is an assumption of the framework that the precise global topology of the compactification manifold can be ignored at least as a good first approximation .one may ask the extent to which these constructions can be glued into compact manifolds . concretely : _ under what circumstances can a non - compact calabi - yau three - fold which is a fibration of surfaces with asymptotically uniform ade - type singularities be compactified _ ?this is an important question for mathematicians , the answer to which would likely lead to important physical insight quantization of the moduli space of deformations .a possible objection to this framework is that our constructions appear to depend on several seemingly arbitrary choices ( the specific chain from to the standard model , which roots were eliminated at each step , etc . ) .however , it is likely that the particle content , for example , which results is completely independent of these choices .furthermore , we suspect that different realizations of the unfolding merely result in different parameterizations of the landscape , and do not reflect true additional arbitrariness .but this is still an area that deserves attention .lastly , because in this picture the standard model is seen to unfold at the compactification scale , one may ask what has become of gauge coupling unification . 
because the gauge coupling constants are functions of the volumes of their corresponding co - dimension four singular surfaces which depend on the deformation moduli , the traditional meaning of grand unification is more subtle here as is typical in string phenomenology .for example , although we chose to unfold the standard model sequentially as a series of less unified models , there is no reason to suspect that that order has any physical importance .surely , if as we parameterized the unfolding in section [ e6xsu2 ] , setting ( or ) would result in an grand unified theory ; but setting instead would result in a restoration of family symmetry .the four complex structure moduli tune different types of unification separately and should simultaneously be at play in the question of gauge coupling unification .it is interesting to note , however , that if one were to simultaneously scale the values of all the moduli to be very small , the spectrum would be more and more unified : the relative distances between singularities shrink , unifying the coefficients in the superpotential ; and the volumes of the co - dimension four singularities ( if realized in a compact manifold ) would approach one another , resulting in a unification of their gauge couplings .what this may mean phenomenologically remains to be understood . in this paperwe have described a local , purely geometric framework in which gauge symmetry ` breaking ' can be re - cast as a problem of moduli fixing and in which the same moduli which describe this geometric ` unfolding ' also determine the physics of massless matter . and although we still do not understand the mechanisms by which these moduli are fixed , the landscape of possibilities is already enormously reduced : what would have been the hundreds of parameters describing the relative positions on the compactification manifold of the standard model s three families worth of matter fields , we specify them all in terms of only four complex structure moduli which describe the unfolding of an isolated singularity . and the fact that three families emerges is group - theoretic and not added by hand .it is a pleasure to thank helpful discussions with and insightful comments of herman verlinde , sergei gukov , gordy kane , paul langacker , edward witten , cumrun vafa , brent nelson , malcolm perry , dmitry malyshev , matthew buican , piyush kumar , and konstantin bobkov .
this paper extends and builds upon the results of , in which we described how to use the tools of geometrical engineering to deform geometrically - engineered grand unified models into ones with lower symmetry . this top - down unfolding has the advantage that the relative positions of singularities giving rise to the many ` low energy ' matter fields are related by only a few parameters which deform the geometry of the unified model . and because the relative positions of singularities are necessary to compute the superpotential , for example , this is a framework in which the arbitrariness of geometrically engineered models can be greatly reduced . in , this picture was made concrete for the case of deforming the representations of an model into their standard model content . in this paper we continue that discussion to show how a geometrically engineered of can be unfolded into the standard model , and how the three families of the standard model uniquely emerge from the unfolding of a single , isolated singularity .
the information contained in the `` phase image '' is essential in many applications such as magnetic resonance imaging ( mri ) and interferometric synthetic aperture radar ( insar ) . in these techniquesthe phase is not observed directly but computed from a complex signal .therefore , the measured values are wrapped in the interval and the observed signal presents jumps .phase unwrapping is the procedure that allows us to practically remove these discontinuities to obtain the actual phase image .although , the modulo operation is quite trivial , its inversion can be very hard to solve .phase unwrapping techniques need to be able to overcome , among other problems , discontinuities , noise and under - sampling of the phase . in one dimension ,the unwrapping process is straightforward since there is only one possible `` path '' and a simple integration can recover the actual phase .however , this only works when the itoh smoothness condition is satisfied , _ i.e. _ , when the absolute value of the phase gradient is lower or equal to .the presence of noise or discontinuities could violate this condition , causing some unwrapping errors .most existing methods extend this integration principle to two dimensions ( 2-d ) and are denominated path - following algorithms .the problem in 2-d is the error propagation when the smoothness condition is violated .this occurs because the integration results depend on the chosen integration path and on the start and end points .the challenge remains in distinguishing jumps due to phase wrapping from those due to noise and discontinuities in the actual function .several works have considered additional information such as pixel quality maps to appropriately update the integration path . however , such algorithms have some difficulties to deal with high levels of noise .in addition to these algorithms , several efforts have been made in the development of path - independent methods .state - of - the - art techniques rely on the global minimization of an energy function based on the classical norm of the error or on a generalized norm .when minimizing the classical norm of the error as in , the retrieved phase is generally smooth and sensitive to noise . in authors prove that by minimizing a generalized norm using graph - cut techniques they are able to obtain an exact phase recovery ( in noiseless scenarios ) .the developed algorithm is denoted puma and is considered a state - of - the - art method in phase unwrapping without denoising .some works have included puma for unwrapping and they have added a denoising step either before or after the unwrapping step .both approaches have proved to outperform the classical norm minimization . 
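to make the wrapping operator and the 1-d integration principle concrete, here is a minimal numpy sketch (the helper names wrap and unwrap_1d are ours, not taken from the paper); the 1-d routine is exact precisely when the itoh condition holds, i.e. when the true phase increments stay within [-pi, pi).

```python
import numpy as np

def wrap(phi):
    # centralized modulo operator: maps any angle into [-pi, pi)
    return (phi + np.pi) % (2 * np.pi) - np.pi

def unwrap_1d(psi):
    # itoh's 1-d procedure: integrate the wrapped first differences of the
    # wrapped signal; exact whenever the true phase gradient stays in [-pi, pi)
    d = wrap(np.diff(psi))
    return psi[0] + np.concatenate(([0.0], np.cumsum(d)))
```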
while the approach of denoising before unwrapping deals better with high noise scenarios without discontinuities , it was reported that discontinuities are better preserved when the denoising is performed after the unwrapping .we aim at developing a general numerical reconstruction method that is able to simultaneously unwrap and denoise the observed phase .we propose a convex optimization approach based on the minimization of a sparsity prior on the phase image under a data fidelity constraint expressed in the phase derivative domain and adapted to gaussian distributed noise .moreover , to stabilize the algorithm , the first phase component is assumed to be zero .the problem is solved by means of the primal - dual algorithm proposed by chambolle and pock .the results are compared with the state - of - the - art technique puma , whose output is denoised in a post - processing step .although puma is faster and provides better results for the noiseless scenario , the proposed convex method provides better reconstruction quality for different noisy scenarios .our work is concerned by the reconstruction of a phase image discretized over pixels on a regular cartesian grid .the phase measurement process can be defined via the centralized modulo operator , or _ wrapping _ , : where \in [ -\pi,\pi ) ] since for most .in front of such a discrete formalism , several authors have proposed methods based on combinatorial optimization and graph - cut techniques . in this paper , we follow a different approach .we propose to relax the problem and to solve it using convex optimization by leveraging the differential relation . despite the wrapping operation, we expect that holds for the phase signal given an appropriate value . moreover , in order to circumvent the ill - conditioning of the problem in the presence of noise and `` non - itoh '' phase discontinuities , we also regularized our method by an appropriate wavelet analysis prior model based on the structure of the phase , _i.e. _ , a common tool used in many denoising methods .more specifically , we assume that the unwrapped phase image has a sparse or compressible representation in an orthonormal basis , _i.e. _ , the coefficients vector has few important values and its -norm is expected to be small .the regularization process can then proceed by promoting a small -norm in the wavelet projection of the phase image .the rationale of this is also to prevent fake phase jump reconstruction ( since these increase locally the wavelet coefficient values ) and to enforce the noise canceling .we also follow a common practice in the field which removes the ( unsparse ) scaling coefficients from the -norm computation .since both the differential fidelity and the wavelet prior are blind to the addition of a global constant , there is an ambiguity to estimate the phase up to such addition . 
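as an illustration of the kind of program just described, the following is a hedged cvxpy sketch of joint unwrapping and denoising: an l2 data-fidelity constraint in the phase-derivative domain, an l1 sparsity prior, and the first phase sample pinned to zero. for brevity the prior is placed on the image gradient rather than on the orthonormal wavelet coefficients used in the paper, and the function and variable names (unwrap_denoise, eps, etc.) are ours.

```python
import numpy as np
import cvxpy as cp

def wrap(phi):
    return (phi + np.pi) % (2 * np.pi) - np.pi

def unwrap_denoise(psi_obs, eps):
    # psi_obs: wrapped, noisy phase image; eps: bound on the derivative-domain residual
    n1, n2 = psi_obs.shape
    X = cp.Variable((n1, n2))
    # wrapped differences of the observations play the role of the measured phase gradient
    gx = wrap(np.diff(psi_obs, axis=0))
    gy = wrap(np.diff(psi_obs, axis=1))
    dx = X[1:, :] - X[:-1, :]
    dy = X[:, 1:] - X[:, :-1]
    resid = cp.hstack([cp.vec(dx - gx), cp.vec(dy - gy)])
    # gradient-sparsity prior used here as a stand-in for the paper's wavelet-domain l1 prior
    prior = cp.norm1(cp.vec(dx)) + cp.norm1(cp.vec(dy))
    constraints = [cp.norm(resid, 2) <= eps,   # l2 fidelity in the phase-derivative domain
                   X[0, 0] == 0]               # pin the first sample to remove the global offset
    cp.Problem(cp.Minimize(prior), constraints).solve()
    return X.value
```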
in order to avoid this incertitude and to stabilize the convex optimization, we arbitrarily enforce the first phase component to be zero .we should note that the initial problem is itself ill - posed since , even if we solve it using directly with a perfect data prior model , a `` good '' solution would be determined up to a global addition of a multiple of .this constraint also induces the uniqueness of the solution .finally , since the noise level is assumed to be known and the noise -norm to be bounded , we also propose to explicitly recover the noise part in an additive model where the unknown phase and noise are summed up to faithfully satisfy .gathering all these aspects , the proposed reconstruction program reads anticipating the scope of the next section , we may notice that by forming the vector , the convex minimization described above can be recast as + \imath_{{\mathcal}c_2}({\boldsymbol}\nabla\,({{\bf i } } , { { \bf i}})\,{\boldsymbol}w ) + \imath_{\omega}({\boldsymbol}w),\end{gathered}\ ] ] where are the selection operators of the first and the last elements of a vector in , respectively ; is the identity matrix ; is the ( convex ) indicator function of a convex set , which equals to if and to otherwise ; and with the convex sets , and . in the next section we present the algorithm to solve numerically .we are interested in finding the phase candidate that minimizes , a problem that contains the sum of four lower semicontinuous convex functions from to , _i.e. _ , they belong to the space for some dimension .in particular , we aim at solving the general optimization with and the number of convex functions . for this, we use the chambolle - pock primal - dual algorithm defined in a product space . as any other proximal algorithm ,this one relies on the definition of the proximal operator that is uniquely defined for any for some . by writing ,the cp iterations are with tending to a minimizer of with . to match the formulation with the problem at hand , we set , and for ; for ; ; ; and . in order to apply the algorithm in, we must compute the proximal operators of , , and , the first three functions being the legendre - fenchel conjugate of their unstarred version .the proximal operator of is determined via the one of thanks to the conjugation property : the proximal operator of is given by the soft thresholding operator }({\boldsymbol}\zeta ) = \text{sign}({\boldsymbol}\zeta ) ( |{\boldsymbol}\zeta| - \tfrac{1}{\nu})_+.{\vspace{-2mm}}\ ] ] the proximal operators of , and are given by the projection onto the convex sets , and , respectively : }({\boldsymbol}\zeta - { \boldsymbol}q),\\ \operatorname{prox}_{\muh } { \boldsymbol}\zeta & = { \rm diag}(0,1,\,\cdots,1)\,{\boldsymbol}\zeta,{\vspace{-2mm}}\end{aligned}\ ] ] with if and , otherwise , is found by solving . in order to guarantee the convergence of the algorithm , _i.e. _ , to ensure that converges to the solution of when increases , we need to set and such that . 
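the proximal maps appearing in these iterations have simple closed forms; below is a small numpy sketch (helper names are ours) of componentwise soft thresholding, projection onto an l2 ball, and the moreau identity that yields the prox of a fenchel conjugate from the prox of the original function.

```python
import numpy as np

def prox_l1(z, t):
    # prox of t * ||.||_1 : componentwise soft thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def project_l2_ball(z, center, radius):
    # projection onto the convex set { v : ||v - center||_2 <= radius }
    d = z - center
    nrm = np.linalg.norm(d)
    return z if nrm <= radius else center + radius * d / nrm

def prox_conjugate(prox_f, z, t):
    # moreau decomposition: prox_{t f*}(z) = z - t * prox_{f/t}(z / t),
    # where prox_f(x, s) is understood as prox_{s f}(x)
    return z - t * prox_f(z / t, 1.0 / t)
```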
the induced norm of the operator ( ) is estimated using the standard power iteration algorithm . in this section , we validate our convex approach by studying the quality of the unwrapped phase with respect to the amount of `` wraps '' in the measurements , the noise level and the presence of non - itoh discontinuities in the original phase image . results are fairly compared with a post - denoised phase unwrapping obtained by the conjunction of the puma algorithm with an optimal soft thresholding denoising using the same wavelet basis as in . hereafter , the solutions of our convex approach and of the post - denoised puma are denoted as and , respectively . two kinds of discrete phase images are selected in our experiments . they are defined on a pixel grid ( ) . in the first image the phase is simulated by a 2-d gaussian function of height , and standard deviations of 40 pixels horizontally and 25 pixels vertically . in the second image , the phase is simulated by a truncated version of the 2-d gaussian , where the image is masked by a side triangle . by truncating the gaussian image , we are able to simulate phase discontinuities . the robustness of our method is tested against three different noise levels , each characterized by a different `` input snr '' , , namely , 10 db , 25 db and db ( no noise ) . since there is no reliable estimation of the initial signal mean in any phase unwrapping method , all reconstruction qualities are measured with centralized reconstructions and ground truths , _ i.e. _ , by mean subtraction . after this procedure , the quality of a given reconstruction is measured with the `` reconstruction snr '' : . all algorithms were implemented in matlab and executed on a 3.2 ghz cpu , running a 64 bit linux system . to study the behavior of the algorithm with respect to the amount of `` wraps '' , we analyze the gaussian phase image and we vary its intensity by multiplying the image by a factor , providing no phase wraps since . we noticed that for the noiseless scenario ( isnr = db ) , the reconstruction quality is not affected by the value of , since the itoh condition is always satisfied . however , since there is no noise , puma outperforms the proposed method for all . table [ tab : rho_comparison ] presents a comparison for the two noisy scenarios , _ i.e. _ , isnr = 25db and isnr = 10db . the rsnr is presented for an average of 5 trials . table [ tab : rho_comparison ] : comparison of the different rsnr obtained using denoised - puma ( dp ) and our convex approach ( c ) for different values of on the gaussian phase image . we can notice that the convex approach outperforms the denoised - puma ( dp ) for the scenarios where the itoh condition is satisfied . however , for those cases where this condition is affected by the noise corrupting the phase , dp provides similar results . regarding numerical complexity , for the first noise scenario ( isnr = 25db ) , convergence of the convex algorithm is reached after an average of 10000 iterations and takes approximately 11 minutes , while for the second noise scenario ( isnr = 10db ) , convergence is reached after an average of 15000 iterations and takes approximately 16 minutes .
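two numerical ingredients mentioned above are easy to reproduce: the power-iteration estimate of the operator norm needed to choose admissible step sizes, and a mean-subtracted reconstruction snr; since the exact rsnr expression is garbled in this copy, the definition below is the usual one and should be read as an assumption.

```python
import numpy as np

def operator_norm(apply_K, apply_Kt, x0, n_iter=50):
    # power iteration on K^T K : returns an estimate of the spectral norm ||K||
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iter):
        x = apply_Kt(apply_K(x))
        x = x / np.linalg.norm(x)
    return np.sqrt(np.linalg.norm(apply_Kt(apply_K(x))))

def rsnr(x_rec, x_true):
    # reconstruction snr in db after removing the (unrecoverable) global means
    a = x_rec - np.mean(x_rec)
    b = x_true - np.mean(x_true)
    return 20.0 * np.log10(np.linalg.norm(b) / np.linalg.norm(a - b))
```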
fig .[ fig : fig1 ] depicts the resulting images for the gaussian and the truncated gaussian phases .results are shown for and isnr = 25db .for the gaussian phase image , the convex approach provides a good reconstruction quality with rsnr = 35.1db .we can note that for the truncated gaussian the reconstruction quality decreases with rsnr = 16.7db , because the algorithm is not able to completely recover the phase due to the high discontinuity at the peak of the triangle .we propose a general convex optimization approach for robust phase unwrapping .in contrast to state - of - the - art techniques , the proposed approach aims at simultaneously unwrap and denoise the phase image .the proposed approach is shown to outperform the post - denoised puma for those scenarios where the noisy phase is smooth enough to satisfy the itoh condition . however, when such condition is violated due to the noise level or discontinuities in the phase image , the algorithm is not capable of recovering the phase with high quality and it presents the same quality as the denoised puma . in future works, we could envisage to remove from the reconstruction problem the few pixels where the noisy phase is not smooth enough .however the question remains in how to obtain a good estimation on the position of those discontinuities since it depends on the phase to recover .t. lan , d. erdogmus , s.j .hayflick and j.u .szumowskil , `` phase unwrapping and background correction in mri '' , _ ieee workshop on machine learning for signal processing ( mlsp ) _ , 2008 .rosen , s. hensley , i.r .joughin , f.k .madsen , e. rodriguez and r.m .goldstein , `` synthetic aperture radar interferometry '' , _ proceedings of the ieee _ , vol .3 , pp . 333382 , 2000 .l. ying , phase unwrapping , _ wiley encyclopedia of biomedical engineering _ , john wiley & sons , inc . ,2006 . m. a. herrez , d.r .burton , m.j .lalor and m.a .gdeisat , `` a fast two - dimensional phase unwrapping algorithm based on sorting by reliability following a non - continuous path '' , _ applied optics _7437 - 7444 , 2002 .y. lu , x. wang and x. zhang , `` weighted least - squares phase unwrapping algorithm based on derivative variance correlation map '' , _ optik - international journal for light and electron optics _ , vol .62 - 66 , 2007 .ghiglia and m.d .pritt , _ two - dimensional phase unwrapping : theory , algorithms , and software _ , wiley , 1998 .j. bioucas - dias and g. valado , `` phase unwrapping via graph - cuts '' , _ ieee transactions on image processing _ , vol .698 - 709 , 2007 .j. bioucas - dias , v. katkovnik , j. astola and k. egiazarian , `` absolute phase estimation : adaptive local denoising and global unwrapping '' , _ applied optics_,vol .47 , pp . 53585369 , 2008 . g. valado and j. bioucas - dias , `` cape : combinatorial absolute phase estimation '' , _a _ , vol .9 , pp . 2093 - 2106 , 2009 .a. chambolle and t. pock , `` a first - order primal - dual algorithm for convex problems with applications to imaging '' , _ journal of mathematical imaging and vision _ , vol .120 - 145 , 2011 .m. ledoux and m. talagrand , _ probability in banach spaces : isoperimetry and processes _ , vol .23 , springer , 1991 .w. 
hoeffding``probability inequalities for sums of bounded random variables '' , _ journal of the american statistical association _13 - 30 , 1963 .donoho , `` de - noising by soft - thresholding '' , _ ieee transactions on information theory _ , vol .613 - 627 , 1995 .combettes and j.c .pesquet , `` proximal splitting methods in signal processing '' , _ fixed - point algorithms for inverse problems in science and engineering _ , pp .185 - 212 , 2011 .a. gonzalez , l. jacque , c. de vleeschouwer and p. antoine , `` compressive optical deflectometric tomography : a constrained total - variation minimization approach '' , _ inverse problems and imaging journal _ , vol .421 - 457 , 2014 .n. parikh and s. boyd,``proximal algorithms '' , _ foundations and trends in optimization _ , vol . 1 , no . 3 , pp .123 - 231 , 2013 .e. y. sidky , j. h. jrgensen and x. pan , `` convex optimization problem prototyping with the chambolle pock algorithm for image reconstruction in computed tomography '' , _ physics in medicine and biology _ , vol .3065 - 3095 , 2012 .
the 2-d phase unwrapping problem aims at retrieving a `` phase '' image from its modulo observations . many applications , such as interferometry or synthetic aperture radar imaging , are concerned with this problem since they proceed by recording complex or modulated data from which a `` wrapped '' phase is extracted . although 1-d phase unwrapping is trivial , a challenge remains in higher dimensions : overcoming the two common problems of noise and discontinuities in the true phase image . in contrast to state - of - the - art techniques , this work aims at simultaneously unwrapping and denoising the phase image . we propose a robust convex optimization approach that enforces data fidelity constraints expressed in the corrupted phase derivative domain while promoting a sparse phase prior . the resulting optimization problem is solved by the chambolle - pock primal - dual scheme . we show that , under different observation noise levels , our approach compares favorably to those that perform the unwrapping and denoising in two separate steps . phase unwrapping , convex optimization , chambolle - pock algorithm , sparse prior .
two of the main pillars of mathematical finance are modern portfolio theory ( mpt ) and the capital asset pricing model(capm ) .the seminal work on mpt is attributed to markowitz who presented his mean - variance approach to asset allocation in 1952 .it was soon amplified by sharpe in 1964 and by lintner in 1965 with the introduction of the concept of the capital market line and subsequent development of the capm .mpt permeates the teaching and practice of classical financial theory .substantial portions of most textbooks on finance are devoted to it and its implications .its influence has been profound .the notion that portfolio volatility , the square root of the variance of the portfolio yield , is an adequate proxy for risk is fundamental to mpt .similarly , the notion that there exists at least one risk - free asset is fundamental to the construction of the capital market line and the formulation of the capm . in the present paper , we discuss issues surrounding both of these notions and , abandoning them , introduce a novel method of portfolio optimization .the notion that variance measures risk is now viewed as a weak compromise with economic reality .variance measures uncertainty , and there are circumstances of interest in which great uncertainty implies little risk .similarly , supposing that there are risk - free assets or , more precisely , assets with unvarying yield is a poor approximation , particularly for long - time horizons .there have been attempts to develop mpt with alternative definitions of risk , including a semi - variance , rms loss , average downside risk , value at risk ( var ) and others but to our knowledge , none is based on the classic notion that the probability of failure to meet a preset goal is the proper quantitative measure of risk or on the elimination of the notion of a risk - free asset . in the following sections we give a brief introduction of mpt with critiques of each of the above two fundamental notions .we show that the probability of success can be interpreted as an expected utility that is deficient in some desirable features .we construct an additional utility with the desired properties and include it in the portfolio optimization .we discuss how to define a real portfolio optimization problem using historical data and report the result of our risk and utility evaluation using the daily closing prices for 13,000 stocks listed on the nyse and nasdaq during the period 1977 - 1996 .we conclude by presenting the results of our optimization for a portfolio drawn from a subset of low - risk , high utility stocks and discuss the implications of our main findings .the asset allocation problem is one of the fundamental concerns of financial theory . it can be phrased as a question : what is the optimal allocation of funds among a set of assets for a predetermined level of risk ? phrased in this way it leaves unspecified the meaning of optimality and of risk .modern portfolio theory offers a two step answer via a particular specification of optimality and risk .the first step was taken in 1952 with the introduction of the mean - variance approach of markowitz . 
by equating risk with variance, markowitz derived an efficient frontier of portfolios which maximize return for given risk and opened the door to further advances in this theoretical framework .the addition of a risk - free asset by sharpe and lintner in the mid 1960s led to the capital market line and the capm .they supposed that there exists a risk - free asset , whose yield , , did not fluctuate .the line drawn from tangent to the efficient frontier is then the locus of yields of all optimal portfolios which can be constructed by adding the risk - free asset to holdings drawn from the tangent set which via equilibrium arguments is indentified with the market portfolio . along that line ,termed the capital market line , return increases linearly with risk .the one - fund theorem follows , stating that any portfolio on the capital market line may be constructed from a combination of the risk - free asset and the market portfolio .we have two fundamental objections to mpt .first , volatility measures the uncertainty of yield .while it may be positively correlated with risk in some cases , it does not , in general , measure risk .suppose , for example , that the specific goal for the portfolio is that its mean yield must equal or exceed a minimum acceptable value .suppose also that the volatility of the portfolio is large , perhaps significantly larger than . nevertheless , if the mean yield is also large , enough so that significantly exceeds the volatility , , the probability that the goal is not met will be small .there can be large uncertainty with little risk .second , no asset is risk - free in the long term , and , depending on risk tolerance , perhaps not even in the short term .there are various risks associated with any supposedly risk - free asset , including e.g. , inflation risk , interest - rate risk , and exchange - rate risk .we conclude that , the candidate risk - free asset , should be added to the asset mix and optimized with the rest .the results of this addition are far reaching .the efficient frontier is modified , the capital market line disappears , and the one - fund theorem is in general not valid .the efficient market portfolio is no longer unique .for more than four centuries , the _ probability of failure has been taken as the quantitative measure of risk .let be the probability of success .then the risk is and the adverse odds .however , to define success there must be a goal .we take as the goal of asset allocation the one previously introduced , namely that the expected portfolio yield equal or exceed , a minimum acceptable yield ._ the average yields and volatilities of the individual assets depend on , the investment horizon ( holding period ) as of course does the entire probability distribution of , .we have investigated the dependence of and the volatility , , on for 13,000 stocks using price - time data from 1977 - 1996 .we found a non - universal power law dependence of on , we conclude that specifying is an essential part of defining the goal .the minimum acceptable , yield , has two components , which must be specified independently , where , is the minimum acceptable real ( deflated ) after - tax yield . is an allowance for transaction costs , inflation , and tax costs . in mpt, selecting a value for the volatility , , of the optimal portfolio establishes uncertainty tolerance .instead , we specify a minimum acceptable value of , . stating that must equal or exceed , establishes our risk tolerance . 
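as a rough illustration of how such horizon dependence can be estimated from price histories, the sketch below computes the mean yield and volatility of t-step log-returns and fits a power law sigma(t) ~ c * t^alpha in log-log coordinates; the use of log-returns and this particular fitting recipe are our assumptions, not details taken from the paper.

```python
import numpy as np

def horizon_stats(prices, horizons):
    # mean yield and volatility of the t-step log-return for each holding period t
    logp = np.log(np.asarray(prices, dtype=float))
    stats = {}
    for T in horizons:
        r = logp[T:] - logp[:-T]
        stats[T] = (r.mean(), r.std())
    return stats

def fit_power_law(horizons, sigmas):
    # least-squares fit of sigma(t) ~ c * t**alpha in log-log coordinates
    alpha, logc = np.polyfit(np.log(horizons), np.log(sigmas), 1)
    return np.exp(logc), alpha
```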
with , , and chosen and with knowledge of $ ] the joint probability distribution of the , can be evaluated for each allocation vector , , .\ ] ]optimization then consists of finding the supremum of for subject to .the definition of given by eq .( [ eqn3 ] ) can be rewritten as that is as the expectation value of the heaviside unit function , in this form , it can be interpreted as an expected _ utility with the _ utility function . punishes all losses equally and thus lacks the ability to discriminate .thus has shortcomings when viewed as a utility . __ the criterion should be kept as the specification of risk tolerance .to overcome the objections to as a utility , we add to our optimization constraints a supplementary utility tolerance which must be met as well .we define as the expectation of the following utility function , in eq .( [ eqn6 ] ) , , introduced in section 4.1 , is a natural choice for _ utility sensitivity .in contrast to the heaviside function , this function , while not unique , has the required utility characteristics .it penalizes failure to meet the goal by going negative and rapidly increasing in magnitude with decreasing yield below .it has positive and diminishing marginal utility as well . _for short enough horizons , , the random yield , , of a portfolio specified by is still linearly related to , as in section 4.2 .suppose now , that the are correlated gaussian random variables .given its linearity in the , is then normally distributed with probability distribution the definition in eq .( [ eqn4 ] ) of now becomes introducing , a modified sharpe s ratio , allows us to rewrite eq .( [ eqn9 ] ) as the minimum acceptable value of thus defines through eq .( [ eqn11 ] ) a minimum acceptable value of , .for example , if then .specifying that introduces a linear risk boundary in the plane , as shown in fig .portfolios having acceptable risk , , lie above the boundary .those with unacceptable risk , , lie below it . from eq .( [ eqn6 ] ) and eq .( [ eqn8 ] ) the expected utility , , is simply where the _ utility ratio is a minimum acceptable value of implies a minimum acceptable value of . for example , implies . specifying defines the utility boundary in the plane , also shown in fig .1 . above the boundary , portfolios have and are acceptable .below , they have and are unacceptable . _a portfolio is fully acceptable if it meets both the risk and utility criteria , that is if and ) .depending on the value of for a given , the resulting acceptability boundary in the plane can coincide with the utility boundary or can have one risk boundary and one or two utility boundary segments as illustrated in fig .the aceptability boundary is always effectively convex .portfolio optimization can now proceed by finding the acceptability boundary is convex , and the efficient frontier is concave . consequently , there are zero , one , or two intersections as illustrated in fig .if there are no intersections because the acceptability boundary lies above the efficient frontier , no portfolio can be constructed which meets the investment criteria .if there are two intersections , the upper intersection specifies the optimal portfolio . 
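for the gaussian case just described, the success probability reduces to the normal cdf of the modified sharpe ratio, and the expected utility is an integral of the chosen utility against the normal density. the sketch below evaluates both; the exponential-type utility is only an illustrative stand-in with the qualitative properties listed above (negative and steep below the minimum acceptable yield, diminishing marginal utility above it), since the paper's exact expression is not reproduced in this copy.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def success_probability(w, mu, cov, r_min):
    # P_s = Phi(S) with the modified sharpe ratio S = (E[R_p] - R_min) / sigma_p
    m = w @ mu
    s = np.sqrt(w @ cov @ w)
    return norm.cdf((m - r_min) / s)

def expected_utility(w, mu, cov, r_min, r_g):
    # E[u(R_p)] under the gaussian yield model; u below is an illustrative choice only
    u = lambda r: 1.0 - np.exp(-(r - r_min) / r_g)
    m = w @ mu
    s = np.sqrt(w @ cov @ w)
    val, _ = quad(lambda r: u(r) * norm.pdf(r, loc=m, scale=s), m - 8 * s, m + 8 * s)
    return val

def acceptable(w, mu, cov, r_min, r_g, ps_min, u_min):
    # "sector d" test: both the risk and the utility tolerances are met
    return (success_probability(w, mu, cov, r_min) >= ps_min and
            expected_utility(w, mu, cov, r_min, r_g) >= u_min)
```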
because of the convex / concave characteristics of the boundaries and the monotonic upward shift of the acceptability boundary with increasing and , this optimal portfolio has maximum allowable risk and minimum allowable utility . one intersection is the marginal case . figure 1 : the optimal portfolio is indicated by a solid square at the intersection of the acceptability boundary and the efficient frontier . we choose to be 6% and to be 5% so that is 11% . we explore holding periods between 1 and 5 years and set a conservative value of =0.9 and an aggressive value of =0 . we do not allow short positions , . we select stocks as the candidate assets . our historical data source is the daily closing prices of 13,000 nyse and nasdaq stocks during the 20 year period 1977 - 1996 obtained from genesis financial services . the risk and utility criteria divide the plane into four sectors : 1 . , -unacceptable and values . 2 . , -unacceptable and acceptable values . 3 . , -acceptable and unacceptable values . 4 . , -acceptable and values . typical scatter - plot results are shown in table i for and . the entries in each sector column give the number of stocks from among those in the entire universe of 13,000 . is the time span of the working data . the proportion of stocks which individually meet the and criteria is small . the total population of sectors b and d , where is substantially larger than that of sectors c and d where , , illustrating the more aggressive character of the utility criterion . table i : sector populations .
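a minimal sketch of the allocation step itself: because the normal cdf is monotone, maximizing the success probability over long-only, fully invested portfolios is equivalent to maximizing the modified sharpe ratio, which the snippet below does with scipy's slsqp solver (a local method; the solver choice and helper names are ours).

```python
import numpy as np
from scipy.optimize import minimize

def maximize_success_probability(mu, cov, r_min):
    # maximize P_s = Phi((w @ mu - r_min) / sigma_p) over { w >= 0, sum(w) = 1 };
    # since Phi is monotone it suffices to maximize the modified sharpe ratio
    n = len(mu)
    neg_sharpe = lambda w: -(w @ mu - r_min) / np.sqrt(w @ cov @ w)
    res = minimize(neg_sharpe,
                   x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},),
                   method='SLSQP')
    return res.x
```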
the optimal portfolio can be that which maximizes on the frontier subject to and or simply that which maximizes .the , scatter plot is a powerful tool for candidate asset selection .all of this is an academic exercise unless it is accompanied by a measure of confidence that the use of historical data generates predictive power .fundamental analysis of the candidate companies , industries , etc ., must therefore be an essential component of portfolio construction .h. markowitz , journal of finance , vol .7 , no . 1 , 71 - 99 , 1952 w. f. sharpe , `` capital asset prices : a theory of market equilibrium under conditions of risk '' , journal of finance , vol 1 , 425 - 442 j. lintner , `` the valuation of risk assets and the selection of risky investment in stock portfolios and capital budgets '' , review of economics and statistics , 47 , 13 - 37 h. markowitz , `` portfolio selection : efficient diversification of assets '' , cowles foundation monograph no .16 , wiley , 1959 m. steinbach , `` markowitz revisited : mean - variance models in financial analysis '' , siam review , 43 , 31 - 85 , 2001 t. linsmeier and n. pearson , `` risk measurement : an introduction to value at risk '' , mimeo , university of illinois , 1996 p. jorion , `` value at risk , the new benchmark for controlling market risk '' , mcgraw - hill , 1997 d. luenberger , `` investment science '' , oxford university press , 1998 j. tobin , `` liquidity preference as behavior toward risk '' , review of economic studies , 26 , february , 65 - 86 p. bernstein , `` against the gods : the remarkable story of risk '' , wiley , 1998 w. sharpe , `` mutual fund performance '' , journal of business , january 1966 , 119 - 138
modern portfolio theory ( mpt ) addresses the problem of determining the optimum allocation of investment resources among a set of candidate assets . in the original mean - variance approach of markowitz , volatility is taken as a proxy for risk , conflating uncertainty with risk . there have been many subsequent attempts to alleviate that weakness , which typically combine utility and risk . we present here a modification of mpt based on the inclusion of separate risk and utility criteria . we define risk as the probability of failure to meet a pre - established investment goal . we define utility as the expectation of a utility function with positive and decreasing marginal value as a function of yield . the emphasis throughout is on long investment horizons , for which risk - free assets do not exist . analytic results are presented for a gaussian probability distribution . risk - utility relations are explored via empirical stock - price data , and an illustrative portfolio is optimized using the empirical data . portfolio risk utility mean - variance finance 01.30.cc 02.50.-r 02.50.ey 89.90.+n
the stochastic uncertainty of random disturbances regarded as a discrepancy between an inexactly known probability distribution of a real - world noise and its nominal model can significantly degrade the designed performance of a control system if the applied controller synthesis procedure relies upon a specific probability law of the disturbance and the assumption that it is known precisely .such situations can also result from the inherent variability of the conditions of the control system operational environment .so , the and controllers are efficient in full only if the basic hypotheses on the nature of external disturbances are met closely enough .it is known that the ( or lqg ) controller may perform poorly if the input disturbance is a strongly correlated noise , while the controller designed for the deterministic worst case demonstrates excessive conservatism if the external disturbance is white or weakly correlated random signal .one of the first ideas aimed at overcoming the lack of performance of the lqg controller in the case when the external disturbance is not the gaussian white noise arose in work devoted to some modification of the performance criterion .this idea gave rise to development of the whole class of problems in the control theory called the risk sensitivity problems .the ideas of deriving controller which combines the positive features of lqg ( ) and controllers ( i.e. minimizes the quadratic cost sufficiently good and is robust enough ) appeared in the beginning of 1990 s .in particular , one can distinguish an approach concerned with minimization of norm of the closed - loop system under constraints on its norm and approach related to minimization of entropy functional under constraints on the closed - loop norm . as is shown in, the problem of synthesis of a controller which minimizes the entropy functional is equivalent in a sense to the problem of optimal risk - sensitive ( leqg ) controller synthesis .a lot of papers are devoted to the problems concerned with minimization of the entropy functional ( see e.g. ) .the ideas of the mixed control first introduced in were extended in based on splitting the external disturbance into signals with bounded spectrum and bounded power and using the multi - objective performance criterion . a solution to the stochastic mixed control problem for the discrete - time systemsis given in .all of the works mentioned above exploit the techniques based on solving certain ( sometimes cross - coupled ) riccati equations . in the mixed problemwas considered in terms of algebraic riccati inequalities rather than equations and solved by means of convex optimization . since then the efficient interior - point algorithms for solving convexoptimization problems had been developed , convex optimization has become a standard strategy for control system analysis and synthesis .the linear matrix inequalities have proved to be a powerful formulation and design technique for a variety of linear problems . after the controller synthesis problem had been solved via lmi ,the semidefinite programming was successfully applied to developing effective solutions to multi - objective control problems .a detailed survey of these extensive results is far beyond the topic of this paper and may be presented elsewhere .an approach to attenuation of uncertain stochastic disturbances based on minimax control was proposed in the middle of 1990 s and extended later to the mimo systems and synthesis of structured controllers via lmi in . 
instead of exact knowledge of the disturbance s covariance coefficients, it is only required that the covariance coefficients belong to an a priori known set .the designed controller minimizes the worst possible asymptotic output variance for all these disturbances .the considered problem is intermediate between the extreme and design scenarios and reduces to a robust control problem with uncertainty in the external disturbance signal . at the same time, another promising stochastic minimax alternative had emerged from ideas of i.g.vladimirov who originally developed the anisotropy - based theory of robust stochastic control presented in a series of papers . in the view of this approach, the robustness in stochastic control is achieved by explicitly incorporating different scenarios of the noise distribution into a single performance index to be optimized ; the statistical uncertainty is measured in entropy theoretic terms , and the robust performance index can be chosen so as to quantify the worst - case disturbance attenuation capabilities of the system .the main concepts of the anisotropy - based approach to robust stochastic control are the anisotropy of a random vector and anisotropic norm of a system .the anisotropy functional introduced by i.g.vladimirov is an entropy theoretic measure of the deviation of a probability distribution in euclidean space from gaussian distributions with zero mean and scalar covariance matrices .the mean anisotropy of a stationary random sequence is defined as the anisotropy production rate per time step for long segments of the sequence . in application to random disturbances ,the mean anisotropy describes the amount of statistical uncertainty which is understood as the discrepancy between the imprecisely known actual noise distribution and the family of nominal models which consider the disturbance to be a stationary gaussian white noise sequence with a scalar covariance matrix .another fundamental concept of i.g.vladimirovs theory is the -anisotropic norm of a linear discrete time invariant ( ldti ) system which quantifies the disturbance attenuation capabilities by the largest ratio of the power norm of the system output to that of the input provided that the mean anisotropy of the input disturbance does not exceed a given nonnegative level .a generalization of the anisotropy - based robust performance analysis to finite horizon time varying systems is developed in . in the context of robust stochastic control design aimed at suppressing the potentially harmful effects of statistical uncertainty, the anisotropy - based approach offers an important alternative to those control design procedures that rely on a precisely known specific probability law of the disturbance and the assumption that it is known precisely .minimization of the anisotropic norm of the closed - loop system as a performance criterion leads to internally stabilizing dynamic output - feedback controllers that are less conservative than the controllers and more efficient for attenuating the correlated disturbances than the controllers .a state - space solution to the anisotropic optimal control problem derived by i.g.vladimirov in involves the solution of three cross - coupled algebraic riccati equations , an algebraic lyapunov equation and an equation on the determinant of a related matrix . 
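as a brief computational aside before returning to the state-space solution: the mean anisotropy of a disturbance generated by a known shaping filter can be evaluated numerically from its frequency response. the sketch below uses the frequency-domain expression commonly quoted in the anisotropy-based literature, stated here as an assumption rather than a quotation from the papers cited above, and the routine name is ours.

```python
import numpy as np

def mean_anisotropy(G_freq, m, n_grid=4096):
    # mean anisotropy of the stationary sequence w_k produced by the m-by-m shaping
    # filter with frequency response G_freq(w), driven by white noise with identity
    # covariance, approximated on a uniform grid:
    #   A_bar = -(1/(4*pi)) * int_{-pi}^{pi} ln det( m * S(w) / h2sq ) dw,
    # with S(w) = G(w) G(w)^* and h2sq = (1/(2*pi)) * int trace(S(w)) dw.
    # example filter: G_freq = lambda w: C @ np.linalg.solve(np.exp(1j*w)*np.eye(nx) - A, B) + D
    ws = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    S = [G_freq(w) @ G_freq(w).conj().T for w in ws]
    h2sq = np.mean([np.trace(s).real for s in S])
    logdets = [np.linalg.slogdet(m * s / h2sq)[1] for s in S]
    return -0.5 * float(np.mean(logdets))
```

as a sanity check, a white input (a constant filter proportional to a unitary matrix) makes the integrand vanish, so the routine returns zero mean anisotropy, consistent with the interpretation of the nominal white-noise model.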
the resulted optimalfull - order estimator - based ( central ) controller is a unique one .an extension of these results to the systems with parametric uncertainties was given in .but solving these complex systems of equations requires special developing of homotopy - like numerical algorithms . besides, the applied equation - based synthesis procedure is not aimed at the synthesis of reduced- or fixed - order ( decentralized , structured , multi - objective ) controllers which still remains open .moreover , although the ideas of entropy - constrained induced norms and associated stochastic minimax find further development in the control literature , the anisotropy - based theory of stochastic robust control remains largely unnoticed .one of the reasons seems to be hard numerical tractability of the analysis and synthesis problems as well as a lack of additional degrees of freedom in the controller synthesis procedure .the anisotropic suboptimal controller design is a natural extension of the approach proposed by i.g.vladimirov in . instead of minimizing the anisotropic norm of the closed - loop system , a suboptimal controller is only required to keep it below a given threshold value . rather than resulting in a unique controller , the suboptimal synthesis yields a family of controllers , thus providing freedom to impose some additional specifications on the closed - loop systemone of such specifications , for example , may be a particular pole placement to achieve desirable transient performance .getting a solution to the anisotropic suboptimal controller synthesis problem requires a state - space criterion to verify whether the anisotropic norm of a system does not exceed a given value .an anisotropic norm bounded real lemma ( anbrl ) as a stochastic counterpart of the well - known norm bounded real lemma for ldti systems under statistically uncertain stationary gaussian random disturbances with limited mean anisotropy was presented in .the resulting criterion has the form of an inequality on the determinant of a matrix associated with an algebraic riccati equation which depends on a scalar parameter .a similar criterion for linear discrete time varying systems involving a time - dependent inequality and difference riccati equation is derived in .recently , a sufficient strict version of anbrl was introduced in in form of a convex feasibility problem employing a strict inequality in the determinant of a positive - definite matrix and a related lmi . moreover, the determinant constraint turns out to depend linearly on the squared threshold value , thus allowing to minimize it directly subject to the convex constraints and compute the -anisotropic norm of a ldti system as a solution to the convex optimization problem .the developed analysis procedure is numerically attractive and easily realizable by means of available convex optimization software .this paper is aimed at application of the powerful technique of convex optimization to synthesis of the anisotropic suboptimal and -optimal controllers generally of fixed order .the anisotropic controller seems to offer a promising and flexible trade - off between and controllers . in comparison with the state - space solution to anisotropic optimal controller synthesis problem derived before in , the proposed optimization - based approach is novel and does not require developing specific homotopy - like computational algorithms .the structure of the paper is as follows . 
in section [sect : problem statement ] we give the statement of the general problem of synthesis of the fixed - order anisotropic suboptimal controller . in section [sect : problem solution ] we introduce a solution to the general fixed - order synthesis problem and consider three important design cases : static state - feedback gain for full - information case , dynamic output - feedback controller , and static output - feedback gain .section [ sect : numerical example ] provides a number of illustrative numerical examples .concluding remarks are given in section [ sect : conclusion ] .the set of reals is denoted by the set of real - matrices is denoted by for a complex matrix ] for a real matrix ] for real symmetric matrices , stands for positive definiteness of in block symmetric matrices , symbol replaces blocks that are readily inferred by symmetry . the spectral radius of a matrix is denoted by where is -th eigenvalue of the matrix the maximum singular value of a complex matrix is denoted by denotes identity matrix , denotes zero matrix .the dimensions of zero matrices , where they can be understood from the context , will be omitted for the sake of brevity .the angular boundary value of a transfer function analytic in the unit disc of the complex plane is denoted by denotes the hardy space of ( )-matrix - valued transfer functions of a complex variable which are analytic in the unit disc and have bounded norm denotes the hardy space of ( )-matrix - valued transfer functions of a complex variable which are analytic in the unit disc and have bounded norm a ldti plant with -dimensional internal state -dimensional disturbance input -dimensional control input -dimensional controlled output and -dimensional measured output all these signals are double - sided discrete - time sequences related to each other by the equations = \left [ \begin{array}{ccc } a & b_w & b_u\\ c_z & d_{zw } & d_{zu}\\ c_y & d_{yw } & 0 \end{array } \right ] \left [ \begin{array}{c } x_{k}\\ w_k\\ u_k \end{array } \right],\quad -\infty < k < + \infty,\ ] ] where all matrices are assumed to be of appropriate dimensions and and are assumed to be stabilizable and detectable .the only prior information on the probability distribution of the disturbance sequence is as follows .it is assumed that is a stationary sequence of random vectors with zero mean unknown covariance matrix and gaussian pdf where and denotes the expectation .it is also assumed that the mean anisotropy of the sequence is bounded by a nonnegative parameter .the latter means that can be produced from -dimensional gaussian white noise with zero mean and scalar covariance matrix , by an unknown stable lti shaping filter in the family where is the mean anisotropy functional .we are generally interested in finding a fixed - order dynamic output - feedback controller in general compensator form = \left [ \begin{array}{cc } a_\c & b_\c\\ c_\c & d_\c \end{array } \right]\left [ \begin{array}{c } \xi_k\\ y_k \end{array } \right],\quad -\infty < k < + \infty,\ ] ] with -dimensional internal state to ensure stability of the closed - loop system ( figure [ fig : closed - loop system ] ) and guarantee some designed level of the external disturbance attenuation performance .let denote the closed - loop transfer function from to .recall that the -anisotropic norm of a transfer function quantifies the disturbance attenuation capabilities of the respective closed - loop system by the largest ratio of the power norm of the system output to that of the input provided 
that the mean anisotropy of the input disturbance does not exceed the level : moreover , it is known from that the -anisotropic norm of a given system is a nondecreasing continuous function of the mean anisotropy level which satisfies these relations show that the and norms are the limiting cases of the -anisotropic norm as , respectively .the statement of the general problem of synthesis of the fixed - order anisotropic suboptimal controller is as follows .[ problem : anisotropic suboptimal design ] given a ldti plant described by ( [ eq : standard plant ] ) , a mean anisotropy level of the external disturbance , and some designed threshold value , find a fixed - order ldti output - feedback controller defined by ( [ eq : dynamic controller ] ) which internally stabilizes the closed - loop system and ensures its -anisotropic norm does not exceed the threshold , i.e. we introduce a solution to the general fixed - order synthesis problem and consider three important design cases , namely static state - feedback gain for full - information case , dynamic output - feedback controller , and static output - feedback gain . to solve the synthesis problem, we apply a state - space criterion to verify if the anisotropic norm of a system does not exceed a given threshold value .this criterion called the strict anisotropic norm bounded real lemma ( sanbrl ) was recently presented in . but to apply sanbrl to the synthesis problem we should recast it in slightly different form . with the plant and controller defined as above , the closed - loop system admits the realization = \left [ \begin{array}{cc } \euscript{a } & \euscript{b}\\ \euscript{c } & \euscript{d } \end{array } \right]\left [ \begin{array}{c } \chi_k\\ w_k \end{array } \right],\quad -\infty < k < + \infty,\ ] ] where , .it is shown in that given , , the inequality ( [ eq : anisotropic suboptimality condition ] ) holds true if there exists such that the inequality holds for a real -matrix satisfying lmi \prec 0.\ ] ] note that the constraints described by the inequalities ( [ eq : determinant inequality gamma2 linear ] ) and ( [ eq : aninorm lmi 2x2 in phi eta ] ) are convex with respect to both variables and indeed , the function of a positive definite -matrix on the left - hand side of ( [ eq : determinant inequality gamma2 linear ] ) is convex ; see .being convex in both variables and , the conditions ( [ eq : determinant inequality gamma2 linear ] ) , ( [ eq : aninorm lmi 2x2 in phi eta ] ) of sanbrl are not directly applicable to solving the intended synthesis problem because of the cross - products of the unknown lyapunov matrix and the closed - loop realization matrices depending affinely on the controller parameters , which also appear in ( [ eq : determinant inequality gamma2 linear ] ) .moreover , just the inequality ( [ eq : determinant inequality gamma2 linear ] ) does not allow for the well - known projection lemma to be applied to get rid of the controller realization matrices in the synthesis inequalities . 
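As an aside on the two limiting cases recalled at the start of this section, both are easy to evaluate numerically for any stable discrete-time realization. The sketch below is purely illustrative (Python/NumPy rather than the MATLAB toolchain used later in the paper, with placeholder matrices): it computes the H2 norm from the discrete observability Gramian and approximates the H-infinity norm by a frequency sweep of the largest singular value.

```python
# Hedged sketch: H2 and Hinf norms of a stable discrete-time realization (A, B, C, D),
# i.e. the two limiting regimes of the a-anisotropic norm (the a -> 0 limit is, up to a
# normalization by the input dimension, the H2 norm; a -> infinity gives Hinf).
# Placeholder data, not taken from the paper.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
n, m_w, p_z = 4, 2, 2
A = rng.standard_normal((n, n))
A = 0.9 * A / max(abs(np.linalg.eigvals(A)))      # rescale so the spectral radius is 0.9
B = rng.standard_normal((n, m_w))
C = rng.standard_normal((p_z, n))
D = rng.standard_normal((p_z, m_w))

# H2 norm: ||F||_2^2 = trace(B' Q B + D' D), with Q the observability Gramian
# solving  A' Q A - Q + C' C = 0.
Q = solve_discrete_lyapunov(A.T, C.T @ C)
h2 = np.sqrt(np.trace(B.T @ Q @ B + D.T @ D))

# Hinf norm: sup over the unit circle of the largest singular value of
# F(e^{jw}) = C (e^{jw} I - A)^{-1} B + D, approximated on a dense frequency grid.
ws = np.linspace(0.0, np.pi, 2000)
hinf = max(
    np.linalg.norm(C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B) + D, 2)
    for w in ws
)
print(f"H2 norm ~ {h2:.4f},  Hinf norm ~ {hinf:.4f}")
```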
to overcome this obstacle , let us first move the positive definite matrix away from the determinant in ( [ eq : determinant inequality gamma2 linear ] ) by introducing a slack variable , real -matrix such that which is equivalent to ( [ eq : determinant inequality gamma2 linear ] ) .then , let us decouple the cross - products of , , and in ( [ eq : one - constraint s - procedure inequalities ] ) .for this purpose , the latter inequality in ( [ eq : one - constraint s - procedure inequalities ] ) can be rewritten as \left [ \begin{array}{cc } -\phi^{-1 } & 0\\ 0 & -i_{p_z } \end{array } \right]^{-1}\left [ \begin{array}{c } \eub\\ \eud \end{array } \right ] \prec 0,\ ] ] where \prec 0 $ ] , which is equivalent to \prec 0\ ] ] by virtue of the schur theorem ; see e.g. . to decouple the cross - products of , , and in ( [ eq : aninorm lmi 2x2 in phi eta ] ) ,represent it as -\left [ \begin{array}{c } \eua^{\mathrm{t}}\\\eub^{\mathrm{t}}\end{array } \right](-\phi^{-1})^{-1}\left [ \begin{array}{cc } \eua & \eub \end{array } \right ] \prec 0\ ] ] where evidently .then by the schur theorem the last inequality is equivalent to \prec 0.\ ] ] to decouple the cross - products of and , let us represent the inequality ( [ eq : aninorm lmi 3x3 in phi eta ] ) as \\ -\left [ \begin{array}{c } \euc^{\mathrm{t}}\\ \eud^{\mathrm{t}}\\ 0 \end{array } \right](-i_{p_z})^{-1}\left [ \begin{array}{ccc } \euc & \eud & 0 \end{array } \right]\prec 0\ ] ] where clearly .second application of the schur theorem to the above inequality gives the following formulation of sanbrl in reciprocal matrices .[ lemma : sanbrl inverse matrices ] let be a system with the state - space realization ( [ eq : cl system equations ] ) , where . then its -anisotropic norm ( [ eq : aninorm def ] ) is strictly bounded by a given threshold , i.e. if there exists such that the inequality holds true for some real -matrix and -matrix satisfying inequalities \prec 0,\ ] ] \prec 0.\ ] ] thus , with the notation , verifying if the condition holds true reduces to finding a positive scalar and two matrices , , , satisfying the lmis ( [ eq : lmi 3x3 anbrl inverse matrices ] ) , ( [ eq : lmi 4x4 anbrl inverse matrices ] ) under the convex constraint ( [ eq : det inequality anbrl inverse matrices ] ) or making sure of insolvability of this problem . for solving this nonconvex problem numerically, one can make use of known algorithms developed in suitable for finding reciprocal matrices under convex constraints . before to proceed to general synthesis problem [ problem : anisotropic suboptimal design ] ,let us consider the full - information case , when the state vector can be measured precisely and the plant is described by the equations = \left [ \begin{array}{ccc } a & b_w & b_u\\ c_z & d_{zw } & d_{zu}\\ i_{n_x } & 0 & 0 \end{array } \right ] \left [ \begin{array}{c } x_{k}\\ w_k\\ u_k \end{array } \right],\quad -\infty < k < + \infty,\ ] ] where as above all matrices are assumed to be of appropriate dimensions and is assumed to be stabilizable .[ problem : aniso sub sf ] given a ldti plant described by ( [ eq : plant full information ] ) , a mean anisotropy level of the external disturbance , and some designed threshold value , find a static state - feedback controller which internally stabilizes the closed - loop system with the state - space realization = \left [ \begin{array}{c|c } a+b_uk & b_w\\\hline c_z+d_{zu}k & d_{zw } \end{array } \right]\ ] ] and ensures its -anisotropic norm does not exceed the threshold , i.e. 
the inequality ( [ eq : anisotropic suboptimality condition ] ) holds .the following theorem gives sufficient conditions for the static state - feedback anisotropic suboptimal controller to exist .[ theorem : state - feedback problem solution ] given , , the state - feedback controller ( [ eq : static sf controller ] ) stabilizing the closed - loop system ( [ eq : cl realization sf ] ) ( ) and ensuring ( [ eq : anisotropic suboptimality condition ] ) exists if the convex problem \prec 0,\ ] ] \prec 0,\ ] ] is feasible with respect to the scalar variable , real -matrix , real -matrix , and real -matrix . if the problem ( [ eq : det psi inequality sf])([eq : pos def vars sf ] ) is feasible and the unknown variables have been found , then the state - feedback controller gain matrix is determined by .let a solution to the problem ( [ eq : det psi inequality sf])([eq : pos def vars sf ] ) exist .define . by definition of ,the lmis ( [ eq : psi lmi 3x3 sf ] ) , ( [ eq : aninorm lmi 4x4 sf ] ) can be rewritten as \prec 0,\ ] ] \prec 0.\ ] ] pre- and post - multiplying the last inequality by yields \prec 0.\ ] ] then , by lemma [ lemma : sanbrl inverse matrices ] , from ( [ eq : det psi inequality sf ] ) , ( [ eq : lmi 3x3 sf ] ) , ( [ eq : lmi 4x4 sf ] ) , ( [ eq : pos def vars sf ] ) it follows that the controller gain matrix is the solution to problem [ problem : aniso sub sf ] for the closed - loop realization ( [ eq : cl realization sf ] ) , which completes the proof .although it is not hard to prove that the synthesis inequalities ( [ eq : det psi inequality sf])([eq : pos def vars sf ] ) and the conditions ( [ eq : det inequality anbrl inverse matrices])([eq : lmi 4x4 anbrl inverse matrices ] ) of lemma [ lemma : sanbrl inverse matrices ] are equivalent , we can only establish and prove sufficient existence conditions for the controller ( [ eq : static sf controller ] ) since the conditions of lemma [ lemma : sanbrl inverse matrices ] are only sufficient .this also concerns two further synthesis theorems .the inequalities ( [ eq : det psi inequality sf])([eq : pos def vars sf ] ) are not only convex in and affine with respect to and , but also linear in obviously , minimizing under the convex constraints ( [ eq : det psi inequality sf])([eq : pos def vars sf ] ) , we minimize under the same constraints . 
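To illustrate the kind of convex program that the state-feedback theorem leads to, the sketch below sets up and solves a closely related but simpler problem: the H-infinity-type limiting case in which the anisotropic determinant constraint is dropped and the scalar variable is identified with the squared threshold itself, using the standard change of variables L = K Q of LMI-based state-feedback design. It is not the authors' anisotropic synthesis procedure; plant data, variable names and the solver choice are placeholders.

```python
# Hedged sketch (not the authors' code): discrete-time state-feedback synthesis in the
# Hinf limiting case, via the standard bounded-real-lemma LMI with variables Q = P^{-1}
# and L = K Q, and with the squared threshold g2 minimized directly.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
nx, mw, mu, pz = 3, 2, 1, 2
A = rng.standard_normal((nx, nx))
A = 0.9 * A / max(abs(np.linalg.eigvals(A)))       # placeholder stable plant
Bw, Bu = rng.standard_normal((nx, mw)), rng.standard_normal((nx, mu))
Cz = rng.standard_normal((pz, nx))
Dzw, Dzu = rng.standard_normal((pz, mw)), rng.standard_normal((pz, mu))

Q = cp.Variable((nx, nx), symmetric=True)          # inverse Lyapunov matrix
L = cp.Variable((mu, nx))                          # L = K Q
g2 = cp.Variable(nonneg=True)                      # squared performance threshold
AQ = A @ Q + Bu @ L
CQ = Cz @ Q + Dzu @ L
M = cp.bmat([
    [-Q,                 np.zeros((nx, mw)), AQ.T,               CQ.T],
    [np.zeros((mw, nx)), -g2 * np.eye(mw),   Bw.T,               Dzw.T],
    [AQ,                 Bw,                 -Q,                 np.zeros((nx, pz))],
    [CQ,                 Dzw,                np.zeros((pz, nx)), -np.eye(pz)],
])
M = 0.5 * (M + M.T)                                # enforce symmetry for the solver
eps = 1e-6
prob = cp.Problem(cp.Minimize(g2),
                  [Q >> eps * np.eye(nx), M << -eps * np.eye(nx + mw + nx + pz)])
prob.solve(solver=cp.SCS)
K = L.value @ np.linalg.inv(Q.value)               # recover the state-feedback gain
print("gamma ~", np.sqrt(g2.value),
      " closed-loop spectral radius:", max(abs(np.linalg.eigvals(A + Bu @ K))))
```

The anisotropic design described in the text adds, on top of this LMI skeleton, the coupling between the scalar variable and the squared threshold through the determinant constraint; that coupling is precisely the extra degree of freedom distinguishing the anisotropic controller from the H-infinity one.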
with the notation , the conditions of theorem [theorem : state - feedback problem solution ] allow to compute the minimal via solving the convex optimization problem if the convex problem ( [ eq : minimum gamma2 ssf ] ) is solvable , the state - feedback controller gain matrix is constructed just as in theorem [ theorem : state - feedback problem solution ] .all anisotropic controllers obtained from solutions to optimization problems like ( [ eq : minimum gamma2 ssf ] ) will be referred to as anisotropic -_optimal _ controllers .direct application of the sufficient conditions ( [ eq : det inequality anbrl inverse matrices])([eq : lmi 4x4 anbrl inverse matrices ] ) of lemma [ lemma : sanbrl inverse matrices ] to the closed - loop realization = \left [ \begin{array}{cc|c } a+b_u{d_\c } c_y & b_u{c_\c } & b_w+b_u{d_\c }d_{yw}\\ { b_\c } c_y & { a_\c } & { b_\c } d_{yw } \\\hline c_z+d_{zu}{d_\c } c_y & d_{zu}{c_\c } & d_{zw}+d_{zu}{d_\c } d_{yw } \end{array } \right]\ ] ] yields the following corollary on the straightforward solution to general problem [ problem : anisotropic suboptimal design ] .[ corollay : fo of problem solution ] given , , a dynamic output - feedback controller of order defined by ( [ eq : dynamic controller ] ) solving problem [ problem : anisotropic suboptimal design ] exists if the inequalities \prec 0,\ ] ] \prec 0,\ ] ] \succ 0,\quad \pi : = \left [ \begin{array}{cc } \pi_{11 } & \pi_{12}\\ \pi_{12}^{\mathrm{t } } & \pi_{22 } \end{array } \right]\succ 0\ ] ] are feasible with respect to the scalar variable , real -matrix , matrices , , , and two reciprocal -matrices , such that where is the closed - loop system order .thus , the problem of finding the realization matrices of the fixed - order output - feedback dynamic controller ( [ eq : dynamic controller ] ) solving problem [ problem : anisotropic suboptimal design ] leads to solving the problem ( [ eq : det inequality fo of])([eq : block inverse matrices fo of ] ) or making sure of its insolvability . the problem ( [ eq : det inequality fo of])([eq : block inverse matrices fo of ] ) is nonconvex because of the condition ( [ eq : block inverse matrices fo of ] ) .although application of the known algorithms of can leads to a successful solution of the problem ( [ eq : det inequality fo of])([eq : block inverse matrices fo of ] ) , it should be kept in mind that any of them can converge to local minima .nevertheless , the full - order controller synthesis allows for a quite standard convexification procedure which is considered below to be applied . for full - order design ( ) one can effectively apply the well - known linearizing change of variables presented in and used in in the multi - objective control framework .from the block partitioning in ( [ eq : pos def vars fo of ] ) and the condition ( [ eq : block inverse matrices fo of ] ) it follows that = \left [ \begin{array}{c } i_{n_x}\\ 0 \end{array } \right]\ ] ] which leads to with the notation ,\qquad \pi_1 : = \left [ \begin{array}{cc } \pi_{11 } & i_{n_x}\\ \pi_{12}^{\mathrm{t } } & 0 \end{array } \right].\ ] ] it can be easily shown by direct calculationthat .\ ] ] the key linearizing change of the controller variables is defined as follows the new variables , , , dimensions , , , and , respectively , even if . 
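For completeness, the closed-loop realization displayed earlier in this subsection is a routine block assembly; the short sketch below (illustrative Python, placeholder matrices and dimensions of our choosing) builds it from the plant matrices and an arbitrary compensator of order n_xi.

```python
# Hedged sketch: assembling the closed-loop realization used throughout the paper from
# the plant (A, Bw, Bu, Cz, Dzw, Dzu, Cy, Dyw) and a dynamic output-feedback compensator
# (Ac, Bc, Cc, Dc).  All matrix values are placeholders.
import numpy as np

def closed_loop(A, Bw, Bu, Cz, Dzw, Dzu, Cy, Dyw, Ac, Bc, Cc, Dc):
    """Return (Acl, Bcl, Ccl, Dcl) of the plant/controller interconnection."""
    Acl = np.block([[A + Bu @ Dc @ Cy, Bu @ Cc],
                    [Bc @ Cy,          Ac     ]])
    Bcl = np.block([[Bw + Bu @ Dc @ Dyw],
                    [Bc @ Dyw          ]])
    Ccl = np.block([[Cz + Dzu @ Dc @ Cy, Dzu @ Cc]])
    Dcl = Dzw + Dzu @ Dc @ Dyw
    return Acl, Bcl, Ccl, Dcl

# tiny smoke test with arbitrary dimensions (nx=3, mw=2, mu=1, pz=2, py=1, nxi=3)
rng = np.random.default_rng(1)
nx, mw, mu, pz, py, nxi = 3, 2, 1, 2, 1, 3
A, Bw, Bu = rng.random((nx, nx)), rng.random((nx, mw)), rng.random((nx, mu))
Cz, Dzw, Dzu = rng.random((pz, nx)), rng.random((pz, mw)), rng.random((pz, mu))
Cy, Dyw = rng.random((py, nx)), rng.random((py, mw))
Ac, Bc = rng.random((nxi, nxi)), rng.random((nxi, py))
Cc, Dc = rng.random((mu, nxi)), rng.random((mu, py))
Acl, Bcl, Ccl, Dcl = closed_loop(A, Bw, Bu, Cz, Dzw, Dzu, Cy, Dyw, Ac, Bc, Cc, Dc)
print(Acl.shape, Bcl.shape, Ccl.shape, Dcl.shape)   # (6,6) (6,2) (2,6) (2,2)
```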
it is noted in that if and have full row rank and if , , , , , and are known , one can always find the controller matrices , , , satisfying ( [ eq : euac])([eq : eudc ] ) .if the matrices and are square ( ) and invertible , then , , , and are unique , i.e. for full - order design , when one can always assume that and have full row rank , the mapping defined by ( [ eq : euac])([eq : eudc ] ) is bijective .more details can be found in , .the solution to problem [ problem : anisotropic suboptimal design ] in the full - order design case is given by [ theorem : full - order output - feedback problem solution ] given , , a dynamic output - feedback controller of full order defined by ( [ eq : dynamic controller ] ) solving problem [ problem : anisotropic suboptimal design ] exists if the convex problem \prec 0,\ ] ] \prec 0,\ ] ] \succ 0\ ] ] is feasible with respect to the scalar variable , real -matrix , matrices , , , and two real -matrices , .if the problem ( [ eq : det inequality of fullord])([eq : fullord arg posdef condition ] ) is feasible and the unknown variables have been found , then the controller matrices , , , are uniquely defined by and determined from finding two nonsingular -matrices , that satisfy let a solution to ( [ eq : det inequality of fullord])([eq : fullord arg posdef condition ] ) exist .from ( [ eq : phi1 pi1 def])([eq : eudc ] ) and ( [ eq : cl realization fo of ] ) it follows that = \phi_1^{\mathrm{t}}\eua\pi_1,\quad \left [ \begin{array}{c } b_w+b_u\eudc d_{yw}\\ \phi_{11}b_w+\eubc d_{yw } \end{array } \right ] = \phi_1^{\mathrm{t}}\eub,\ ] ] = \euc\pi_1,\quad \left[\begin{array}{cc } \pi_{11 } & i_{n_x}\\ i_{n_x } & \phi_{11 } \end{array } \right ] = \pi_1^{\mathrm{t}}\phi\pi_1 = \phi_1^{\mathrm{t}}\pi\phi_1,\ ] ] where and are defined by ( [ eq : pos def vars fo of ] ) and satisfy ( [ eq : block inverse matrices fo of ] ) with .substitution of the above identities to the inequalities ( [ eq : lmi 4x4 of fullord ] ) , ( [ eq : lmi 6x6 of fullord ] ) yields \prec 0,\quad \left [ \begin{array}{cccc } -\pi_1^{\mathrm{t}}{\phi}\pi_1 & 0 & \pi_1^{\mathrm{t}}\euscript{a}^{\mathrm{t}}\phi_1 & \pi_1^{\mathrm{t}}\euscript{c}^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \euscript{b}^{\mathrm{t}}\phi_1 & \euscript{d}^{\mathrm{t}}\\ \phi_1^{\mathrm{t}}\euscript{a}\pi_1 & \phi_1^{\mathrm{t}}\euscript{b } & -\phi_1^{\mathrm{t}}\pi\phi_1 & 0\\ \euscript{c}\pi_1 & \euscript{d } & 0 & -i_{p_z } \end{array } \right ] \prec 0.\ ] ] performing a congruence transformation with on the inequalities ( [ eq : anbrl fullord lmis ] ) , respectively , leads to \prec 0,\quad \left [ \begin{array}{cccc } -{\phi } & 0 & \euscript{a}^{\mathrm{t } } & \euscript{c}^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \euscript{b}^{\mathrm{t } } & \euscript{d}^{\mathrm{t}}\\ \euscript{a } & \euscript{b } & -\pi & 0\\ \euscript{c } & \euscript{d } & 0 & -i_{p_z } \end{array } \right ] \prec 0.\ ] ] then , by lemma [ lemma : sanbrl inverse matrices ] , from ( [ eq : det inequality of fullord ] ) , ( [ eq : anbrl fullord lmis implicit ] ) , ( [ eq : pos def vars fo of ] ) , ( [ eq : block inverse matrices fo of ] ) it follows that the closed - loop system ( [ eq : cl realization fo of ] ) is internally stable and its -anisotropic norm does not exceed the designed threshold , i.e. 
the inequality ( [ eq : anisotropic suboptimality condition ] ) holds .the procedure of reconstruction of the controller realization from the solution variables by ( [ eq : block inverse condition 1 ] ) , ( [ eq : dc backtrans])([eq : ac backtrans ] ) is quite standard , .as the inequalities ( [ eq : det inequality of fullord])([eq : fullord arg posdef condition ] ) are also linear in , the conditions of theorem [ theorem : full - order output - feedback problem solution ] allow to compute the minimal via solving the convex optimization problem if the convex problem ( [ eq : minimum gamma2 fo of ] ) is solvable , the controller matrices are constructed just as in theorem [ theorem : full - order output - feedback problem solution ] .it is stressed in that the applied synthesis procedure does not introduce any conservatism , if the analysis result does not involve any .the results of theorem [ theorem : full - order output - feedback problem solution ] make possible application of the anisotropic norm as a closed - loop performance specification or objective for specific closed - loop channels in the multi - objective control problems based on a common lyapunov functions together with other performance specifications and objectives that can be captured in the lmi framework .let us now consider the special and very important case of static output - feedback controller [ problem : aniso sub sof ] given ldti plant described by ( [ eq : standard plant ] ) , a mean anisotropy level of the external disturbance , and some designed threshold value , find the static output - feedback controller ( [ eq : static of controller ] ) which internally stabilizes the closed - loop system with the state - space realization = \left [ \begin{array}{c|c } a+b_uk c_y & b_w+b_uk d_{yw}\\\hline c_z+d_{zu}k c_y & d_{zw}+d_{zu}k d_{yw } \end{array } \right]\ ] ] and ensures its -anisotropic norm does not exceed the threshold , i.e. 
direct application of the sufficient conditions ( [ eq : det inequality anbrl inverse matrices])([eq : lmi 4x4 anbrl inverse matrices ] ) of lemma [ lemma : sanbrl inverse matrices ] to the closed - loop realization ( [ eq : cl realization sof ] ) yields the following corollary on the straightforward solution to problem [ problem : aniso sub sof ] .[ corollary : sof problem solution nonconv ] given , , the static output - feedback controller ( [ eq : static of controller ] ) solving problem [ problem : aniso sub sof ] exists if the inequalities \prec 0,\ ] ] \prec 0,\ ] ] are feasible with respect to the scalar variable , real -matrix , real -matrix , and two reciprocal real -matrices , such that so , the problem of finding the output - feedback gain matrix solving problem [ problem : aniso sub sof ] leads to solving the problem ( [ eq : det inequality sof nonconv])([eq : inverse matrices sof nonconv ] ) or making sure of its insolvability .the inequalities ( [ eq : det inequality sof nonconv])([eq : inverse matrices sof nonconv ] ) derived from the straightforward application of lemma [ lemma : sanbrl inverse matrices ] are not convex because of the condition ( [ eq : inverse matrices sof nonconv ] ) .one can try to solve this general problem by the algorithms of suitable for finding reciprocal matrices under convex constraints .however , the specific linearizing change of variables presented in can make the resulting optimization problem convex for a specific class of plants defined by a certain structural property .namely , suppose that the transfer function of the plant ( [ eq : standard plant ] ) from the control input to measured output vanishes , i.e. for the stabilizable and detectable plant ( [ eq : standard plant ] ) , if ( [ eq : tyu vanishes ] ) holds , then there exists a similarity transformation such that = \left [ \begin{array}{cc|cc } a_{11 } & a_{12 } & b_{w_1 } & b_{u_1}\\ 0 & a_{22 } & b_{w_2 } & 0\\\hline c_{z_1 } & c_{z_2 } & d_{zw } & d_{zu}\\ 0 & c_{y_2 } & d_{yw } & 0 \end{array } \right]\ ] ] where is controllable , is observable , and the matrix is stable ; see also .the representation ( [ eq : kalman canonical decomposition sof ] ) implies that the closed - loop system realization after static output feedback becomes = \left [ \begin{array}{cc|c } a_{11 } & a_{12}+b_{u_1}k c_{y_2 } & b_{w_1}+b_{u_1}k d_{yw}\\ 0 & a_{22 } & b_{w_2}\\\hline c_{z_1 } & c_{z_2}+d_{zu}k c_{y_2 } & d_{zw}+d_{zu}k d_{yw } \end{array } \right].\ ] ] the lyapunov matrix in the inequalities ( [ eq : lmi 3x3 anbrl inverse matrices ] ) , ( [ eq : lmi 4x4 anbrl inverse matrices ] ) of lemma [ lemma : sanbrl inverse matrices ] is partitioned according to the representation of in ( [ eq : cl realization kalman decomp sof ] ) as \succ0.\ ] ] the key linearizing change of variables is defined in as = \left [ \begin{array}{cc } \phi_{11}^{-1 } & -\phi_{11}^{-1}\phi_{12}\\ -\phi_{12}^{\mathrm{t}}\phi_{11}^{-1 } & \phi_{22}-\phi_{12}^{\mathrm{t}}\phi_{11}^{-1}\phi_{12 } \end{array } \right].\ ] ] it is noted in that the transformation ( [ eq : scherer linearizing change sof ] ) maps the set of all positive definite matrices into the set of all matrices with positive definite diagonal blocks ; this map is bijective ; its inverse is given by = \left [ \begin{array}{cc } q^{-1 } & -q^{-1}s\\ -s^{\mathrm{t}}q^{-1 } & r - s^{\mathrm{t}}q^{-1}s \end{array } \right].\ ] ] the transformation ( [ eq : scherer linearizing change sof ] ) is motivated by the factorization with ,\qquad p_2 : = \left [ 
\begin{array}{cc }i & -s\\ 0 & r \end{array } \right].\ ] ] [ theorem : sof problem solution convex ] suppose that the plant described by ( [ eq : standard plant ] ) is such that , i.e. ( [ eq : tyu vanishes ] ) holds .given , , a static output - feedback controller defined by ( [ eq : static of controller ] ) solving problem [ problem : aniso sub sof ] exists if the convex problem \prec 0,\ ] ] \prec 0,\ ] ] is feasible with respect to the scalar variable , real -matrix , controller gain matrix and real matrices , , and .let a solution to ( [ eq : det inequality sof conv])([eq : pos def vars sof conv ] ) exist . then from ( [ eq : scherer congr trans fact ] ) , ( [ eq : cl realization kalman decomp sof ] ) , ( [ eq : scherer back linearizing change sof ] ) it follows that & = & p_1\phi p_1^{\mathrm{t}},\\ \left [ \begin{array}{cc } a_{11}q & a_{11}s - sa_{22}+a_{12}+b_{u_1}k c_{y_2}\\ 0 & ra_{22 } \end{array } \right ] & = & p_1\phi\eua p_1^{\mathrm{t}},\\ \left [ \begin{array}{c } b_{w_1}+b_{u_1}k d_{yw}-sb_{w_2}\\ rb_{w_2 } \end{array } \right ] & = & p_1\phi\eub,\\\label{eq : transfromation identities scherer sof 4 } \left [ \begin{array}{cc } c_{z_1}q & c_{z_1}s+c_{z_2}+d_{zu}k c_{y_2 } \end{array } \right ] & = & \euc p_1^{\mathrm{t}}.\end{aligned}\ ] ] substituting the identities ( [ eq : transfromation identities scherer sof 1])([eq : transfromation identities scherer sof 4 ] ) to the lmis ( [ eq : lmi 3x3 sof conv ] ) , ( [ eq : lmi 6x6 sof conv ] ) , we have \prec 0,\quad \left [ \begin{array}{cccc } -p_1\phi p_1^{\mathrm{t } } & 0 & p_1\eua^{\mathrm{t}}\phi p_1^{\mathrm{t } } & p_1\euc^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \eub^{\mathrm{t}}\phi p_1^{\mathrm{t } } & \eud^{\mathrm{t}}\\ p_1\phi\eua p_1^{\mathrm{t } } & p_1\phi\eub & -p_1\phi p_1^{\mathrm{t } } & 0\\ \euc p_1^{\mathrm{t } } & \eud & 0 & -i_{p_z } \end{array } \right ] \prec 0.\ ] ] performing a congruence transformation with where is defined by ( [ eq : scherer congr trans fact ] ) , on the inequalities ( [ eq : lmis 3x3 4x4 sof cl ] ) , respectively , yields \prec 0,\qquad \left [ \begin{array}{cccc } -\phi & 0 & \eua^{\mathrm{t}}\phi & \euc^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \eub^{\mathrm{t}}\phi & \eud^{\mathrm{t}}\\ \phi\eua & \phi\eub & -\phi & 0\\ \euc & \eud & 0 & -i_{p_z } \end{array } \right]\prec 0.\ ] ] pre- and post - multiplying the inequalities ( [ eq : anbrl sof bmis ] ) by respectively , we have \prec 0,\qquad \left [ \begin{array}{cccc } -\phi & 0 & \eua^{\mathrm{t } } & \euc^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \eub^{\mathrm{t } } & \eud^{\mathrm{t}}\\ \eua & \eub & -\phi^{-1 } & 0\\ \euc & \eud & 0 & -i_{p_z } \end{array } \right]\prec 0.\ ] ] then , by lemma [ lemma : sanbrl inverse matrices ] , from ( [ eq : det inequality sof conv ] ) , ( [ eq : anbrl sof lmis ] ) , ( [ eq : pos def vars sof conv ] ) , ( [ eq : phi partition scherer sof ] ) it follows that the controller gain matrix is the solution to problem [ problem : aniso sub sof ] for the plant ( [ eq : kalman canonical decomposition sof ] ) and the closed - loop system ( [ eq : cl realization kalman decomp sof ] ) , which completes the proof .[ corollary : sof problem convex opt ] the convex constraints ( [ eq : det inequality sof conv])([eq : pos def vars sof conv ] ) are also linear in . 
with the notation , the conditions of theorem [ theorem : sof problem solution convex ] allow for to be minimized via solving the convex optimization problem the controller gain matrix enters the synthesis lmis ( [ eq : lmi 3x3 sof conv ] ) , ( [ eq : lmi 6x6 sof conv ] ) directly .it is noted in that this allows for some structural requirements on this controller gain to be incorporated making possible even the synthesis of decentralized controllers ( with block - diagonal ) via convex optimization .the results of theorem [ theorem : sof problem solution convex ] make possible application of the anisotropic norm as a closed - loop performance specification or objective for specific closed - loop channels in the multi - objective control problems with lmi specifications considered in .it should be also noted that in general case , when the structural property ( [ eq : tyu vanishes ] ) does not hold , one can follow the way of and make use of the youla - kuera parametrization of stabilizing controller to parametrize affinely the closed - loop system , enforce the said property , and bring the closed - loop realization to the form ( [ eq : kalman canonical decomposition sof ] ) .then the synthesis of the anisotropic controller can be treated as finding the youla parameter that enters the closed - loop system affinely by applying the results of theorem [ theorem : sof problem solution convex ] and corollary [ corollary : sof problem convex opt ] . besides the class of systems which satisfy the structural property ( [ eq : tyu vanishes ] ) , there are two particular cases of the system s structure which allow for the static output - feedback design problem to lead to some convex optimization problem by applying a nonsingular state coordinate transformation and introducing structured slack variables just as it was done for synthesis problem in .these cases are the so called singular control and filtering problems .let us first consider the singular control problem when the matrix of the plant ( [ eq : standard plant ] ) is zero and the matrix is of full column rank .then there exists a nonsingular state coordinate transformation matrix such that .\ ] ] under this transformation , the plant realization matrices become [ theorem : singular control sof convex ] suppose that the plant described by ( [ eq : standard plant ] ) is such that and .given , , a static output - feedback controller defined by ( [ eq : static of controller ] ) solving problem [ problem : aniso sub sof ] for the closed - loop realization = \left [ \begin{array}{c|c } a+b_uk c_y & b_w+b_uk d_{yw}\\\hline c_z & d_{zw } \end{array } \right]\ ] ] exists if the convex problem \prec 0,\ ] ] \prec 0,\ ] ] where , , , are defined by ( [ eq : realization under tu ] ) , is feasible with respect to the scalar variable , real -matrix , -matrix , and two structured matrix variables ,\quad l : = \left [ \begin{array}{c } l_1\\ 0 \end{array } \right].\ ] ] if the problem ( [ eq : det inequality sof conv sc])([eq : pos def vars sof conv sc ] ) is feasible and the unknown variables have been found , then the output - feedback controller gain matrix is determined by . the proof is similar to that of where it is derived for the norm performance criterion .let a solution to the problem ( [ eq : det inequality sof conv sc])([eq : pos def vars sof conv sc ] ) exist . 
performing a congruence transformation with on the inequalities ( [ eq : lmi 3x3 sof conv sc ] ) ,( [ eq : lmi 4x4 sof conv sc ] ) , respectively , leads to \prec 0,\ ] ] \prec 0\ ] ] where the plant realization matrices are derived from the backward transformation of ( [ eq : realization under tu ] ) .let us denote , .then from ( [ eq : s l def ] ) and definition of it follows that = t_u^{\mathrm{t}}\left [ \begin{array}{cc } \bs_1 & 0\\ 0 & \bs_2 \end{array } \right]\left [ \begin{array}{c } i_{m_u}\\0 \end{array } \right]k = t_u^{\mathrm{t}}\bs\bb_u k = sb_uk,\ ] ] and the above lmis can be rewritten as \prec 0,\ ] ] \prec 0,\ ] ] or , in terms of the closed - loop realization ( [ eq : cl realization sof singular control ] ) , as \prec 0,\quad \left [ \begin{array}{cccc } -\phi & 0 & \eua^{\mathrm{t}}s^{\mathrm{t } } & \euc^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \eub^{\mathrm{t}}s^{\mathrm{t } } & \eud^{\mathrm{t}}\\ s\eua & s\eub & \phi - s - s^{\mathrm{t } } & 0\\ \euc & \eud & 0 & -i_{p_z } \end{array } \right ] \prec 0.\ ] ] then , performing a congruence transformation with on the last inequalities , respectively , we have \prec 0,\ ] ] \prec 0.\ ] ] from the inequality it is clear that then , by lemma [ lemma : sanbrl inverse matrices ] , from ( [ eq : det inequality sof conv sc ] ) , ( [ eq : aninorm lmi 3x3 sof sc slack ] ) , ( [ eq : aninorm lmi 4x4 sof sc slack ] ) , ( [ eq : pos def vars sof conv sc ] ) it follows that the controller gain matrix is the solution to problem [ problem : aniso sub sof ] for the closed - loop realization ( [ eq : cl realization sof singular control ] ) , which completes the proof . unlike the proofs of theorems [ theorem : state - feedback problem solution][theorem : sof problem solution convex ] , there is no equivalence between the synthesis inequalities ( [ eq : det inequality sof conv sc])([eq : pos def vars sof conv sc ] ) and the conditions ( [ eq : det inequality anbrl inverse matrices])([eq : lmi 4x4 anbrl inverse matrices ] ) of lemma [ lemma : sanbrl inverse matrices ] . 
the synthesis lmis ( [ eq : lmi 3x3 sof conv sc ] ) , ( [ eq : lmi 4x4 sof conv sc ] ) establich only sufficient conditions for the inequalities ( [ eq : lmi 3x3 anbrl inverse matrices ] ) , ( [ eq : lmi 4x4 anbrl inverse matrices ] ) of lemma [ lemma : sanbrl inverse matrices ] to be solvable .this also concerns a synthesis theorem below .[ corollary : sof problem convex opt sc ] with the notation , the conditions of theorem [ theorem : singular control sof convex ] allow for to be minimized via solving the convex optimization problem if the problem ( [ eq : minimum gamma2 sof sc ] ) is solvable , the controller gain matrix is constructed just as in theorem [ theorem : singular control sof convex ] .now consider the singular filtering problem when the matrix of the plant ( [ eq : standard plant ] ) is zero and the matrix is of full row rank .then there exists a nonsingular state coordinate transformation matrix such that .\ ] ] under this transformation , the plant realization matrices become [ theorem : singular filtering sof convex ] suppose that the plant described by ( [ eq : standard plant ] ) is such that and .given , , a static output - feedback controller defined by ( [ eq : static of controller ] ) solving problem [ problem : aniso sub sof ] for the closed - loop realization = \left [ \begin{array}{c|c } a+b_uk c_y & b_w\\\hline c_z+d_{zu}kc_y & d_{zw } \end{array } \right]\ ] ] exists if the convex problem \prec 0,\ ] ] \prec 0,\ ] ] where , , , are defined by ( [ eq : realization under ty ] ) , is feasible with respect to the scalar variable , real -matrix , -matrix , and two structured matrix variables ,\quad m : = \left [ \begin{array}{cc } m_1 & 0 \end{array } \right].\ ] ] if the problem ( [ eq : det inequality sof conv sf])([eq : pos def vars sof conv sf ] ) is feasible and the unknown variables have been found , then the output - feedback controller gain matrix is determined by .the proof is dual to that of theorem [ theorem : singular control sof convex ] and similar to that of where it is derived for the norm performance criterion .let a solution to the problem ( [ eq : det inequality sof conv sf])([eq : pos def vars sof conv sf ] ) exist .substitute the realization matrices defined by ( [ eq : realization under ty ] ) to the lmis ( [ eq : lmi 3x3 sof conv sf ] ) , ( [ eq : lmi 4x4 sof conv sf ] ) . perform a congruence transformation with on the lmis ( [ eq : lmi 3x3 sof conv sf ] ) , ( [ eq : lmi 4x4 sof conv sf ] ) , respectively . then define and . 
from ( [ eq : r m def ] ) and definition of it follows that and the lmis ( [ eq : lmi 3x3 sof conv sf ] ) , ( [ eq : lmi 4x4 sof conv sf ] ) can be rewritten as \prec 0,\ ] ] \prec 0,\ ] ] or , in terms of the closed - loop realization ( [ eq : cl realization sof singular filtering ] ) , as \prec 0,\quad \left [ \begin{array}{cccc } \pi - r - r^{\mathrm{t } } & 0 & r^{\mathrm{t}}\eua^{\mathrm{t } } & r^{\mathrm{t}}\euc^{\mathrm{t}}\\ 0 & -\eta i_{m_w } & \eub^{\mathrm{t } } & \eud^{\mathrm{t}}\\ \eua r & \eub & -\pi & 0\\ \euc r & \eud & 0 & -i_{p_z } \end{array } \right ] \prec 0.\ ] ] then , performing a congruence transformation with on the last inequality leads to \prec 0.\ ] ] from the inequality it is clear that let us define .then , by lemma [ lemma : sanbrl inverse matrices ] , from ( [ eq : det inequality sof conv sf ] ) , ( [ eq : aninorm lmis sof sf slack ] ) , ( [ eq : aninorm lmi 4x4 sof sf slack ] ) , ( [ eq : pos def vars sof conv sf ] ) it follows that the controller gain matrix is the solution to problem [ problem : aniso sub sof ] for the closed - loop realization ( [ eq : cl realization sof singular filtering ] ) , which completes the proof .[ corollary : sof problem convex opt sf ] with the notation , the conditions of theorem [ theorem : singular filtering sof convex ] allow for to be minimized via solving the convex optimization problem if the problem ( [ eq : minimum gamma2 sof sf ] ) is solvable , the controller gain matrix is constructed just as in theorem [ theorem : singular filtering sof convex ] . it is noted in that since the singular control and filtering problems are dual , the convex feasibility problems ( [ eq : det inequality sof conv sc])([eq : pos def vars sof conv sc ] ) and ( [ eq : det inequality sof conv sf])([eq : pos def vars sof conv sf ] ) of theorems [ theorem : singular control sof convex ] and [ theorem : singular filtering sof convex ] are in a sense dual too , just as the convex optimization problems ( [ eq : minimum gamma2 sof sc ] ) and ( [ eq : minimum gamma2 sof sf ] ) of corollaries [ corollary : sof problem convex opt sc ] and [ corollary : sof problem convex opt sf ] . replacing the realization matrices and the variables under the coordinate transformation in the formulas of theorem [ theorem : singular control sof convex ] and corollary [ corollary : sof problem convex opt sc ] as we obtain the respective formulas of theorem [ theorem : singular filtering sof convex ] and corollary [ corollary : sof problem convex opt sf ] and the controller is given by it is shown in that the results of theorem [ theorem : singular control sof convex ] and corollary [ corollary : sof problem convex opt sc ] can be applied to synthesis of decentralized anisotropic suboptimal and -optimal static output - feedback and fixed - order controllers . in turn ,theorem [ theorem : singular filtering sof convex ] and corollary [ corollary : sof problem convex opt sf ] allow to get a solution to simultaneous anisotropic output - feedback control problems .these topics are beyond the limits of this paper and may be discussed elsewhere .it is well - known ( see e.g. 
) that the fixed - order dynamic controller synthesis problem can be embedded into a static output - feedback design problem by augmentation of the plant states with the controller states as : = \left [ \begin{array}{cc|c|cc } a & 0 & b_w & 0 & b_u\\ 0 & 0 & 0 & i_{n_\xi } & 0\\\hline c_z & 0 & d_{zw } & 0 & d_{zu}\\\hline 0 & i_{n_\xi } & 0 & 0 & 0\\ c_y & 0 & d_{yw } & 0 & 0 \end{array } \right].\ ] ] the closed - loop realization is then given by = \left [ \begin{array}{cc } \cala & \calb_w\\ \calc_z & \cald_{zw } \end{array } \right ] + \left [ \begin{array}{c } \calb_u\\ \cald_{zu } \end{array } \right]k\left [ \begin{array}{cc } \calc_y & \cald_{yw } \end{array } \right ] = \left [ \begin{array}{cc } \cala+\calb_uk\calc_y & \calb_w+\calb_uk\cald_{zw}\\ \calc_z+\cald_{zw}k\calc_y & \cald_{zw}+\cald_{zu}k\cald_{yw } \end{array } \right]\ ] ] where the gain matrix incorporates the controller parameters .\ ] ] therefore , if the realization of the plant ( [ eq : standard plant ] ) has one of the matrices or identically zero with or of full column / row rank , respectively , we can make use of theorem [ theorem : singular control sof convex ] and corollary [ corollary : sof problem convex opt sc ] or theorem [ theorem : singular filtering sof convex ] and corollary [ corollary : sof problem convex opt sf ] to find the fixed - order anisotropic -optimal ( suboptimal ) controller as the static output - feedback gain ( [ eq : augmented theta def ] ) for the realization ( [ eq : plant augmentation sof ] ) of the augmented plant .in this section we provide several purely illustrative numerical examples of the anisotropic -optimal controller design via convex optimization .only two special design cases are considered , namely , the full - order output - feedback controller and static output - feedback gain defined in theorems [ theorem : full - order output - feedback problem solution ] and [ theorem : singular filtering sof convex ] , respectively . as regards general problems [ problem : anisotropic suboptimal design ] ,[ problem : aniso sub sof ] of the anisotropic suboptimal controller design with the solutions defined by corollaries [ corollay : fo of problem solution ] , [ corollary : sof problem solution nonconv ] , testing and benchmark of various algorithms for finding reciprocal matrices under convex constraints ( e.g. , ) is the issue of future work and will be presented elsewhere .however , it should be mentioned that the algorithms of have been tested on some reasonable number of state - space realizations randomly generated by the matlab control systems toolbox function ` drss ` and some models from the _ comp _collection .the numerical experiments have shown that application of both of that algorithms often leads to convergence to local minima and depend on initial conditions .the randomized technique proposed in aimed at generation of the initial conditions seems to be able to improve the situation .all computations have been carried out by means of matlab 7.9.0 ( r2009b ) , control system toolbox , and robust control toolbox in combination with the yalmip interface and the sedumi solver with cpu p8700 .first we consider the problem of longitudinal flight control in landing approach under the influence of both deterministic and stochastic external disturbances in conditions of a windshear and noisy measurements .the control aims at disturbance attenuation and stabilization of the aircraft longitudinal motion along some desired glidepath . 
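Returning briefly to the augmentation described at the beginning of this section, the block structure of the augmented plant and of the static gain collecting the controller parameters can be sketched as follows (illustrative Python; dimensions and matrix values are placeholders). The final assertion checks that closing the static loop on the augmented plant reproduces the usual dynamic output-feedback closed-loop state matrix.

```python
# Hedged sketch: embedding a fixed-order dynamic design into a static output-feedback
# one by augmenting the plant state with the controller state, following the block
# structure displayed at the start of this section.  Values are placeholders.
import numpy as np

rng = np.random.default_rng(2)
nx, mw, mu, pz, py, nxi = 3, 2, 1, 2, 1, 2          # nxi = desired controller order
A, Bw, Bu = rng.random((nx, nx)), rng.random((nx, mw)), rng.random((nx, mu))
Cz, Dzw, Dzu = rng.random((pz, nx)), rng.random((pz, mw)), rng.random((pz, mu))
Cy, Dyw = rng.random((py, nx)), rng.random((py, mw))

calA   = np.block([[A, np.zeros((nx, nxi))], [np.zeros((nxi, nx + nxi))]])
calBw  = np.vstack([Bw, np.zeros((nxi, mw))])
calBu  = np.block([[np.zeros((nx, nxi)), Bu], [np.eye(nxi), np.zeros((nxi, mu))]])
calCz  = np.hstack([Cz, np.zeros((pz, nxi))])
calDzu = np.hstack([np.zeros((pz, nxi)), Dzu])
calCy  = np.block([[np.zeros((nxi, nx)), np.eye(nxi)], [Cy, np.zeros((py, nxi))]])
calDyw = np.vstack([np.zeros((nxi, mw)), Dyw])

# an arbitrary (not necessarily stabilizing) compensator packed as the static gain Theta
Ac, Bc = rng.random((nxi, nxi)), rng.random((nxi, py))
Cc, Dc = rng.random((mu, nxi)), rng.random((mu, py))
Theta = np.block([[Ac, Bc], [Cc, Dc]])

# closing the static loop reproduces the dynamic output-feedback closed-loop "A" matrix
lhs = calA + calBu @ Theta @ calCy
rhs = np.block([[A + Bu @ Dc @ Cy, Bu @ Cc], [Bc @ Cy, Ac]])
print(np.allclose(lhs, rhs))                         # True
```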
the linearized discrete time - invariant model of tu-154 aircraft landing is given in , where the problem was solved by means of the anisotropic optimal controller derived in .here we present the results of solving the anisotropic -optimal full - order synthesis problem via convex optimization as defined in theorem [ theorem : full - order output - feedback problem solution ] and corollary [ corollay : fo of problem solution ] . the mathematical model of the aircraft longitudinal motion defining deviation from a nominal trajectorywas derived in at the trajectory point characterized by the airspeed m / sec , flying path slope angle deg , pitch angle rate deg / sec , pitch angle deg , height m , and thrust newton .the model has order , two control inputs ( the signal generated by the controller to deflect the generalized ailerons and the throttle lever position ) and two measured outputs ( the airspeed and the height ) .the sampling time of the model sec .the anisotropic -optimal controller was derived from a solution to the convex optimization problem ( [ eq : minimum gamma2 fo of ] ) as defined in theorem [ theorem : full - order output - feedback problem solution ] .the state - space realization of the anisotropic -optimal controller computed for the mean anisotropy level is presented below together with the realizations of and optimal controllers and computed by matlab robust control toolbox functions ` h2syn ` ( riccati equations technique ) and ` hinfsyn ` ( lmi optimization technique ) : ,\ ] ] ,\ ] ] .\ ] ] the results of simulation of the closed - loop systems in conditions of a windshear and noisy measurements are presented together with the problem solution results in table [ table : tu-154 ] below and illustrated in figures [ fig : tu154 output][fig : tu154 disturb ] . in the simulationwe use a typical wind profile described by the ring vortex downburst model . 
[table : tu-154 ] & & & & + + & & 0.516 & 5.4203 & 10.894 + & & 0.516 & 1.1473 & 3.1448 + & & 7.8391 & 5.1768 & 5.5944 + & & 15.855 & 10.93 & 10.891 + cpu time , & sec & 0.78001 & 5.928 & 1.7004 + + & m / sec & 11.3 & 3.559 & 4.329 + & m & 54.79 & 46.87 & 39.79 + & deg & 14.86 & 16.04 & 31.6 + & deg / sec & 4.884 & 5.043 & 10.56 + & deg & 19.06 & 19 & 38.08 + & kn & 7.263 & 22.58 & 42.48 + & deg & 20.7 & 20.8 & 21.91 + & deg & 8.224 & 29.25 & 29.23 + from the solution results in table [ table : tu-154 ] we can conclude that * the respective minimum square root values of the objective functions satisfy ; * the -anisotropic norm of the closed - loop system with the anisotropic -optimal controller satisfies the controller is actually suboptimal .analysis of the simulation results presented in table [ table : tu-154 ] and figures [ fig : tu154 output][fig : tu154 disturb ] shows that * the anisotropic -optimal controller results in the least maximal absolute deviation of the airspeed and admissible maximal absolute deviation of the height ; * the worst maximal absolute deviations of the controlled variables are demonstrated by the optimal controller ; * the anisotropic controller provides the maximal absolute deviation of the thrust required for the manoeuvre _ almost two times less _ than the additional thrust required by the system with the controller ; * the same concerns the maximal absolute deviations of the trajectory slope angle , pitch rate , and pitch ; * the least maximal additional thrust is required by the closed - loop system with the optimal controller ; * the maximal values of the control signals of the anisotropic and controllers are close , the control generated by the anisotropic controller looks more realistic .the anisotropic -optimal controller is obviously more effective than the controller and less conservative than the controller in this example of the disturbance attenuation problem ., height ( left plots ) and control signals , ( right plots),width=642 ] , pitch angle rate ( left plots ) , pitch angle , thrust ( right plots),width=642 ] the anisotropic -optimal full - order controllers have been computed for some models from the _ comp _collection listed below in table [ table : complib foc ] .all of them were converted from continuous- to discrete - time models with the sampling time .it is known from that almost all of these models ( excepting roc5 ) are sof - stabilizable , but here the respective problems are solved by the dynamic full - order output - feedback controllers for the testing purpose solely . in it is shown that satisfying the conditions of sanbrl with ensures the and norms not to exceed a given threshold value .therefore the and controllers for the respective problems have also been derived as the limiting cases of the anisotropic controller from a solution to the convex optimization problem ( [ eq : minimum gamma2 fo of ] ) as defined in theorem [ theorem : full - order output - feedback problem solution ] but with the respective input mean anisotropy levels and ..examples from the _ comp collection . full - order design [ cols="^,^,^,^,^,^,^,^,^,^ " , ] for the purely illustrative purpose , below we present the solution and simulation results for the aircraft control problem ( ac1 ) initially considered in .the model ac1 from the _ comp _ collection is recast into a disturbance attenuation singular filtering problem with noiseless measurements . 
the anisotropic -optimal static gain computed for the mean anisotropy level presented below together with the and gains and ,\ ] ] ,\ ] ] .\ ] ] the results of simulation of the closed - loop systems in conditions of a windshear are presented together with the problem solution results in table [ table : ac1 ] below and illustrated in figures [ fig : ac1 out contr][fig : ac1 responses ] . in the simulation we use the same wind profile as in the example of tu-154 aircraft flight control in section [ subsubsect : tu-154 ] .[ table : ac1 ] & & & & + + & & 0.00045695 & 0.0034448 & 0.0036873 + & & & & + & & 0.00050863 & & + & & 0.00075676 & & + cpu time , & sec & 0.81121 & 3.042 & 0.546 + + & m / sec & & & + & deg & 0.0003134 & & + & m & 3.152 & 3.412 & 3.35 + & m / sec & 0.1647 & 0.1108 & 0.124 + & deg & 0.02948 & 0.0192 & 0.02172 + & deg / sec & 0.008596 & 0.006704 & 0.006841 + & m / sec & 0.406 & 0.278 & 0.3097 + & & 0.1648 & 0.1108 & 0.124 + & m / sec & 0.0299 & 0.01922 &0.02173 + & deg & 0.2117 & 0.1355 & 0.154 + the solution results presented in table [ table : ac1 ] shows that * the respective minimum square root values of the objective functions satisfy ; * the -anisotropic norm of the closed - loop system with the anisotropic -optimal static gain satisfies the controller is actually suboptimal ; * the and norms of the closed - loop systems with the respective -optimal gains satisfy , the and controllers are actually suboptimal too .the simulation results presented in table [ table : ac1 ] and figures [ fig : ac1 out contr][fig : ac1 responses ] allow to conclude that * the anisotropic -optimal output - feedback static gain leads to the least maximal absolute deviations of the forward speed , pitch angle , pitch angle rate , and vertical speed , at that the least maximal absolute deviation of the height error is achieved with the -optimal static gain ; * the worst maximal absolute values of the controlled output are demonstrated by -optimal static gain ; * the anisotropic -optimal static gain leads to the least maximum absolute amplitudes of the control signals ., forward speed , vertical speed ( left plots ) , pitch angle , pitch angle rate ( right plots),width=642 ]in this paper , we have proposed a solution to the anisotropic suboptimal and -optimal controller synthesis problems by convex optimization technique .the anisotropic suboptimal controller design is a natural extension of the optimal approach developed in . instead of minimizing the anisotropic norm of the closed - loop system , the suboptimal controller is only required to keep it below a given threshold value .the general fixed - order synthesis procedure employs solving an inequality on the determinant of a positive definite matrix and two linear matrix inequalities in reciprocal matrices which make the general optimization problem nonconvex . by applying the known standard convexification procedures it have been shown that the resulting optimization problem can be made convex for the full - information state - feedback , output - feedback full - order controllers , and static output - feedback controller for some specific classes of plants defined by certain structural properties . in the convex cases ,the anisotropic -optimal controllers are obtained by minimizing the squared norm threshold value subject to convex constraints . 
in comparison with the solution to the anisotropic optimal controller synthesis problem derived in which results in a unique full - order estimator - based controller defined by a complex system of cross - coupled nonlinear matrix algebraic equations , the proposed optimization - based approach is novel and does not require developing specific homotopy - like computational algorithms .k.zhou , k.glover , b.a.bodenheimer , and j.c.doyle .mixed and performance objectives i : robust performance analysis ,ii : optimal control ._ ieee trans . ac _ , 39:0 15641574 , 15751587 , 1994 .h.a.hindi , b.hassibi , and s.p.boyd .multiobjective -optimal control via finite dimensional -parametrization and linear matrix inequalities .american control conf ._ , pages 32443248 , 1998 .i.g.vladimirov , a.p.kurdjukov , and a.v.semyonov .state - space solution to anisotropy - based stochastic -optimization problem .13th ifac world congress _ , san francisco , usa , pages 427432 , 1996 .p.diamond , a.p.kurdjukov , a.v.semyonov , and i.g.vladimirov .homotopy methods and anisotropy - based stochastic optimization of control systems ._ report 97 - 14 of the university of queensland _ , australia , pages 122 , 1997 .d.v.balandin and m.m.kogan .synthesis of controllers on the basis of a solution of linear matrix inequalities and a search algorithm for reciprocal matrices ._ automat . & remote contr ._ , 66:0 7491 , 2005 .b.t.polyak and e.n.gryazina .markov chain monte carlo method exploiting barrier functions with applications to control and optimization .ieee multi - conf . on systems and control _ ,pages 15531557 , 2010 ._ comp _ : constraint matrix - optimization problem library a collection of test examples for nonlinear semidefinite programs , control system design and related problems ._ tech . rep . of the university of trier _, germany , 2004 , http://www.complib.de .a.p.kurdyukov , b.v.pavlov , v.n.timin , and i.g.vladimirov .longitudinal anisotropy - based flight control in a wind shear .16th ifac symposium on automatic control in aerospace _, saint - petersburg , russia , 2004 .y.s.hung and a.g.j.macfarlane . _ multivariable feedback : a quasi - classical approach _ , volume 40 of _ lecture notes in control and information sciences_. springer - verlag , berlin , heidelberg , new york , 1982 .
this paper considers a disturbance attenuation problem for a linear discrete time invariant system under random disturbances with imprecisely known distributions . the statistical uncertainty is measured in terms of the mean anisotropy functional . the disturbance attenuation capabilities of the system are quantified by the anisotropic norm which is a stochastic counterpart of the norm . the designed anisotropic suboptimal controller generally is a dynamic fixed - order output - feedback compensator which is required to stabilize the closed - loop system and keep its anisotropic norm below a prescribed threshold value . rather than resulting in a unique controller , the suboptimal design procedure yields a family of controllers , thus providing freedom to impose some additional performance specifications on the closed - loop system . the general fixed - order synthesis procedure employs solving a convex inequality on the determinant of a positive definite matrix and two linear matrix inequalities in reciprocal matrices which make the general optimization problem nonconvex . by applying the known standard convexification procedures it is shown that the resulting optimization problem is convex for the full - information state - feedback , output - feedback full - order controllers , and static output - feedback controller for some specific classes of plants defined by certain structural properties . in the convex cases , the anisotropic -optimal controllers are obtained by minimizing the squared norm threshold value subject to convex constraints . in a sense , the anisotropic controller seems to offer a promising and flexible trade - off between and controllers which are its limiting cases . in comparison with the state - space solution to the anisotropic optimal controller synthesis problem presented before which results in a unique full - order estimator - based controller defined by a complex system of cross - coupled nonlinear matrix algebraic equations , the proposed optimization - based approach is novel and does not require developing specific homotopy - like computational algorithms . _ keywords : _ discrete time , linear systems , random disturbance , stochastic uncertainty , norm , anisotropy , state feedback , full - order , fixed - order controller , static output feedback , convex optimization , reciprocal matrices
The notion of hitting time in classical Markov chains plays an important role in computer science. The hitting time is used in Monte Carlo algorithms, and in randomized algorithms in general, as the running time to find a solution. Expressions for the classical hitting time were calculated analytically for many graphs. It is not straightforward to generalize the classical definition of hitting time to the quantum realm. Kempe has provided two definitions and proved that a quantum walker hits the opposite corner of a -hypercube in time . Krovi and Brun have provided a definition of average hitting time that requires a partial measurement of the position of the walker at each step. Kempf and Portugal have discussed the relation between hitting times and the walker's group velocity. Inspired by Ambainis' algorithm for solving the element distinctness problem, Szegedy was able to abstract out the mathematical structure of that algorithm and to provide a definition of quantum hitting time which is a natural generalization of the classical definition of hitting time. Years of effort show that the establishment of that definition is far from trivial. Recently, Magniez et al. have extended Szegedy's work to non-symmetric ergodic Markov chains and have improved the probability of finding a marked state using Tulsi's method. In this work we calculate analytically Szegedy's hitting time and the probability of finding a set of marked vertices on the complete graph. This calculation clarifies many points of Szegedy's definition, such as the analytical behavior of the time average of the quantity , where is the initial condition and is the evolution operator after steps. We show why the calculation of the hitting time is easier than the calculation of the success probability. The eigenspace associated with the eigenvalue 1 of the evolution operator plays no role in the calculation of the hitting time, but must be taken into account in the calculation of the success probability. The paper is organized as follows. In Sec. [ht_sec_1] we review the basic operators of a bipartite graph that are needed in the definition of the evolution operator. In Sec. [ht_sec_u] we review Szegedy's definition of the quantum walk's evolution operator and the method to obtain part of its spectral decomposition. In Sec. [ht_sec_ht] we review Szegedy's definition of hitting time. In Sec. [ht_sec_cg] we calculate the hitting time and the probability of finding a marked vertex on the complete graph. In order to define the quantum hitting time in a graph, Szegedy has proposed a quantum walk driven by reflection operators in an associated bipartite graph obtained from the original one by a process of duplication, as explained in Sec. [ht_sec_ht]. Consider a bipartite graph between the sets of vertices and of the same cardinality. Denote by and generic vertices in sets and . The stochastic matrices and associated with this graph are defined such that is the inverse of the outdegree of the vertex if there is a directed edge from to , and zero otherwise. Analogously, is either the inverse of the outdegree of the vertex or zero. The variables and satisfy . In order to define a quantum walk in the bipartite graph, we associate with the graph a Hilbert space , where . The computational basis of the first component is and that of the second is . The computational basis of is .
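As a concrete illustration of these definitions, the stochastic matrices of the duplicated bipartite graph can be generated directly from the adjacency matrix of the original undirected graph. The short sketch below is illustrative Python with a small example graph of our choosing.

```python
# Hedged sketch: right-stochastic matrices of the duplicated bipartite graph, obtained
# from the adjacency matrix of an undirected graph (entry = 1/outdegree whenever a
# directed edge is present, 0 otherwise).  The example graph is ours, not the paper's.
import numpy as np

def duplicated_stochastic_matrices(adj):
    deg = adj.sum(axis=1)
    P = adj / deg[:, None]       # X -> Y transition probabilities
    Q = P.copy()                 # Y -> X: identical, since Y duplicates X
    return P, Q

# 4-cycle as a small example
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
P, Q = duplicated_stochastic_matrices(adj)
print(np.allclose(P.sum(axis=1), 1.0), np.allclose(Q.sum(axis=1), 1.0))   # True True
```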
in the quantum case , instead of using the stochastic matrices and of the classical random walk , we define the operators and as follows where and are matrices .( [ ht_a ] ) and ( [ ht_b ] ) tell us that the columns of are the vectors and the columns of are . vectors and obey therefore these equations imply that and preserve the norm of vectors , so if is a unit vector of , then is a unit vector of . the same for . of course we will investigate the product in the reverse order .using eqs .( [ ht_a ] ) and ( [ ht_b ] ) we obtain using eqs .( [ ht_ata ] ) and ( [ ht_bta ] ) we have and .so let us define the projectors eqs .( [ ht_aat ] ) and ( [ ht_bbt ] ) show that project a generic vector of to the subspace spanned by and to the subspace spanned by .we can now define the reflection operators associated with each of these projectors reflects a generic vector in around and around .now it is time to establish a connection between the subspaces and .the best choice is to analyze the angles between the set of vectors with .let us define the matrix of inner products such that .using eqs .( [ ht_alpha_x ] ) and ( [ ht_beta_y ] ) we can express the components of in terms of transition probabilities as , and in matrix form is a square matrix of dimension .it provides essential information on the quantum walk that will be defined on the bipartite graph . is not a normal operator in general .its singular values and vectors play an important role in the dynamics of the quantum walk .the theorem of singular value decomposition states that there are unitary matrices and such that where is a diagonal matrix of dimension with nonnegative real components .the diagonal elements are called singular values and univocally determined .matrices and can be determined through the application of the spectral theorem to , which is a semidefined positive matrix .let and be the right and left singular vectors respectively and the corresponding singular values , then multiplying eq .( [ ht_c_nu_i ] ) by and eq .( [ ht_ct_mu_i ] ) by we obtain the action of operators and preserves the norm of vectors , then vectors and are unitary .projectors either decrease the norm of vectors or maintain invariant . using eq .( [ ht_pi_a_nu_i ] ) we conclude that the singular values satisfy the inequalities .so , we can define such that , where .the geometric interpretation of is the angle between the vectors and , that can be confirmed by using eqs .( [ ht_c ] ) and ( [ ht_c_nu_i ] ) .let us consider a bipartite graph such that , and .szegedy has defined the one - step evolution operator in the hilbert space of this graph as where and are given by eqs .( [ ht_ra ] ) and ( [ ht_rb ] ) .( [ ht_pi_a_nu_i ] ) and ( [ ht_pi_b_mu_i ] ) show that the projectors and have a symmetric action over vectors and for each .it is expected that the action of the reflection operators and on a linear combination of and results in a vector in the plane spanned by and that is , this plane is invariant under the action of .so let us try the following _ ansatz _ for the eigenvectors of the goal is to find , and that obey eq .( [ ht_u a_b ] ) . using definition ( [ ht_u_ev ] ) for , we eventually obtain that the vectors are normalized eigenvectors with eigenvalues when .we have obtained at most eigenvectors of so far , because has dimension .in fact , the exact number depends on the multiplicity of the singular value . for , and do not span a two dimensional subspace , because they are colinearlet us consider vectors . 
from eqs .( [ ht_pi_a_nu_i ] ) and ( [ ht_pi_b_mu_i ] ) we verify that they are invariant under the action of and .then , they are invariant under the action of and . hence , are eigenvectors of with eigenvalue 1 .if the multiplicity of the singular value is , then we have obtained eigenvectors of so far .the remaining eigenvectors can not be found by using the singular values and vectors of matrix ; on the other hand , it is straightforward to show that the missing ones have eigenvalue 1 .szegedy has defined a notion of quantum hitting time that is a natural generalization of the concept of classical hitting time .let be a connected , undirected and non - bipartite graph , where is the set of vertices and is the set of edges .define a bipartite graph associated with through a process of duplication . and are the sets of vertices of the same cardinality of the bipartite graph . each edge in of the original graph is converted into two edges in the bipartite graph and .the quantum walk on the bipartite graph is defined by the evolution operator given by eq .( [ ht_u_ev ] ) . in the bipartite graph , an application of corresponds to two quantum steps of the walk , from to and from to .we have to take the partial trace over the space associated with to get the state on the set . in the _ classical case _ , the hitting time is the expected number of steps in a random walk that starts at and ends upon first reaching .this definition can be generalized to what is called the average hitting time . instead of departing from vertex , the initial vertex can be sampled according to a probability distribution , such that .also , instead of reaching vertex , one may consider the case of reaching a subset of .so , the hitting time is the expected number of steps in a random walk that starts at a vertex that is sampled according to a probability distribution and ends upon first reaching any vertex of .szegedy s definition is the quantum analogue of that last version .it is at least quadratically smaller than the classical hitting time . to define the _ quantum hitting time _ , szegedy has used a modified evolution operator associated with a modified directed bipartite graph obtained in the following form .each edge of an undirected graph can be viewed as two opposite directed edges , since directed edges are fused to form the non - directed edge .the modified directed graph is the bipartite graph obtained by removing all directed edges leaving the vertices of the set , but keeping the directed edges that are arriving .this means that if the walker reaches a marked vertex , it will be stuck in that vertex in the following steps . for the purpose of calculating the classical hitting time , the original and the modified bipartite graphs are equivalent . however , since the stochastic matrix has been modified , in the quantum case the evolution operator is different from .if the walk starts uniformly distributed , the modulus of the probability amplitudes at the marked vertices will increase at some specific moments .the modified stochastic matrix is given by the initial condition of the quantum walk is note that is an eigenvector of with eigenvalue , when the probability distribution is symmetric .however , is not an eigenvector of in general . before describing the evolution of the quantum walk driven by the modified operator , let us define the _ quantum hitting time _ .
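the modified walk can be encoded by making the marked vertices absorbing . before stating the formal definition , the sketch below shows one way to build the modified stochastic matrix for a complete graph ; the row - stochastic convention and the helper name are ours .

```python
import numpy as np

def modified_stochastic_matrix(P, marked):
    """Remove the edges leaving marked vertices but keep the incoming ones:
    the walker is stuck once it reaches a marked vertex (absorbing rows)."""
    Pp = P.copy()
    for v in marked:
        Pp[v, :] = 0.0
        Pp[v, v] = 1.0
    return Pp

n, m = 8, 2                                          # K_8 with the last 2 vertices marked
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)
Pp = modified_stochastic_matrix(P, range(n - m, n))
assert np.allclose(Pp.sum(axis=1), 1.0)              # still row-stochastic
```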
*definition * the _ quantum hitting time _ of a quantum walk with evolution operator given by eq .( [ ht_u_ev ] ) and initial condition is defined as the least number of steps such that where is the number of marked vertices , is the number of vertices of the original graph and is where and is the evolution operator after steps using the modified stochastic matrix .only the singular values of that are different from 1 are used in the calculation of the hitting time . to make this point clear , let us write the initial condition in the eigenbasis of the evolution operator where is the multiplicity of the singular value 1 .the coefficients are given by and obey the constraint . applying to we obtain . when we take the difference , the terms in the eigenspace associated with eigenvalue 1 vanish .vectors are self - conjugate and is real , so eq . ( [ ht_c_j_pm ] ) implies that .let us call and by . using eqs .( [ ht_psi_0_generic ] ) and ( [ ht_psi_t_generic ] ) , we obtain where is the -th chebyshev polynomial of the first kind . using eq .( [ ht_diff ] ) , we obtain where is the -th chebyshev polynomial of the second kind . the quantum hitting time is given by . let us label the vertices of the complete graph from 1 to and suppose that the last vertices are the marked ones .the stochastic matrix of the complete graph is where is the normalized uniform vector with components and stands for the -th vector of the computational basis .let be the matrix obtained from by removing the rows and columns corresponding to the marked elements ; then the characteristic polynomial of is . the eigenvector with eigenvalue is and the eigenvectors with eigenvalue are for .that set of eigenvectors forms an orthonormal basis .the modified stochastic matrix is . all operators of sec .[ ht_sec_1 ] must be calculated using the modified matrix . to find the spectral decomposition of , the key operator is given by eq .( [ ht_c ] ) .the components of are .we have to replace and by . using eq .( [ ht_pprime_cg ] ) we obtain . is hermitian , so the nontrivial singular values are obtained by taking the modulus of the eigenvalues of .the right singular vectors are the eigenvectors of .if the eigenvalue of is negative , the left singular vector is the negative of the eigenvector of .these vectors must be padded with zeros to have the correct dimension , compatible with the dimension of . summarizing , and , are the right and left singular vectors , respectively , with singular value , and is both the right and left singular vector with singular value .finally , the submatrix in eq .( [ ht_c_cg ] ) adds to the list the singular value 1 with multiplicity with the associated singular vectors , where .the eigenvectors and eigenvalues of , which can be obtained from the singular values and vectors of , are given in table [ ht_table_eigen_cg ] . eigenvectors are still missing , all of them associated with eigenvalue 1 . ( caption of table [ ht_table_eigen_cg ] : eigenvalues and normalized eigenvectors of obtained from the singular values and vectors of ; the vectors and are given by eqs .( [ ht_nu_n - m_cg ] ) and ( [ ht_nu_i_cg ] ) , respectively . ) the initial condition in the complete graph reduces to . using the eigenvectors of table [ ht_table_eigen_cg ] , the expression for and the definition ( [ ht_c_j_pm ] ) , we obtain where is given by . the uniform singular vector given by eq .( [ ht_nu_n - m_cg ] ) is the only one used in the calculation of the hitting time .
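as a numerical companion to the definition , the sketch below runs the modified evolution on a small complete graph , accumulates the time - averaged squared distance between the evolved state and the initial condition , and reports the first step at which it crosses a threshold . the threshold used here ( one minus the ratio of marked to total vertices ) is our reading of the stripped constant in the definition and should be treated as illustrative .

```python
import numpy as np

def szegedy_ab(P):
    N = P.shape[0]
    A = np.zeros((N * N, N)); B = np.zeros((N * N, N))
    for x in range(N):
        for y in range(N):
            A[x * N + y, x] = np.sqrt(P[x, y])
            B[x * N + y, y] = np.sqrt(P[y, x])
    return A, B

def evolution(P):
    A, B = szegedy_ab(P)
    I = np.eye(P.shape[0] ** 2)
    return (2 * B @ B.T - I) @ (2 * A @ A.T - I)

n, m = 16, 2                                   # complete graph K_16, last 2 vertices marked
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)
Pp = P.copy()
Pp[n - m:, :] = 0.0
for v in range(n - m, n):
    Pp[v, v] = 1.0                             # marked vertices become absorbing

A, _ = szegedy_ab(P)                           # the initial state uses the unmodified walk
psi0 = A @ (np.ones(n) / np.sqrt(n))
Uprime = evolution(Pp)                         # the evolution uses the modified matrix

threshold = 1.0 - m / n                        # assumed form of the constant in the definition
psi, acc, hitting_time = psi0.copy(), 0.0, None
for T in range(2000):                          # safety cap for this illustration
    acc += np.linalg.norm(psi - psi0) ** 2     # accumulate ||psi_t - psi_0||^2, t = 0..T
    if acc / (T + 1) >= threshold:
        hitting_time = T
        break
    psi = Uprime @ psi
print("numerical hitting time:", hitting_time)
```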
the quantity defined in eq .( [ ht_d_t ] ) reduces to . the graph in fig .[ fig : ht_cg ] shows the behavior of the function . grows rapidly along the dashed line , then oscillates around the limiting value given by ( dotted line ) . ( caption of fig .[ fig : ht_cg ] : ( solid line ) , ( dashed line ) and ( dotted line ) for and ; the hitting time can be seen in the graph at time such that , which is about in this case . ) for , the hitting time is obtained by employing the method of series inversion on the equation .the first terms are where is the first spherical bessel function or the unnormalized sinc function .the value of is around 1.9 . the hitting time is used in search algorithms as the running time .it is important to calculate the probability of success at the stopping time .the calculation of the probability of finding a marked element is more elaborate than the calculation of the hitting time , because we have to find explicitly , and therefore the eigenvectors with eigenvalue 1 must be considered . for the complete graph , the eigenvectors that are not orthogonal to the initial condition are and some of the eigenvectors associated with the eigenvalue 1 . using eqs .( [ ht_a ] ) to ( [ ht_beta_y ] ) and ( [ ht_pprime_cg ] ) we can obtain . substituting and , given by eq .( [ ht_cpm ] ) , into eq .( [ ht_psi_t_generic ] ) , we obtain . the component associated with the eigenvalue 1 can be determined by trial and error directly from the structure of matrix .the result is . the probability of finding a marked element is calculated by using the projector onto the vector space spanned by the marked elements , that is . the probability is given by . using eqs .( [ ht_psi_t_2_cg ] ) and ( [ ht_eigen_1 ] ) , we obtain . the graph of is depicted in fig .[ fig : ht_prob_cg ] when and . ( caption of fig .[ fig : ht_prob_cg ] : the value at is and the function has period . ) the first point of maximum occurs at time , the asymptotic expansion of which is for . substituting that result into the expression of the probability , we obtain for any values of and , the probability of finding a marked vertex is greater than if the measurement is carried out at time .the instant is smaller than the hitting time given by eq .( [ ht_h_cg ] ) , since while .the value of the success probability of an algorithm that uses the hitting time as the running time will be less than the probability at time . evaluating at time and taking the asymptotic expansion , we obtain . the first term is around and is independent of or .this shows that the hitting time is a good parameter for the stopping point of searching algorithms on the complete graph .we acknowledge fruitful discussions with f. marquezino and d. santiago . r.a.m.s . acknowledges a capes fellowship and r.p . acknowledges cnpq s grant n. 306024/2008 .f. magniez , a. nayak , p.c . richter and m. santha , on the hitting times of quantum versus random walks , soda 09 : proceedings of the nineteenth annual acm - siam symposium on discrete algorithms , 8695 , 2009 .
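the success probability curve discussed above can also be reproduced numerically . in the sketch below the walker s state is projected onto the basis states whose first register lies in the marked set ; which register carries the marked label is an assumption of this illustration , as are the instance size and the time window scanned for the first maximum .

```python
import numpy as np

n, m = 16, 2
P = (np.ones((n, n)) - np.eye(n)) / (n - 1)    # complete graph K_n
Pp = P.copy()
Pp[n - m:, :] = 0.0
for v in range(n - m, n):
    Pp[v, v] = 1.0                             # absorbing marked vertices

def szegedy_ab(P):
    N = P.shape[0]
    A = np.zeros((N * N, N)); B = np.zeros((N * N, N))
    for x in range(N):
        for y in range(N):
            A[x * N + y, x] = np.sqrt(P[x, y])
            B[x * N + y, y] = np.sqrt(P[y, x])
    return A, B

A0, _ = szegedy_ab(P)
Am, Bm = szegedy_ab(Pp)
I = np.eye(n * n)
U = (2 * Bm @ Bm.T - I) @ (2 * Am @ Am.T - I)
psi0 = A0 @ (np.ones(n) / np.sqrt(n))

# projector onto basis states |x>|y> whose first register x is marked
marked = np.zeros(n * n, dtype=bool)
for x in range(n - m, n):
    marked[x * n:(x + 1) * n] = True

psi, probs = psi0.copy(), []
for t in range(60):
    probs.append(float(np.sum(np.abs(psi[marked]) ** 2)))
    psi = U @ psi
t_star = int(np.argmax(probs))
print("first maximum near t =", t_star, "with success probability", probs[t_star])
```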
quantum walks play an important role in the area of quantum algorithms . many interesting problems can be reduced to searching marked states in a quantum markov chain . in this context , the notion of quantum hitting time is very important , because it quantifies the running time of the algorithms . markov chain - based algorithms are probabilistic , therefore the calculation of the success probability is also required in the analysis of the computational complexity . using szegedy s definition of quantum hitting time , which is a natural extension of the definition of the classical hitting time , we present analytical expressions for the hitting time and success probability of the quantum walk on the complete graph .
[ [ quantum - correlations ] ] quantum correlations : + + + + + + + + + + + + + + + + + + + + + consider the following game .alice receives a bit , and bob receives a bit , both chosen uniformly at random .their task is to output one bit each in such a way that the xor of the bits they output is equal to . in other words, they should output the same bit , except when both input bits are .notice that no communication is allowed between them . a moments reflection shows that their best strategy is to always output , say , .this allows them to win on three of the four possible inputs .it is also not difficult to show that equipping them with a shared source of randomness can not help : the average success probability over the four possible questions will always be at most ( simply because one can always fix the shared randomness so as to maximize the average success probability ) .this bound of , known as the clauser - horne - shimony - holt ( chsh ) inequality , is the simplest example of a _ bell inequality _ .a remarkable and well - known fact is that if alice and bob are allowed to share _ quantum entanglement _ then they can win the game with probability , no matter which questions are asked . indeed, sharing entanglement allows remote parties to realize correlations that are impossible to obtain classically , without imparting them with the ability to communicate instantaneously .this distinction is one of the most peculiar aspects of quantum theory and required many years to be properly understood . in this paperwe address the topic of quantum correlations from a communication complexity perspective .namely , we are asking how many bits of communication are needed to explain the phenomenon of quantum correlations .more precisely , we consider the following communication complexity problem , corresponding to the quantum mechanical scenario of a shared bipartite quantum state and local two - outcome measurements and , with the goal being to simulate the correlation ( i.e. , the parity ) of the measurement results . [ cols= " < , < " , ] the protocol is given as protocol [ prot : maximal - k - bit ] . roughly speaking ,alice and bob start by projecting their vectors onto a random -dimensional subspace .alice then sends to bob the orthant inside the -dimensional space in which her vector lies , and bob uses the half - space determined by this orthant to determine his output . to be more precise , instead ofa random orthogonal projection we use here a random gaussian matrix .this leads to a much cleaner analysis , and moreover , in the limit of large , the two distributions are essentially the same .we also use the same trick used in the majority protocol to reduce the communication from the naive bits to bits .we now analyze the correlation function given by this protocol . for any unit vectors , the output of the protocol satisfies & = { { \mathrm e}}[\operatorname{sign}[\alpha_0 \cdot { { \langle { g\vec b , ( 1,c_1,\ldots , c_{k } ) } \rangle } } ] ] \\ & = { { \mathrm e}}[\operatorname{sign}[{{\langle { g\vec b , ( \alpha_0,\alpha_1,\ldots,\alpha_{k } ) } \rangle } } ] ] , \end{aligned}\ ] ] where expectations are taken over the choice of .the expression inside the last expectation is or depending on whether is in the half - space defined by the center of the orthant containing . 
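before carrying the analysis further , here is a simulation sketch of the protocol itself , written from our reading of protocol [ prot : maximal - k - bit ] : a random gaussian matrix projects both unit vectors , alice outputs the sign of her first projected coordinate and communicates the orthant of her projected vector up to a global sign using k bits , and bob outputs the sign of the corresponding signed combination of his projected coordinates . the encoding of the orthant by products of signs , and the function names , are implementation choices of this sketch .

```python
import numpy as np

rng = np.random.default_rng(5)

def orthant_protocol_correlation(a, b, k, trials=200_000):
    """Monte Carlo estimate of E[alpha * beta] for a k-bit orthant protocol."""
    G = rng.standard_normal((trials, k + 1, a.size))
    alpha = G @ a                                  # Alice's projected vector
    beta = G @ b                                   # Bob's projected vector
    out_a = np.sign(alpha[:, 0])                   # Alice outputs the sign of coordinate 0
    # Alice sends the orthant of alpha up to a global sign: k bits c_i = sign(alpha_i * alpha_0)
    c = np.sign(alpha[:, 1:] * alpha[:, :1])
    out_b = np.sign(beta[:, 0] + np.sum(c * beta[:, 1:], axis=1))
    return float(np.mean(out_a * out_b))

theta = np.pi / 3
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(theta), np.sin(theta), 0.0])
print("k = 0:", orthant_protocol_correlation(a, b, 0), " vs 1 - 2*theta/pi =", 1 - 2 * theta / np.pi)
print("k = 2:", orthant_protocol_correlation(a, b, 2))   # the two-bit protocol of interest
```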
by symmetryit is enough to consider the positive orthant , and hence the above is equal to - 1.\end{aligned}\ ] ] we now claim that the joint distribution of and is a -dimensional gaussian variable with mean and covariance matrix where each is a identity matrix and denotes the inner product . to see this , notice that by the rotational invariance of the gaussian distribution , we can assume that and claim now follows by using the fact that the first two columns of are two independent -dimensional standard gaussians , i.e. , a gaussian with mean 0 and covariance .our next observation is that the probability in eq . depends only on the sum of coordinates of .we therefore define the real random variable to be .the joint distribution of and is given by a -dimensional gaussian with mean and covariance matrix where is the linear transformation taking to .we therefore see that the probability in eq .is exactly the probability that a vector sampled from a gaussian distribution with mean and covariance matrix is in the positive orthant . by the cholesky decomposition , we can write for the matrix it is easy to see that since , applying the linear transformation to a gaussian random variable with mean and covariance matrix transforms it into a standard gaussian variable . under this transformation , the positive orthant , which isthe cone spanned by the standard basis vectors , becomes the cone spanned by the rows of .we conclude that the probability in eq .is exactly the probability that a vector sampled from a standard gaussian distribution is in the cone spanned by the rows of . by the spherical symmetry of the standard gaussian distribution, we can equivalently ask for the relative area of the sphere that is contained inside the cone spanned by the rows of .[ [ the - case - k0 ] ] the case : + + + + + + + + + + + + + + + + + + + + + + + + + + + we can now compute the probability in eq . for each of .we start with the simplest case of . here, we are interested in the relative length of the circle contained in the cone spanned by the rows of .obviously , this is given by the angle between the two vectors divided by , which is .hence by eq .the correlation function in this case is simply we could also obtain this result by noting that the protocol is essentially identical to the one from section [ ssec : localprim ] . [ [ the - case - k1 ] ] the case : + + + + + + + + + + + + + + + + + + + + + + + + + + + we now analyze the more interesting case . here , we are interested in the relative area of the sphere contained in the cone spanned by the three rows of . the intersection of with a cone spanned by three vectors is known as a _spherical triangle _ , see figure [ fig : sphericaltriangle ] .its area , as given by girard s formula ( see , e.g. , ( * ? ? ?* page 278 ) ) , is where are the three angles of the triangle ( as measured on the surface ) . in more detail , if are the vectors spanning the cone , then is the angle between the two vectors obtained by projecting and on the plane orthogonal to ( and similarly for and ) . in our case , the coneis spanned by , , and .clearly and a short calculation shows that . plugging this into girard s formula , and using the fact that the area of the sphere is , we obtain that the relative area of contained in the cone spanned by the rows of is . hence by eq .the correlation function in this case is [ [ the - case - k2 ] ] the case : + + + + + + + + + + + + + + + + + + + + + + + + + + + we finally arrive at the most important case . 
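before turning to that case , note that the probability appearing above can be estimated for any k by monte carlo : it is the chance that a standard gaussian vector falls in the cone spanned by the rows of the relevant matrix . the sketch below ( with an arbitrary example matrix , not the matrix of the protocol ) checks the monte carlo estimate against girard s formula in the spherical - triangle case ; the same sampler applies unchanged to the spherical tetrahedra of the next case , where no closed formula is available .

```python
import numpy as np

rng = np.random.default_rng(6)

def cone_probability(V, samples=500_000):
    """P[a standard Gaussian in R^d lies in the cone spanned by the rows of V],
    i.e. the relative volume of the corresponding spherical simplex."""
    X = rng.standard_normal((samples, V.shape[0]))
    lam = np.linalg.solve(V.T, X.T)          # x = V^T lam, so membership <=> all lam >= 0
    return float(np.mean(np.all(lam >= 0, axis=0)))

def vertex_angle(u, v, w):
    """Angle at vertex u of the spherical triangle with vertices u, v, w on the sphere."""
    pv = v - (v @ u) * u
    pw = w - (w @ u) * u
    return np.arccos(pv @ pw / (np.linalg.norm(pv) * np.linalg.norm(pw)))

V = np.array([[1.0, 0.0, 0.0],
              [0.3, 1.0, 0.0],
              [0.2, 0.4, 1.0]])              # an arbitrary example cone in R^3 (spherical triangle)
mc = cone_probability(V)

u, v, w = (row / np.linalg.norm(row) for row in V)
girard = (vertex_angle(u, v, w) + vertex_angle(v, w, u) + vertex_angle(w, u, v) - np.pi) / (4 * np.pi)
print("Monte Carlo:", mc, "  Girard:", girard)
# the same cone_probability routine works verbatim for 4x4 matrices (the spherical tetrahedron case)
```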
herewe are considering _ spherical tetrahedra _, defined as the intersection of with a cone spanned by four vectors . unlike the case of spherical triangles , no closed formula is known for the volume of a spherical tetrahedron ( see for further discussion and references ) .fortunately , there is a simple formula for the _ derivative _ of the volume , as we describe in the sequel .we start with some preliminaries on spherical tetrahedra , closely following appendix a in .a spherical tetrahedron is defined by four unit vectors forming its vertices . for let be the angle between and .equivalently , is the spherical length of the edge .another set of six parameters associated with a spherical tetrahedron are its _ dihedral angles _ , , describing the angle between the two faces meeting at the edge .they are defined as and similarly for the other five dihedral angles , where the _ high - dimensional inner product _ is defined as and the _ high - dimensional norm _ is given by finally , in order to compute the volume of a spherical tetrahedron we use a formula due to schlfli , which says that for every , where is the volume of a spherical tetrahedron with the given edge lengths .our goal is to compute the volume of the spherical tetrahedron whose vertices are the rows of normalized to be of norm , from this it easily follows that moreover , a straightforward calculation reveals that and that the derivative of the latter term as a function of is . by using schlfli s formula and integrating along , we obtain that the volume of our spherical tetrahedron is where we used that for this volume is . since the total area of is , we obtain using eq . that the correlation function in this case is this section we describe how to take any protocol whose correlation function is ` strong enough ' , and use it to solve problem [ prob : simulatingclassical ] .this section as well as the next one rely on some basic facts from the theory of real analytic functions which can be found in , e.g. , . as we said earlier, the idea is to carefully choose a mapping such that when alice and bob apply the protocol on and , the resulting correlation function will be correct .alice and bob map their vectors to and and run the original protocol on these vectors .fix some arbitrary correlation function \to [ -1,1] ] and all its coefficients are nonnegative .this is shown in the following lemma .[ lemma:1 ] if satisfies the above conditions , then has a power series expansion that converges on ] is well - defined and is odd .moreover , by the real analytic inverse function theorem ( see ( * ? ? ?* theorem 1.5.3 ) ) , is analytic on and hence has a series expansion about , as in eq . .in order to analyze this series , we use a known formula for the coefficients of an inverse function ( see , e.g. , ( * ? ? ? * eq . ( 4.5.12 ) ) ) : where the sum runs over nonnegative integers satisfying .since in our case every term in the sum is nonnegative , it follows that for all , as required .it remains to show that the series converges on ] be a function with a power series expansion that converges on ] , and hence for all and .our earlier analysis shows that for the orthant protocol ( protocol [ prot : maximal - k - bit ] ) with , as goes to , approaches .we conjecture that for sufficiently large and sufficiently small , all one - bit protocols satisfy ( and therefore do not solve problem [ prob : simulatingclassical ] ) .in fact , we conjecture that the orthant protocol with is optimal for , i.e. 
, we conjecture that for all one - bit protocols , . one approach to prove these conjectures is the following . first , since we are only interested in minimizing the value of and not in obtaining the correct correlations , we can restrict attention to deterministic protocols .any deterministic protocol partitions alice s sphere into four sets , depending on which bit she outputs and which bit she sends .once we have specified alice s strategy , we can assume bob acts optimally to minimize .the contribution to comes from regions near which meets , since it is in these areas that bob can not tell whether alice outputs or , and hence can not correlate his answer perfectly with alice s . therefore , in order to prove the conjectures , one should argue that any protocol must have local regions where meets and that the way these regions meet in the orthant protocol is optimal .formalizing this notion would seem to require topological arguments , perhaps an extension of the borsuk - ulam theorem . in another attempt to shed light on the problem, we show now how to extend a lower bound of barrett , kent , and pironio , who improved on an earlier result of pironio .barrett et al. showed that if we examine the transcript of communication between alice and bob of any protocol for problem [ prob : simulatingclassical ] , then with probability ( over the shared randomness used by the protocol ) the transcript must show some communication .in other words , it can not be the case that alice and bob sometimes output results using shared randomness alone .but this leaves open the possibility that , say , alice almost always sends the same message to bob . here, we show a lower bound on the ( min-)entropy of the communication transcript .more specifically , we show an upper bound on the maximum probability with which a transcript can appear in a protocol for problem [ prob : simulatingclassical ]. there exists a distribution on inputs such that in any protocol that solves problem [ prob : simulatingclassical ] no transcript can appear with probability greater than when applied to this input distribution .let be a protocol that solves problem [ prob : simulatingclassical ] . as mentioned in the introduction, in particular allows us to solve the following problem with probability : alice and bob receive bits and respectively , and their task is to output one bit each in such a way that the xor of the bits they output is equal to .assume the bits and are chosen uniformly at random , and consider the resulting distribution on transcripts created by .consider the most likely transcript and let denote the probability with which it occurs .we now construct a protocol with no communication as follows .alice checks whether the transcript is consistent with her input . if so , she outputs a bit as in ; if not , she outputs a random bit .bob does the same .note that with probability , behaves identically to .with probability , however , at least one of the parties detects that the transcript is not consistent with his or her input and outputs a random bit . 
by the definition of the problem , in this casethe success probability is .therefore the overall success probability of is at least which must be at most by the chsh inequality .part of this work was done while the authors were visiting the institut henri poincar as part of the program `` quantum information , computation and complexity '' , and we would like to thank the organizers for their efforts .we thank aram harrow for discussions about lower bounds , falk unger for assistance with the proof of lemma [ lemma:1 ] , and peter harremos for discussions about schoenberg s theorem .10 n. alon , k. makarychev , y. makarychev , and a. naor .quadratic forms on graphs ., 163(3):499522 , 2006 .n. alon and a. naor . approximating the cut - norm via grothendieck s inequality . , 35(4):787803 , 2006 .preliminary version in stoc04 .s. arora , e. berger , e. hazan , g. kindler , and s. safra . .in _ proc . of the 46th annual ieee symposium on foundations of computer science ( focs ) _ , pages 206215 , 2005 .d. bacon and b. f. toner . how to simulate quantum correlations . unpublished .j. barrett , a. kent , and s. pironio .maximally nonlocal and monogamous quantum correlations .97:170409 , 2006 . j. s. bell . on the instein - odolsky - osen paradox ., 1:195200 , 1964 .m. berger . .springer - verlag , berlin , 1987 .g. brassard , r. cleve , and a. tapp .cost of exactly simulating quantum entanglement with classical communication ., 83:18741877 , 1999 .n. j. cerf , n. gisin , and s. massar .classical teleportation of a quantum bit . , 84:2521 , 2000 .n. j. cerf , n. gisin , s. massar , and s. popescu .simulating maximal quantum entanglement without communication ., 94(22):220403 , 2005 .m. charikar and a. wirth . .in _ proc . of the 45th annual ieee symposium on foundations of computer science ( focs ) _ , pages 5460 , 2004 .j. f. clauser , m. a. horne , a. shimony , and r. a. holt .proposed experiment to test local hidden - variable theories ., 23:880884 , 1969 .j. degorre , s. laplante , and j. roland . simulating quantum correlations as a distributed sampling problem ., 72(6):062314 , 2005 .j. degorre , s. laplante , and j. roland .classical simulation of traceless binary observables on any bipartite quantum state ., 75(1):012309 , 2007 .a. einstein , p. podolsky , and n. rosen .can quantum - mechanical description of physical reality be considered complete ?, 47:777780 , 1935 .m. x. goemans and d. p. williamson .improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming ., 42(6):11151145 , 1995 .preliminary version in stoc94 .a. grothendieck .rsum de la thorie mtrique des produits tensoriels topologiques ., 8:1 , 1953 .h. karloff and u. zwick .a 7/8-approximation algorithm for max 3sat ? in _ proc . of the 38th annual ieee symposium on foundations of computer science ( focs ) _ , pages 406415 , 1997 .s. g. krantz and h. r. parks . .birkhuser advanced texts : basler lehrbcher .birkhuser boston inc . , boston , ma , second edition , 2002 .i. kremer , n. nisan , and d. ron .on randomized one - round communication complexity . ,8(1):2149 , 1999 .preliminary version in stoc95 .j. l. krivine .constantes de rothendieck et fonctions de type positif sur les sphres ., 31:1630 , 1979 .s. massar , d. bacon , n. j. cerf , and r. cleve .classical simulation of quantum entanglement without local hidden variables ., 63(5):052305 , apr 2001 .t. maudlin .bell s inequality , information transmission , and prism models . in d. hull , m. forbes , and k. 
okruhlik , editors , _ psa 1992 , volume 1 _ , pages 404417 , east lansing , 1992 .philosophy of science association .m. morse and h. feshbach . .mcgraw - hill , new york , 1953 .s. pironio .violations of ell inequalities as lower bounds on the communication cost of nonlocal correlations . , 68(6):062102 , dec 2003 .l. schlfli . . ,2:269300 , 1858 .continued in vol . 3 ( 1860 ) ,54 - 68 and pp .. i. j. schoenberg .positive definite functions on spheres . , 9:96108 , 1942 . y. shi and y. zhu .tensor norms and the classical communication complexity of nonlocal quantum measurement ., 38(3):753766 , 2008 .preliminary version in focs05 .m. steiner .towards quantifying non - local information transfer : finite - bit non - locality . , 270:239244 , 2000. b. f. toner and d. bacon .communication cost of simulating ell correlations ., 91:187904 , 2003 .b. s. tsirelson .quantum analogues of the ell inequalities .he case of two spatially separated domains ., 36:557570 , 1987 .t. vrtesi and e. bene .lower bound on the communication cost of simulating bipartite quantum correlations , 2009 .the equivalence between problem [ prob : simulating ] and problem [ prob : simulatingclassical ] is due to tsirelson .here we sketch only the easy direction of this equivalence : a solution to problem [ prob : simulatingclassical ] implies a solution to problem [ prob : simulating ] .let be a state on , and let and be hermitian matrices whose eigenvalues are in .the goal is for alice and bob to output bits and such that = { \mathsf{tr}}\left ( a \otimes b \cdot \rho \right).\ ] ] let be the entries of the matrix , and similarly let be those of .let and define the -dimensional real vectors then and similarly .moreover , hence , alice and bob can use and as input to problem [ prob : simulatingclassical ] in order to solve problem [ prob : simulating ] .
assume alice and bob share some bipartite -dimensional quantum state . a well - known result in quantum mechanics says that by performing two - outcome measurements , alice and bob can produce correlations that can not be obtained locally , i.e. , with shared randomness alone . we show that by using only two bits of communication , alice and bob can classically simulate any such correlations . all previous protocols for exact simulation required the communication to grow to infinity with the dimension . our protocol and analysis are based on a power series method , resembling krivine s bound on grothendieck s constant , and on the computation of volumes of spherical tetrahedra .
compressed sensing and quantum tomography are two disparate scientific fields .the fast developing field of compressed sensing provides innovative data acquisition techniques and supplies efficient accurate reconstruction methods for recovering sparse signals and images from highly undersampled observations [ see ] .its wide range of applications include signal processing , medical imaging and seismology .the problems to solve in compressed sensing often involve large data sets with complex structures such as data on many variables or features observed over a much smaller number of subjects . as a result ,the developed theory of compressed sensing can shed crucial insights on high - dimensional statistics .matrix completion , a current research focus point in compressed sensing , is to reconstruct a low rank matrix based on under - sampled observations .trace regression is often employed in noisy matrix completion for low rank matrix estimation .recently several methods were proposed to estimate a low rank matrix by minimizing the squared residual sum plus some penalty .the penalties used include nuclear - norm penalty [ cands and plan ( ) , and ] , rank penalty [ and ] , the von neumann entropy penalty [ ] , and the schatten - p quasi - norm penalty [ ] .contemporary scientific studies often rely on understanding and manipulating quantum systems .examples include quantum computation , quantum information and quantum simulation [ and wang ( , ) ] .the studies particularly frontier research in quantum computation and quantum information stimulate great interest in and urgent demand on quantum tomography .a quantum system is described by its state , and the state is often characterized by a complex matrix on some hilbert space .the matrix is called density matrix .a density matrix used to characterize a quantum state usually grows exponentially with the size of the quantum system . for the study of a quantum system ,it is important but very difficult to know its state .if we do not know in advance the state of the quantum system , we may deduce the quantum state by performing measurements on the quantum system .in statistical terminology , we want to estimate the density matrix based on measurements performed on a large number of quantum systems which are identically prepared in the same quantum state . in the quantum literature , quantum state tomography refers to the reconstruction of the quantum state based on measurements obtained from measuring identically prepared quantum systems . in this paper , we investigate statistical relationship between quantum state tomography and noisy matrix completion based on trace regression .trace regression is used to recover an unknown matrix from noisy observations on the trace of the products of the unknown matrix and matrix input variables .its connection with quantum state tomography is through quantum probability on quantum measurements .consider a finite - dimensional quantum system with a density matrix . 
according to the theory of quantum physics ,when we measure the quantum system by performing measurements on observables which are hermitian ( or self - adjoint ) matrices , the measurement outcomes for each observable are real eigenvalues of the observable , and the probability of observing a particular eigenvalue is equal to the trace of the product of the density matrix and the projection matrix onto the eigen - space corresponding to the eigenvalue , with the expected measurement outcome equal to the trace of the product of the density matrix and the observable .taking advantage of the connection has applied matrix completion methods with nuclear norm penalization to quantum state tomography for reconstructing low rank density matrices . as trace regression and quantum state tomography share the common goal of recovering the same matrix parameter, we naturally treat them as two statistical models in the le cam paradigm and study their asymptotic equivalence via le cam s deficiency distance . hereequivalence means that each statistical procedure for one model has a corresponding equal - performance statistical procedure for another model .the equivalence study motivates us to introduce a new fine scale trace regression model .we derive bounds on the deficiency distances between trace regression and quantum state tomography with summarized measurement data and between fine scale trace regression and quantum state tomography with individual measurement data , and then under suitable conditions we establish asymptotic equivalence of trace regression and quantum state tomography for both cases .the established asymptotic equivalence provides a sound statistical foundation for applying matrix completion procedures to quantum state tomography under appropriate circumstances .we further analyze the asymptotic equivalence of trace regression and quantum state tomography for sparse matrices and low rank matrices .the detailed analyses indicate that the asymptotic equivalence does not require sparsity nor low rank on matrix parameters , and depending on the density matrix class as well as the set of observables used for performing measurements , sparsity and low rank may or may not make the asymptotic equivalence easier to achieve . in particular , we show that the pauli matrices as observables are bad for establishing the asymptotic equivalence for sparse matrices and low rank matrices ; and for certain class of sparse or low rank density matrices , we can obtain the asymptotic equivalence of quantum state tomography and trace regression in the ultra high dimension setting where the matrix size of the density matrices is comparable to or even exceeds the number of the quantum measurements on the observables . 
the rest of paper proceeds as follows .section [ sec2 ] reviews trace regression and quantum state tomography and states statistical models and data structures .we consider only finite square matrices , since trace regression handles finite matrices , and density matrices are square matrices .section [ sec3 ] frames trace regression and quantum state tomography with summarized measurements as two statistical experiments in le cam paradigm and studies their asymptotic equivalence .section [ sec4 ] introduces a fine scale trace regression model to match quantum state tomography with individual measurements and investigates their asymptotic equivalence .we illustrate the asymptotic equivalence for sparse density matrix class and low rank density matrix class in sections [ sec5 ] and [ sec6 ] , respectively .we collect technical proofs in section [ proofs ] , with additional proofs of technical lemmas in the .suppose that we have independent random pairs from the model where is matrix trace , denotes conjugate transpose , is an unknown by matrix , are zero mean random errors , and are matrix input variables of size by . we consider both fixed and random designs .for the random design case , each is randomly sampled from a set of matrices . in the fixed design case , are fixed matrices .model ( [ trace - regression ] ) is called trace regression and employed in matrix completion .matrix input variables are often sparse in a sense that each has a relatively small number of nonzero entries .trace regression masks the entries of through , and each observation is the trace of the masked corrupted by noise . the statistical problem is to estimate all the entries of based on observations , , which is often referred to as noisy matrix completion . model ( [ trace - regression ] ) and matrix completion are matrix generalizations of a linear model and sparse signal estimation in compressed sensing .see cands and plan ( ) , , , , , and , and .matrix input variables are selected from a matrix set , where are by matrices .below we list some examples of such matrix sets used in matrix completion .let \\[-8pt ] & & \hspace*{5.1pt } j=1,\ldots , p = d^2 , \ell_1 , \ell _ 2=1,\ldots , d \bigr\},\nonumber\end{aligned}\ ] ] where is the canonical basis in euclid space . in this case , if , then , and the observation is equal to some entry of plus noise . more generally , instead of using single , we may define as the sum of several , and then is equal to the sum of some entries of .set where we identify with , , , for , and for define where , and are called the pauli matrices . for with integer , we may use -fold tensor products of , , and to define general pauli matrices and obtain the pauli matrix set where denotes tensor product .the pauli matrices are widely used in quantum physics and quantum information science .matrices in ( [ basisi ] ) are of rank 1 and have eigenvalues and . for matrices in ( [ hermitian - basis ] ) , the diagonal matrices are of rank and have eigenvalues and , and the nondiagonal matrices are of rank and have eigenvalues and .pauli matrices in ( [ pauli - basis ] ) are of full rank , and except for the identity matrix all have eigenvalues . denote by the space of all by complex matrices and define an inner product for . then both ( [ hermitian - basis ] ) and ( [ pauli - basis ] ) form orthogonal bases for all complex hermitian matrices , and the real matrices in ( [ hermitian - basis ] ) or ( [ pauli - basis ] ) form orthogonal bases for all real symmetric matrices . 
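as a small illustration of the matrix sets and of model ( [ trace - regression ] ) , the sketch below builds the pauli basis by tensor products , checks its orthogonality under the trace inner product , and generates noisy trace - regression observations from a toy density matrix . the state , the noise level and the sample size are arbitrary choices of this sketch , not quantities from the paper .

```python
import numpy as np
from itertools import product

# single-qubit Pauli matrices (identity, X, Y, Z)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def pauli_basis(n_qubits):
    """All 4^n tensor products of single-qubit Paulis, as in the set (pauli-basis)."""
    basis = []
    for idx in product(range(4), repeat=n_qubits):
        M = np.array([[1.0 + 0j]])
        for i in idx:
            M = np.kron(M, paulis[i])
        basis.append(M)
    return basis

n_qubits = 2
d = 2 ** n_qubits
basis = pauli_basis(n_qubits)
# orthogonality under <M1, M2> = tr(M1^dagger M2): the Gram matrix is d times the identity
gram = np.array([[np.trace(b1.conj().T @ b2).real for b2 in basis] for b1 in basis])
assert np.allclose(gram, d * np.eye(d * d))

# noisy trace-regression observations y_k = tr(X_k^dagger rho) + eps_k, with X_k from the basis
rng = np.random.default_rng(0)
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())                       # a toy pure-state density matrix
ks = rng.integers(len(basis), size=200)
y = np.array([np.trace(basis[k].conj().T @ rho).real for k in ks])
y = y + 0.05 * rng.standard_normal(y.size)
```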
for the random design case , with , we assume that matrix input variables are independent and sampled from according to a distribution on , the observations from ( [ trace - regression ] ) are , , with sampled from according to the distribution . for the fixed design case ,matrix input variables form a fixed set of matrices , and we assume and . the observations from ( [ trace - regression ] ) are , , with deterministic . for a finite - dimensional quantum system , we describe its quantum state by a density matrix on -dimensional complex space , where density matrix is a by complex matrix satisfying ( 1 ) hermitian , that is , is equal to its conjugate transpose ; ( 2 ) semi - positive definite ; ( 3 ) unit trace , that is , .experiments are conducted to perform measurements on the quantum system and obtain data for studying the quantum system .common quantum measurements are on some observable , which is defined as a hermitian matrix on .assume that the observable has the following spectral decomposition : where are different real eigenvalues of , and are projections onto the eigen - spaces corresponding to .for the quantum system prepared in a state , we need a probability space to describe measurement outcomes when performing measurements on the observable .denote by the measurement outcome of . according to the theory of quantum mechanics, is a random variable on taking values in , with probability distribution given by see , , and .suppose that an experiment is conducted to perform measurements on independently for quantum systems which are identically prepared in the same quantum state . from the experiment we obtain individual measurements , which are i.i.d . according to distribution ( [ measurement ] ) , and denote their average by .the following proposition provides a simple multinomial characterization for the distributions of and .[ prop1 ] as random variables take eigenvalues , we count the number of taking and define the counts by , . then the counts jointly follow the following multinomial distribution : ^{u_1 } \cdots \bigl[\operatorname{tr}({{\mathbf q}}_r { \bolds\rho})\bigr]^{u_r},\nonumber\\[-8pt]\\[-8pt ] \sum_{a=1}^r u_a & = & m\nonumber\end{aligned}\ ] ] and we note the difference between the observable which is a hermitian matrix and its measurement result which is a real - valued random variable . to illustrate the connection between density matrix and the measurements of , we assume that has different eigenvalues . as in , we use the normalized eigenvectors of to form an orthonormal basis , represent under the basis and denote the resulting matrix by . then from ( [ measurement ] ) we obtain that is , with the representation under the eigen basis of , measurements on single observable contain only information about the diagonal elements of . no matter how many measurements we perform on , we can not draw any inference about the off - diagonal elements of based on the measurements on .we usually need to perform measurements on enough different observables in order to estimate the whole density matrix .see , and . 
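the measurement model and the multinomial characterization of proposition [ prop1 ] can be simulated directly . the sketch below groups the eigenvalues of an observable into eigenprojections , computes the outcome probabilities as traces against the state , draws the counts from the corresponding multinomial distribution , and compares the sample average with the expected value . the grouping tolerance and the toy observable and state are ours .

```python
import numpy as np

def measurement_distribution(B, rho, tol=1e-9):
    """Distinct eigenvalues of the observable B and the probabilities tr(Q_a rho),
    where Q_a projects onto the eigenspace of the a-th distinct eigenvalue."""
    vals, vecs = np.linalg.eigh(B)
    lambdas, probs = [], []
    a = 0
    while a < len(vals):
        idx = np.where(np.abs(vals - vals[a]) < tol)[0]
        Q = vecs[:, idx] @ vecs[:, idx].conj().T
        lambdas.append(vals[a])
        probs.append(np.trace(Q @ rho).real)
        a = idx[-1] + 1
    return np.array(lambdas), np.array(probs)

# toy example: the observable Z measured m times on a slightly mixed qubit state
Zobs = np.array([[1, 0], [0, -1]], dtype=complex)
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
lam, p = measurement_distribution(Zobs, rho)

rng = np.random.default_rng(1)
m = 1000
counts = rng.multinomial(m, p)        # the eigenvalue counts follow the multinomial law of Prop. 1
r_bar = np.dot(lam, counts) / m
print("tr(B rho) =", np.trace(Zobs @ rho).real, "  sample average =", r_bar)
```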
in the physics literature , quantum state tomography refers to the reconstruction of a quantum state based on measurements obtained from quantum systems that are identically prepared under the state .statistically it is the problem of estimating the density matrix from the measurements .suppose that quantum systems are identically prepared in a state , is a set of observables available to perform measurements , and each has a spectral decomposition where are different real eigenvalues of , and are projections onto the eigen - spaces corresponding to .we select an observable , say , and perform measurements on for the quantum systems . according to the observable selection we classify the quantum state tomography experiment as either a fixed design or a random design . in a random design , we choose an observable at random from to perform measurements for the quantum systems , while a fixed design is to perform measurements on every observable in for the quantum systems .consider the random design case .we sample an observable from to perform measurements independently for quantum systems , , where observables are independent and sampled from according to a distribution on , specifically we perform measurements on each observable independently for quantum systems that are identically prepared under the state , and denote by the measurement outcomes and the average of the measurement outcomes .the resulting individual measurements are the data , , and the summarized measurements are the pairs , , where , , , are independent , and given for some , the conditional distributions of are given by \[ p\bigl(r_{k\ell } = \lambda^{(j_k)}_a \,\big|\, {{\mathbf m}}_k = {{\mathbf b}}_{j_k}\bigr ) = \operatorname{tr}\bigl({{\mathbf q}}^{(j_k)}_a {\bolds\rho}\bigr ) , \qquad a=1,\ldots , r_{j_k } , \ \ell = 1,\ldots , m , \ j_k \in\{1,\ldots , p\ } , \] with conditional mean and variance \[ \label{tomography2 } e(r_{k\ell}\,|\,{{\mathbf m}}_k={{\mathbf b}}_{j_k } ) = \operatorname{tr}({{\mathbf b}}_{j_k } {\bolds\rho } ) , \qquad \operatorname{var}(r_{k\ell}\,|\,{{\mathbf m}}_k= {{\mathbf b}}_{j_k } ) = \operatorname{tr}\bigl({{\mathbf b}}_{j_k}^2 {\bolds\rho}\bigr ) - \bigl[\operatorname{tr}({{\mathbf b}}_{j_k } {\bolds\rho } ) \bigr]^2 . \] the statistical problem is to estimate from the individual measurements , , or from the summarized measurements . for the fixed design case , we take and . we perform measurements on every observable independently for quantum systems that are identically prepared under the state , and denote by the measurement outcomes and the average of the measurement outcomes .the resulting individual measurements are the data , , and the summarized measurements are the pairs , , where is the same as in ( [ tomography0 ] ) , , , , are independent , and the distributions of are given by \[ \label{tomography4 } p\bigl(r_{j\ell } = \lambda^{(j)}_a\bigr ) = \operatorname{tr}\bigl({{\mathbf q}}^{(j)}_a {\bolds\rho}\bigr ) , \qquad \label{tomography5 } e(r_{j\ell } ) = \operatorname{tr}({{\mathbf b}}_{j } {\bolds\rho } ) , \qquad \operatorname{var}(r_{j\ell } ) = \operatorname{tr}\bigl({{\mathbf b}}_{j}^2 {\bolds\rho}\bigr ) - \bigl[\operatorname{tr}({{\mathbf b}}_{j } {\bolds\rho } ) \bigr]^2 . \] the statistical problem is to estimate from the individual measurements , , or from the summarized measurements . because of convenient statistical procedures and fast implementation algorithms , the summarized measurements instead of the individual measurements are often employed in quantum state tomography [ , , ] .however , in section [ section - fine ] we will show that quantum state tomography based on the summarized measurements may suffer from substantial loss of information , and we can develop more efficient statistical inference procedures based on the individual measurements than on the summarized measurements .
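a random - design tomography experiment with summarized measurements can be mimicked as follows . the sketch samples observables uniformly from the non - identity pauli set , uses the fact that pauli observables have eigenvalues plus and minus one to reduce each batch of measurements to a binomial draw , and records the pairs of observable and average outcome ; the uniform design , the pure test state and the sample sizes are illustrative choices of ours .

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def kron_all(idx):
    M = np.array([[1.0 + 0j]])
    for i in idx:
        M = np.kron(M, paulis[i])
    return M

n_qubits, m, n_obs = 2, 500, 60                  # m shots per chosen observable, n_obs choices
d = 2 ** n_qubits
observables = [kron_all(idx) for idx in product(range(4), repeat=n_qubits)][1:]  # drop identity

rng = np.random.default_rng(2)
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())                      # toy pure state

data = []                                        # summarized measurements (M_k, R_bar_k)
for _ in range(n_obs):
    W = observables[rng.integers(len(observables))]   # uniform random design over the set
    p_plus = (1 + np.trace(W @ rho).real) / 2         # Pauli observables have eigenvalues +-1
    n_plus = rng.binomial(m, p_plus)
    data.append((W, (2 * n_plus - m) / m))
```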
in order to estimate all free entries of , we need the quantum state tomography model identifiable .suppose that all have exact distinct eigenvalues .the identifiability may require ( which is at least ) and for the individual measurements and for the summarized measurements .there is a trade - off between and in the individual measurement case . for large , we need less observables but more measurements on each observable , while for small , we require more observables but less measurements on each observable . in terms of the total number , , of measurement data , the requirement becomes .[ section - p - equivalence ] quantum state tomography and trace regression share the common goal of estimating the same unknown matrix , and it is nature to put them in the le cam paradigm for statistical comparison . we compare trace regression and quantum state tomography in either the fixed design case or the random design case . first , we consider the fixed design case . trace regression ( [ trace - regression ] ) generates data on dependent variables with deterministic matrix input variables , and we denote by the joint distribution of , .quantum state tomography performs measurements on a fixed set of observables and obtains average measurements on whose distributions are specified by ( [ tomography0 ] ) and ( [ tomography4])([tomography5 ] ) , and we denote by the joint distribution of , .both and are probability distributions on measurable space , where is the borel -field on .second we consider the random design case .trace regression ( [ trace - regression ] ) generates data on the pairs , , where matrix input variables are sampled from according to the distribution given by ( [ design - pi ] ) .we denote by the joint distribution of , , for the trace regression model .quantum state tomography yields observations in the form of observables and average measurement results on , , where the distributions of are specified by ( [ design - xi])([tomography2 ] ) .we denote by the joint distribution of , , for the quantum state tomography model . both and probability distributions on measurable space , where consists of all subsets of .denote by a class of semi - positive hermitian matrices with unit trace . for trace regression and quantum state tomography ,we define two statistical models where measurable spaces , , are either for the random design case or for the fixed design case .models and are called statistical experiments in the le cam paradigm .we use le cam s deficiency distance between and to compare the two models .let be a measurable action space , : a loss function , and . formodel , , denote by a decision procedure and the risk from using procedure when is the loss function and is the true value of the parameter .we define deficiency distance between and as the maximum of and , where is referred to as the deficiency of with respect to . if , then every decision procedure in one of the two experiments and has a corresponding procedure in another experiment that comes within of achieving the same risk for any bounded loss .two sequences of statistical experiments and are called asymptotically equivalent if , as . for two asymptotic equivalent experiments and , any sequence of procedures in model has a corresponding sequence of procedures in model with risk differences tending to zero uniformly over and all loss with , and the procedures and are called asymptotically equivalent .see , and . 
to establish the asymptotic equivalence of trace regression and quantum state tomography ,we need to lay down technical conditions and make some synchronization arrangement between observables in quantum state tomography and matrix input variables in trace regression .assume that , and each is a hermitian matrix with at most distinct eigenvalues , where is a fixed integer .matrix input variables in trace regression and observables in quantum state tomography are taken from . for the fixed design case ,we assume , and , . for the random design case , and independently sampled from according to distributions and , respectively , and assume that as , , where .\ ] ] suppose that two models and are identifiable . for trace regression , we assume that are independent , and given , follows a normal distribution with mean zero and variance ^ 2\bigr\}.\ ] ] for with spectral decomposition ( [ basis - diagonal ] ) , , let let and be two fixed constants with . assume for , [ rem1 ] condition ( c1 ) synchronizes matrices used as matrix input variables in trace regression and as observables in quantum state tomography so that we can compare the two models .the synchronization is needed for applying matrix completion methods to quantum state tomography [ ] .the finiteness assumption on is due to the practical consideration .observables in quantum state tomography and matrix input variables in trace regression are often of large size .mathematically the numbers of their distinct eigenvalues could grow with the size , however , in practice matrices with a few distinct eigenvalues are usually chosen as observables to perform measurements in quantum state tomography and as matrix input variables to mask the entries of in matrix completion [ , , , , , , , ] .condition ( c2 ) is to match the variance of in quantum state tomography with the variance of random error in trace regression in order to obtain the asymptotic equivalence , since and always have the same mean .regarding condition ( c3 ) , from ( [ measurement - u])([measurement - ru ] ) and ( [ tomography0])([tomography5 ] ) we may see that each is determined by the counts of random variables taking eigenvalues , and the counts jointly follow a multinomial distribution with parameters of trials and cell probabilities , .condition ( c3 ) is to ensure that the multinomial distributions ( with uniform perturbations ) can be well approximated by multivariate normal distributions so that we can calculate the hellinger distance between the distributions of ( with uniform perturbations ) in quantum state tomography and the distributions of in trace regression and thus establish the asymptotic equivalence of quantum state tomography and trace regression .index in ( [ ci ] ) is to exclude all the cases with or , under which measurement results on are certain , either never yielding measurement results or always yielding results , and their contributions to are deterministic and can be completely separated out from .see further details in remark [ rem4 ] below and the proofs of theorems [ theo1 ] and [ theo2 ] in section [ proofs ] .the following theorem provides bounds on deficiency distance and establishes the asymptotic equivalence of trace regression and quantum state tomography under the fixed or random designs .[ theo1 ] assume that conditions are satisfied .for the random design case , we have where is a generic constant depending only on , integer and constants are , respectively , specified in conditions and , is defined in ( [ pi ] ) , and is given by 
in particular , if for , then where now can be simplified as for the fixed design case , we have where is the same as in , and is given by ( [ zeta - p1 ] ) . [ rem2 ] theorem [ theo1 ] establishes bounds on the deficiency distance between trace regression and quantum state tomography .if the deficiency distance bounds in ( [ randompp ] ) , ( [ randompp1 ] ) and ( [ fixpp ] ) go to zero , trace regression and quantum state tomography are asymptotically equivalent under the corresponding cases . defined in ( [ zeta - p ] ) and ( [ zeta - p1 ] ) has an intuitive interpretation as follows .proposition [ prop1 ] shows that each observable corresponds to a multinomial distribution in quantum state tomography .of the multinomial distributions in quantum state tomography , is the maximum of the average fraction of the nondegenerate multinomial distributions ( i.e. , with at least two cells ) . as we discussed in remark [ rem1 ] , the multinomial distributions have cell probabilities , .since for each , is the trace of the density matrix restricted to the corresponding eigen - space , and , thus if , can not live on any single eigen - space corresponding to one eigenvalue of ; otherwise measurement results on are certain , and the corresponding multinomial and normal distributions are reduced to the same degenerate distribution and hence are always equivalent . therefore , to bound the deficiency distance between quantum state tomography and trace regression we need to consider only the nondegenerate multinomial distributions , and thus appears in all the deficiency distance bounds .since is always bounded by , from theorem [ theo1 ] we have that if , the two models are asymptotically equivalent . as we will see in sections [ section - sparse ] and [ section - low - rank ] , depending on density matrix class as well as the matrix set , may or may not go to zero , and we will show that if it approaches to zero , we may have asymptotic equivalence in ultra - high dimensions where may be comparable to or exceed .[ rem3 ] the asymptotic equivalence results indicate that we may apply matrix completion methods to quantum state tomography by substituting from quantum state tomography for from trace regression .for example , suppose that is an orthonormal basis and has an expansion with .for trace regression , we may estimate by the average of those with corresponding .replacing from trace regression by from quantum state tomography we construct an estimator of by taking the average of those with corresponding .in fact , the resulting estimator based on can be naturally derived from quantum state tomography . from ( [ measurement ] ) , ( [ tomography2 ] ) and ( [ tomography5 ] ) , we have , where is the outcome of measuring , and hence it is natural to estimate by the average of quantum measurements with corresponding .as statistical procedures and fast algorithms are available for trace regression , these statistical methods and computational techniques can be easily used to implement quantum state tomography based on the summarized measurements [ and ] .[ section - fine ] in section [ section - p - equivalence ] for quantum state tomography we define and in ( [ experiment ] ) based on the average measurements , and the asymptotic equivalence results show that trace regression matches quantum state tomography with the summarized measurements , . 
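a simplified , fixed - design variant of the plug - in idea in remark [ rem3 ] above can be coded in a few lines for the pauli basis : estimate each coefficient of the expansion of the density matrix by the average measurement outcome of the corresponding observable and plug the estimates back into the expansion . this sketch does not impose positivity , rank constraints or any penalization , so it is only meant to illustrate substituting averaged quantum measurements for trace - regression responses .

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

def kron_all(idx):
    M = np.array([[1.0 + 0j]])
    for i in idx:
        M = np.kron(M, paulis[i])
    return M

n_qubits, m = 2, 2000
d = 2 ** n_qubits
basis = [kron_all(idx) for idx in product(range(4), repeat=n_qubits)]

rng = np.random.default_rng(3)
v = rng.standard_normal(d) + 1j * rng.standard_normal(d)
v /= np.linalg.norm(v)
rho = np.outer(v, v.conj())

# fixed design: every non-identity Pauli measured m times; estimate tr(B_j rho) by the
# average of the +-1 outcomes, and plug into rho = (1/d) sum_j tr(B_j rho) B_j
coeff = {0: 1.0}                                  # tr(rho) = 1 is known
for j in range(1, len(basis)):
    p_plus = (1 + np.trace(basis[j] @ rho).real) / 2
    coeff[j] = (2 * rng.binomial(m, p_plus) - m) / m

rho_hat = sum(coeff[j] * basis[j] for j in coeff) / d
print("Frobenius error of the plug-in estimator:", np.linalg.norm(rho_hat - rho))
```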
we may use individual measurements instead of their averages [ see ( [ tomography0])([tomography5 ] ) for their definitions and relationships ] , and replace in ( [ experiment ] ) by the joint distribution , , of , , for the random design case [ or , , for the fixed design case ] to define a new statistical experiment for quantum state tomography with the individual measurements , where measurable space is either for the random design case or for the fixed design case . in general, and may not be asymptotically equivalent .as individual measurements may contain more information than their average , may be more informative than , and hence but may be bounded away from zero . as a consequence, we may have goes to zero but and are bounded away from zero . for the special case of where all have at most two distinct eigenvalues such as pauli matrices in ( [ pauli - basis ] ) , are sufficient statistics for the distribution of , and hence and are equivalent , that is , , , and and can still be asymptotically equivalent . in summary , generally trace regression can be asymptotically equivalent to quantum state tomography with summarized measurements but not with individual measurements .in fact , the individual measurements , , from quantum state tomography contain information about , , while observations , , from trace regression have information only about . from ( [ basis - diagonal ] )we get , so the individual measurements from quantum state tomography may be more informative than observations from trace regression for statistical inference of . to match quantum state tomography with individual measurements , we may introduce a fine scale trace regression model and treat trace regression ( [ trace - regression ] ) as a coarse scale model aggregated from the fine scale model as follows .suppose that matrix input variable has the following spectral decomposition : where are real distinct eigenvalues of , and are the projections onto the eigen - spaces corresponding to .the fine scale trace regression model assumes that observed random pairs obey where are random errors with mean zero .models ( [ trace - regression ] ) and ( [ trace - regression1 ] ) are trace regression at two different scales and connected through ( [ trace - regression2 ] ) and the following aggregation relations : the fine scale trace regression model specified by ( [ trace - regression1 ] ) matches quantum state tomography with the individual measurements , . indeed , as ( [ trace - regression2 ] ) indicates a one to one correspondence between and , , we replace by and in ( [ experiment ] ) by the joint distribution , , of , , for the random design case [ or , , for the fixed design case ] , and define the statistical experiment for fine scale trace regression ( [ trace - regression1 ] ) as follows : where measurable space is either for the random design case or for the fixed design case . to study the asymptotic equivalence of fine scale trace regression and quantum state tomography with individual measurements , we need to replace condition ( c2 ) by a new condition for fine scale trace regression: suppose that two models and are identifiable . 
for fine scale trace regression ( [ trace - regression1 ] ) , random errors , , are independent , and given , is a multivariate normal random vector with mean zero and for , , ,\nonumber\\[-8pt]\\[-8pt ] \operatorname{cov}(z_{ka } , z_{kb}|{{\mathbf x}}_k)&= & - \frac{1}{m } \operatorname{tr}\bigl ( { { \mathbf q}}^x_{ka } { \bolds\rho}\bigr ) \operatorname{tr}\bigl({{\mathbf q}}^x_{kb } { \bolds\rho}\bigr).\nonumber\end{aligned}\ ] ] we provide bounds on and establish the asymptotic equivalence of and in the following theorem . [ theo2 ]assume that conditions , and are satisfied . for the random design case, we have where as in theorem [ theo1 ] , is a generic constant depending only on , integer and constants are , respectively , specified in conditions and , and and are given by ( [ pi ] ) and ( [ zeta - p ] ) , respectively . in particular ,if for , then where is given by ( [ zeta - p1 ] ) . for the fixed design case , we have where is the same as in , and is given by ( [ zeta - p1 ] ) . [ rem4 ] for quantum state tomography we regard summarized measurements and individual measurements as quantum measurements at coarse and fine scales , respectively. then theorems [ theo1 ] and [ theo2 ] show that quantum state tomography and trace regression are asymptotically equivalent at both coarse and fine scales .moreover , as measurements at the coarse scale are aggregated from measurements at the fine scale for both quantum state tomography and trace regression , their asymptotic equivalence at the coarse scale is a consequence of their asymptotic equivalence at the fine scale .specifically , the deficiency distance bounds in ( [ randomqq])([fixqq ] ) of theorem [ theo2 ] are derived essentially from the deficiency distance between independent multinomial distributions in quantum state tomography and their corresponding multivariate normal distributions in fine scale trace regression , and the deficiency distance bounds in ( [ randompp ] ) , ( [ randompp1 ] ) and ( [ fixpp ] ) of theorem [ theo1 ] are the consequences of corresponding bounds in theorem [ theo2 ] .fine scale trace regression ( [ trace - regression1 ] ) and condition ( c2 ) indicate that for each , follows a multivariate normal distribution . from ( [ measurement - u ] ) and ( [ tomography1])([tomography5 ] ) we see that given , is jointly determined by the counts of taking the eigenvalues of , and the counts jointly follow a multinomial distribution , with mean and covariance matching with those of . to prove theorems [ theo1 ] and [ theo2 ], we need to derive the hellinger distances of the multivariate normal distributions and their corresponding multinomial distributions with uniform perturbations . has established a bound on deficiency distance between a multinomial distribution and its corresponding multivariate normal distribution through the total variation distance between the multivariate normal distribution and the multinomial distribution with uniform perturbation .the main purpose of the multinomial deficiency bound in is the asymptotic equivalence study for density estimation .consequently , the multinomial distribution in is allowed to have a large number of cells , with bounded cell probability ratios , and his proof techniques are geared up for managing such a multinomial distribution under total variation distance . 
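the moment matching between the multinomial counts and the approximating normal can be checked numerically . in the sketch below ( python ) a hypothetical observable is measured m times on an illustrative state , so the counts of the eigenvalues are multinomial with cell probabilities given by the eigen - projections , and the empirical covariance of the cell fractions is compared with the variance and covariance expressions of the fine scale normal model ; observable , state and sample sizes are assumptions for illustration .

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical observable and state (illustrative only)
B = np.array([[1.0, 1 - 1j], [1 + 1j, -1.0]], dtype=complex)
rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)

evals, evecs = np.linalg.eigh(B)
# cell probabilities tr(Q_a rho); the eigenvalues of this B are distinct
p = np.array([(evecs[:, a].conj() @ rho @ evecs[:, a]).real for a in range(evals.size)])

m, reps = 200, 50000
fractions = rng.multinomial(m, p, size=reps) / m     # empirical cell fractions per experiment

emp_cov = np.cov(fractions.T)
theory_cov = (np.diag(p) - np.outer(p, p)) / m       # var p_a(1-p_a)/m, cov -p_a p_b / m
print("empirical covariance of cell fractions:\n", np.round(emp_cov, 6))
print("matching multinomial/normal covariance:\n", np.round(theory_cov, 6))
```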
since quantum state tomography involves many independent multinomial distributions all with a small number of cells , carter s result is not directly applicable for proving theorems [ theo1 ] and [ theo2 ] , nor his approach suitable for the current model setting . to show theorems [ theo1 ] and [ theo2 ], we deal with independent multinomial distributions in quantum state tomography by deriving the hellinger distances between the perturbed multinomial distributions and the corresponding multivariate normal distributions , and then we establish bounds on the deficiency distance between quantum state tomography and trace regression at the fine scale .moreover , from ( [ measurement - ru ] ) , ( [ tomography0 ] ) and ( [ trace - regression0 ] ) we derive from the counts of individual measurements for quantum state tomography and from fine scale observations for trace regression by the same aggregation relationship , and ( [ c2 * -var ] ) implies ( [ c2-var ] ) , so bounds on can be obtained from those on .thus , theorem [ theo1 ] may be viewed as a consequence of theorem [ theo2 ] . for more detailssee the proofs of theorems [ theo1 ] and [ theo2 ] in section [ proofs ] .[ section - sparse ] since all deficiency distance bounds in theorems [ theo1 ] and [ theo2 ] depend on , we further investigate for two special classes of density matrices : sparse density matrices in this section and low rank density matrices in section [ section - low - rank ] .[ cor - sparse][cor1 ] denote by a collection of density matrices with at most nonzero entries , where is an integer .assume that is selected as basis ( [ hermitian - basis ] ) , and .then where is the maximum number of nonzero diagonal entries of over .furthermore , if conditions and are satisfied , we have where is the same generic constant as in theorems [ theo1 ] and [ theo2 ] .[ rem5 ] since , , and the deficiency distance bounds in corollary [ cor - sparse ] are of order ^{1/2} ] . since and , may go to zero very fast as .as , if , we obtain the asymptotic equivalence of quantum state tomography and trace regression .for example , consider the case that and are bounded , and is of order ( suggested by the bounded and the identifiability discussion at the end of section [ sec2.3 ] ) . in this case the deficiency distance bounds in corollary [ cor - lowrank ] are of order , and we conclude that if , the two models are asymptotically equivalent for any compatible with the model identifiability condition .a particular example is that and grows exponentially faster than .[ rem8 ] the low rank condition on a density matrix indicates that it has a relatively small number of positive eigenvalues , that is , its positive eigenvalues are sparse .we may also explain the condition on the eigenvectors in ( [ rank - r ] ) via sparsity as follows .since is an orthonormal basis in , the real part , , and imaginary part , , of have the following expansions under the basis : where and are coefficients .then a low rank density matrix with representation ( [ rank - r ] ) belongs to , if for , and have cardinality at most , that is , there are at most nonzero coefficients in the expansions ( [ rank - r-1 ] ) . 
as , the eigenvectors have sparse representations .thus , the subclass of density matrices imposes some sparsity conditions on not only the eigenvalues but also the eigenvectors of its members .in fact , indicates that we need some sparsity on both eigenvalues and eigenvectors for estimating large matrices .an important class of quantum states are pure states , which correspond to density of rank one . in order to have a pure state in , its eigenvector corresponding to eigenvalue must be a liner combination of at most basis vectors .such a requirement can be met for a large class of pure states through the selection of proper and suitable bases in .it is interesting to see that matrices themselves in of corollary [ cor4 ] may not be sparse .for example , taking as the haar basis in [ see ] , we obtain that rank one matrix and rank two matrix , which are inside for and , respectively , but not sparse .[ rem9 ] from corollaries [ cor1][cor4 ] , we see that whether goes to zero or not is largely dictated by used in the two models . as we discussed in remarks [ rem5 ] and [ rem7 ] , for certain classes of sparse or low rank density matrices, goes to zero , and we can achieve the asymptotic equivalence of quantum state tomography and trace regression when is comparable to or exceeds .in particular for a special subclass of low rank density matrices we can obtain the asymptotic equivalence even when grows exponentially faster than .we should emphasize that the claimed asymptotic equivalences in the ultra high dimension setting are under some sparse circumstances for which goes to zero , that is , of the multinomial distributions in the quantum state tomography model , a relatively small number of multinomial distributions are nondegenerate , and similarly , the trace regression model as the approximating normal experiment consists of the same small number of corresponding nondegenerate normal distributions . in other words , the asymptotic equivalence in ultra high dimensions may be interpreted as the approximation of a sparse quantum state tomography model by a sparse gaussian trace regression model .this is the first asymptotic equivalence result in ultra high dimensions .it leads us to speculate that sparse gaussian experiments may play an important role in the study of asymptotic equivalence in the ultra high dimension setting .[ proofs ] we need some basic results about the markov kernel method which are often used to bound and prove asymptotic equivalence of and [ see and ] .a markov kernel is defined for and such that for a given , is a probability measure on the -field , and for a fixed , is a measurable function on .the markov kernel maps any into another probability measure (a ) = \int k(\omega , a ) \mathbb { p}_{2,n,{\bolds\rho}}(d \omega ) \in{{{\mathcal}p}}_{1n} ] , and random variable has the distribution .we give bounds on the hellinger distances between the perturbed multinomial distributions and their corresponding multivariate normal distributions in next two lemmas whose proofs are collected in the . 
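the quantity just introduced can also be explored numerically . as a purely illustrative computation ( it plays no role in the formal argument ) , the sketch below ( python ) evaluates the hellinger distance between a binomial distribution smoothed by an additive uniform perturbation of unit width and the normal distribution with matching mean and variance ; the cell probability is an arbitrary choice , and the printed values shrink at a rate consistent with one over the square root of the number of measurements .

```python
import numpy as np
from scipy.stats import binom, norm

def hellinger_perturbed_binom_vs_normal(m, p, pts_per_cell=400):
    """Hellinger distance between Binomial(m,p) + uniform jitter and N(mp, mp(1-p))."""
    mu, sd = m * p, np.sqrt(m * p * (1 - p))
    pmf = binom.pmf(np.arange(m + 1), m, p)
    bc = 0.0                                   # Bhattacharyya coefficient = integral of sqrt(f*g)
    dx = 1.0 / pts_per_cell
    for k in range(m + 1):
        # the perturbed binomial has constant density pmf[k] on the unit interval around k
        x = k - 0.5 + dx * (np.arange(pts_per_cell) + 0.5)
        bc += np.sqrt(pmf[k] * norm.pdf(x, mu, sd)).sum() * dx
    return np.sqrt(max(0.0, 1.0 - bc))

for m in (10, 40, 160, 640):
    h = hellinger_perturbed_binom_vs_normal(m, p=0.3)
    print(f"m = {m:4d}   hellinger distance = {h:.4f}   sqrt(m) * distance = {np.sqrt(m) * h:.3f}")
```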
[ lemmulti - norm ] suppose that is a multinomial distribution , where is a fixed integer , and are two fixed constants .denote by the multivariate normal distribution whose mean and covariance are the same as .let be the convolution of the distribution and the distribution of , where are independent and follow a uniform distribution on , and .then [ lemmany - multi - norm ] suppose that for , is a multinomial distribution , where , is a fixed integer , , and for constants and , denote by the multivariate normal distribution whose mean and covariance are the same as . if , following the same way as in lemma [ lemmulti - norm ] we define as the convolution of and an independent uniform distribution on , and if let .assume that for different are independent , and define product probability measures then we have we need the following lemma on total variation distance of two joint distributions whose proof is in the . [ lemtv ]suppose that and are discrete random variables , and random variables and have joint distributions and , respectively .let and , where and are the respective marginal distributions of and , and and are the conditional distributions of given and given , respectively. then \\[-8pt ] & & { } + e_{f_1 } \bigl [ \bigl\|f_{2|1}(\cdot|u_1 ) - g_{2|1 } ( \cdot|v_1 ) \bigr\|_{\mathrm{tv } } |u_1=v_1 \bigr],\nonumber \ ] ] where denotes expectation under , denotes the total variation norm of the difference of the two conditional distributions and , and the value of the second term on the right - hand side of ( [ tv2 ] ) is clearly specified as follows : \\ & & \qquad= \sum_{x } \bigl\|f_{2|1}(\cdot|x ) - g_{2|1}(\cdot|x ) \bigr\|_{\mathrm{tv } } p(u_1=x).\end{aligned}\ ] ] proof of theorem [ theo1 ] denote by the distribution of and the distribution of , . for different , from trace regressionare independent , and from quantum state tomography are independent , so and for different are independent , and where and are given in ( [ experiment ] ) .suppose that has different eigenvalues , and let , , and .denote by the distribution of .if , we let be the distribution of , where , is equal to plus an independent uniform random variable on , and . note that is the distribution of , and analog to the expression ( [ nru ] ) of in terms of , we define and denote by the distribution of . if , let and . 
as , , and for different are independent , define their product probability measures note that , since and have a one to one correspondence , and the two statistical experiments formed by the distribution of and the distribution of have zero deficiency distance , without confusion we abuse the notation by using it here for the joint distribution of , , as well as in ( [ experimentq2 ] ) for the joint distribution of , .given , let , and follows a multinomial distribution , where and are defined in ( [ basis - diagonal ] ) , and , \nonumber\\ \operatorname{cov}(u_{ka},u_{kb } | { { \mathbf m}}_k= { { \mathbf b}}_{j_k } ) & = & - m \operatorname{tr}({{\mathbf q}}_{j_k a } { \bolds\rho } ) \operatorname{tr}({{\mathbf q}}_{j_k b } { \bolds\rho}),\nonumber\\ & & \eqntext{a \neq b , a , b = 1,\ldots , r_{j_k}.}\end{aligned}\ ] ] then \\ & & { } - \frac{2}{m}\sum_{a=1}^{r_{j_k } } \sum _ { b = a+1}^{r_{j_k } } \lambda_{j_k a } \lambda_{j_k b } \operatorname{tr}({{\mathbf q}}_{j_k a } { \bolds\rho } ) \operatorname{tr}({{\mathbf q}}_{j_k b } { \bolds\rho } ) \\ & = & \frac{1}{m } \bigl\{\operatorname{tr}\bigl({{\mathbf b}}_{j_k}^2 { \bolds\rho}\bigr ) - \bigl[\operatorname{tr}({{\mathbf b}}_{j_k } { \bolds\rho})\bigr]^2\bigr\}\\ & = & \frac{1}{m } \bigl\{\operatorname{tr}\bigl({{\mathbf m}}_k^2 { \bolds\rho}\bigr ) - \bigl[\operatorname{tr}({{\mathbf m}}_k { \bolds\rho})\bigr]^2\bigr\}.\end{aligned}\ ] ] from ( [ trace - regression2 ] ) and ( [ trace - regression1 ] ) , we have that given , , and multivariate normal random vector has conditional mean and conditional covariance matching those of .with we may rewrite ( [ trace - regression1 ] ) and ( [ trace - regression0 ] ) as follows : \\[-8pt ] y_k & = & \frac{1}{m } \sum_{a=1}^{r_{j_k } } \lambda_{ka } v_{ka},\qquad \varepsilon_k= \sum _ { a=1}^{r_{j_k } } \lambda_{ka } z_{ka}.\nonumber\end{aligned}\ ] ] denote by the distribution of .then for different are independent , and where is the joint distribution of , .note that , since , and the two statistical experiments formed by the distribution of and the distribution of have zero deficiency distance , without confusion we abuse the notation by using it here for the joint distribution of , , as well as in ( [ experimentq1 ] ) for the joint distribution of , .conditional on , for , if , and are the same degenerate distribution ; if , is a multinomial distribution with its uniform perturbation , and is a multivariate normal distribution with mean and covariance matching those of .thus applying lemma [ lemmany - multi - norm ] , we obtain that given , where the first inequality is due to ( [ tv - hellinger - kl ] ) . as( [ n*u ] ) and ( [ epsilonv ] ) imply that and are the same weighted averages of components of and , respectively , and are the same respective marginal probability measures of and . 
hence ,conditional on , with and are sampled from according to distributions and , respectively , we have \bigr ) \\ & & \qquad\leq n \max_{1 \leq j \leq p}\biggl { \vert}1 - \frac{\pi(j)}{\xi(j ) } \biggr{\vert}\nonumber\\ & & \qquad\quad{}+ e_\pi\bigl ( e_\pi\bigl [ \bigl\| \mathbb{q}_{1,n,{\bolds\rho } } - \mathbb{q}_{2,n,{\bolds\rho}}^ { * } \bigr\|_{\mathrm{tv } } | { { \mathbf x}}_{1}={{\mathbf m}}_1,\ldots , { { \mathbf x}}_n={{\mathbf m}}_n \bigr ] \bigr ) \nonumber \\ & & \qquad\leq n \gamma_p + \frac{c \kappa}{\sqrt{m } } e_\pi\biggl ( \biggl [ \sum_{k=1}^n 1\bigl(\bigl| { { { \mathcal}i}}_{j_k}({\bolds\rho})\bigr|\geq2\bigr ) \biggr]^{1/2 } \biggr ) \nonumber \\ & & \qquad\leq n \gamma_p + \frac{c \kappa}{\sqrt{m } } \biggl ( \sum _ { k=1}^n e_\pi\bigl [ 1\bigl(\bigl| { { { \mathcal}i}}_{j_k}({\bolds\rho})\bigr|\geq2\bigr ) \bigr ] \biggr)^{1/2 } \nonumber \\ & & \qquad\leq n \gamma_p + \frac{c \kappa}{\sqrt{m } } \biggl ( \sum _ { k=1}^n \sum_{j=1}^p \pi(j ) 1\bigl(\bigl|{{{\mathcal}i}}_{j}({\bolds\rho})\bigr|\geq2\bigr ) \biggr)^{1/2 } \nonumber \\ & & \qquad= n \gamma_p + \frac{c \kappa}{\sqrt{m } } \biggl ( n \sum _ { j=1}^p \pi(j ) 1\bigl(\bigl|{{{\mathcal}i}}_{j}({\bolds\rho})\bigr| \geq2\bigr ) \biggr)^{1/2 } \nonumber \\ & & \qquad\leq n \gamma_p + c \kappa\biggl(\frac{n \zeta_p}{m } \biggr)^{1/2},\nonumber\end{aligned}\ ] ] where the first three inequalities are , respectively , from lemma [ lemtv ] , ( [ pq ] ) and ( [ qq ] ) , the fourth inequality is applying hlder s inequality , and the fifth inequality is due the fact that and are the i.i.d .sample from . combining ( [ kernel ] ) and( [ pp ] ) , we obtain to bound , we employ a round - off procedure to invert the uniform perturbation used to obtain and in ( [ thm - q2q*2p*2 ] ) [ also see carter ( ) , section 5 ] .specifically let , where is a random vector obtained by rounding off to the nearest integer , , and .denote by the distribution of and the distribution of , and let it is easy to see that for any integer - valued random variable , = w,\ ] ] and thus the round - off procedure inverts the uniform perturbation procedure .denote by and the uniform perturbation and the round - off procedure , respectively . then from ( [ thm - q2q*2p*2 ] ) , ( [ thm - q1 ] ) and ( [ thm - q*1p*1 ] ) we have \\[-8pt ] k_1\bigl[k_0(\mathbb{q}_{2,n,{\bolds\rho}})\bigr ] & = & k_1\bigl[\mathbb{q}_{2,n,{\bolds\rho}}^ * \bigr]= \mathbb{q}_{2,n,{\bolds\rho}}.\nonumber\end{aligned}\ ] ] from ( [ thm - k1k0 ] ), we show that conditional on , \bigr\|_{\mathrm{tv } } \nonumber\\ & = & \bigl\| k_1\bigl [ \mathbb{q}_{1,n,{\bolds\rho } } - k_0(\mathbb{q}_{2,n,{\bolds\rho}})\bigr ] \bigr\| _ { \mathrm{tv } } \nonumber\\[-8pt]\\[-8pt ] & \leq&\bigl\| \mathbb{q}_{1,n,{\bolds\rho } } - k_0(\mathbb{q}_{2,n,{\bolds\rho } } ) \bigr\| _ { \mathrm{tv } } \nonumber\\ & = & \bigl\| \mathbb{q}_{1,n,{\bolds\rho } } - \mathbb{q}_{2,n,{\bolds\rho}}^ * \bigr\|_{\mathrm{tv}},\nonumber\end{aligned}\ ] ] which is bounded by ( [ qq ] ) . using the same arguments for showing ( [ pq ] ) and ( [ pp ] ) we derive from ( [ qq ] ) and ( [ q*1q2 ] ) the following result : and applying ( [ kernel ] ) we conclude collecting together the deficiency bounds in ( [ deltap21 ] ) and ( [ deltap12 ] ) we establish ( [ randompp ] ) to bound the deficiency distance for the random design case . for the special case of , and the result ( [ randompp1 ] )follows . 
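the two markov kernels used in this argument are easy to state in code : one adds an independent uniform perturbation of unit width centred at zero to every integer count , the other rounds back to the nearest integer . the sketch below ( python , with an arbitrary multinomial count distribution ) checks the inversion property that rounding off recovers the original counts exactly .

```python
import numpy as np

rng = np.random.default_rng(2)

counts = rng.multinomial(60, [0.5, 0.3, 0.2], size=100000)   # integer-valued "measurement" counts

# K0: uniform perturbation kernel; the half-open interval [-1/2, 1/2) makes rounding unambiguous
perturbed = counts + rng.uniform(-0.5, 0.5, size=counts.shape)

# K1: round-off kernel, map each perturbed value to the nearest integer
recovered = np.floor(perturbed + 0.5).astype(int)

print("round-off inverts the perturbation exactly:", bool(np.all(recovered == counts)))
```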
for the fixed design case , the arguments for proving ( [ fixpp ] ) are the same except for now we simply combine ( [ qq ] ) , ( [ pq ] ) and ( [ q*1q2 ] ) but no need for ( [ pp ] ) and ( [ ppreverse ] ) . proof of theorem [ theo2 ] the proof of theorem [ theo1 ] has essentially established theorem [ theo2 ] .all we need is to modify the arguments as follows . as in the derivation of ( [ pp ] )we apply lemma [ lemtv ] directly to and and use ( [ qq ] ) to get \bigr ) \\ & & \qquad\leq n \gamma_p + c \kappa\biggl(\frac{n \zeta_p}{m } \biggr)^{1/2},\end{aligned}\ ] ] and then we obtain , instead of ( [ deltap21 ] ) , the following result : as in the derivation of ( [ ppreverse ] ) , we apply lemma [ lemtv ] to and and use ( [ qq ] ) and ( [ q*1q2 ] ) to get and then we obtain , instead of ( [ deltap12 ] ) , the following result : putting together the deficiency bounds in ( [ deltap21-thm2 ] ) and ( [ deltap12-thm2 ] ) we establish ( [ randomqq ] ) to bound the deficiency distance for the random design case . to prove corollaries , from theorems [ theo1 ] and [ theo2 ] we need to show the given bounds on and then substitute them into ( [ randompp1 ] ) and ( [ randomqq1 ] ). below we will derive for each case .proof of corollary [ cor - sparse ] we first analyze the eigen - structures of basis matrices given by ( [ hermitian - basis ] ) . for diagonal basis matrix with on entry and elsewhere ,its eigenvalues are and .corresponding to eigenvalue , the eigenvector is , and corresponding to eigenvalue , the eigen - space is the orthogonal complement of .denote by and the projections on the eigen - spaces corresponding to eigenvalues and , respectively . for real symmetric nondiagonal with on and entries and elsewhere ,the eigenvalues are , and .corresponding to eigenvalues , the eigenvectors are , respectively , and corresponding to eigenvalue , the eigen - space is the orthogonal complement of .denote by , and the projections on the eigen - spaces corresponding to eigenvalues , and , respectively . for imaginary hermitian with on entry , on entry and elsewhere ,the eigenvalues are , and .corresponding to eigenvalues , the eigenvector are , respectively , and corresponding to eigenvalue , the eigen - space is the orthogonal complement of .denote by , and the projections on the eigen - spaces corresponding to eigenvalues , and , respectively . for diagonal with on entry ,it is a binomial case , and in order to have , we need .since has at most nonzero diagonal entries , among all the diagonal matrices there are at most of diagonal matrices for which it is possible to have and thus . for nondiagonal , it is a trinomial case , and depend on whether is real or complex . for real symmetric nondiagonal with on and entries , and for imaginary hermitian nondiagonal with on entry and on entry, as is semi - positive with trace , matrix must be semi - positive with trace no more than .of and , if one of them is zero , the semi - positiveness implies .thus , the by matrix has four scenarios : for the last three scenarios under both real symmetric and imaginary hermitian cases , we obtain for both real symmetric and imaginary hermitian cases , in order to have possible , at lease one of and needs to be nonzero . since has at most nonzero diagonal entries , among real symmetric nondiagonal matrices [ or imaginary hermitian nondiagonal matrices ] , there are at most of real symmetric nondiagonal ( or imaginary hermitian nondiagonal matrices ) for which it is possible to have or and thus . 
finally , for , putting together the results on the number of for which it is possible to have in the diagonal , real symmetric and imaginary hermitian cases , we conclude and proof of corollary [ cor - sparse - pauli ] the pauli basis ( [ pauli - basis ] ) has matrices with .we identify index with , corresponds to , and . in two dimensions , pauli matricessatisfy , and .consider . ; ; for [ or , and has eigenvalues .denote by the projections onto the eigen - spaces corresponding to eigenvalues , respectively .then for , and solving the equations we get for , and are orthogonal , and further if , which imply for any density matrix with representation ( [ sparse ] ) under the pauli basis ( [ pauli - basis ] ) , we have and hence .consider special density matrices with expression where is a real number with , and index .to check if , we need to evaluate for given by ( [ sparse1 ] ) , . for , , and since , we have for , from ( [ tr - bq ] ) we have and , and thus for or [ i.e. , or , from ( [ tr - bq1 ] ) we have , and thus equations ( [ tr - qrho1])([tr - qrho3 ] ) immediately show that for given by ( [ sparse1 ] ) and , ] and ] means rounding off to the nearest integer . for trinomial random variable , we have , the conditional distribution of given : , and , where , , . since are between and , and are between and .we have decomposition .denote by the distribution of and the conditional distribution of given .then is the convolution of and an independent uniform distribution on .since the added uniforms are independent of , and is the round - off of , the conditional distribution of given is equal to the conditional distribution of given ] .again we apply lemma [ lembin - norm ] to bound ] .consider the case .write , , and decompose , where , is the conditional distribution of given , , , . since are between and , all are between and that are bounded away from and .as there are differences in conditional variance between and , we handle the differences by introducing as follows .given we define , where the conditional distribution of given is for , , and .then given , add independent uniforms on to , denote the resulting corresponding random variables by , and let .then .note that , and is equal to the round - off of .let , where we denote by the distribution of and the conditional distribution of given .then is the convolution of and an independent uniform distribution on .since the added uniforms are independent of , and is the round - off of , the conditional distribution of given is equal to the conditional distribution of given ,\ldots , u_{j-1}=[u^*_{j-1}] ] and get the last inequality , and for , ^{2/3 } \bigr\}.\ ] ] note that on , ^{2/3}$ ] . 
then for we have on , ^{2/3 } \nonumber \\ & & \qquad\geq ( m - t_{j-3 } ) ( 1-\beta_{j-2 } ) ( 1 - \beta_{j-1})\nonumber\\ & & \qquad\quad { } - ( 1-\beta_{j-1 } ) \bigl[m \beta_{j-2 } ( 1-\beta_{j-2})\bigr]^{2/3 } - \bigl[m \beta_{j-1 } ( 1- \beta_{j-1})\bigr]^{2/3 } \geq \cdots\nonumber \\ & & \qquad\geq m ( 1-\beta_1 ) \cdots(1-\beta_{j-1})\\ & & \qquad\quad { } - m^{2/3 } \sum_{\ell=1}^{j-1 } \bigl [ \beta_{\ell } ( 1-\beta_{\ell})\bigr]^{2/3 } ( 1- \beta_\ell ) \cdots(1-\beta_{j-1 } ) \nonumber \\ & & \qquad\geq c m\nonumber\end{aligned}\ ] ] and thus we evaluate as follows : \\ & \leq & \exp\bigl [ - c m^{1/3 } \bigr ] + \sum _ { \ell=2}^j e_{p^ * } \bigl(1_{a_1 \cdots a_{\ell-1 } } \exp\bigl [ - c ( m - t_{\ell -1})^{1/3 } \bigr ]\bigr ) \nonumber \\ & \leq & \sum_{\ell=1}^j \exp\bigl [ - c m^{1/3 } \bigr ] \leq j \exp\bigl [ - c m^{1/3 } \bigr],\nonumber\end{aligned}\ ] ] where lemma [ lembin - norm ] is employed to bound and , and we use ( [ m - t ] ) to bound . plugging ( [ ear ] ) and ( [ pac ] ) into ( [ rpqprime ] ) and combining it together with ( [ rp*q3 ] ) and ( [ rqqprime ] ) , we obtain + \frac{c}{m } \biggr \}^{1/2 } \\ & \leq&\frac{c r } { \sqrt{m } } + r^2 \exp\bigl [ - c m^{1/3 } \bigr],\end{aligned}\ ] ] which proves the lemma for the case .proof of lemma [ lemmany - multi - norm ] since for different are independent , an application of the hellinger distance property for product probability measures [ ] leads to we note that if , both and are point mass at and thus .hence , applying lemma [ lemmulti - norm ] , we obtain 1(\nu_k\geq2).\ ] ] for exceeding certain integer , and hence for , for , we may adjust constant to make the above inequality still holds for .proof of lemma [ lemtv ] \bigr\|_{\mathrm{tv } } \\ & & { } + \bigl\|f_1(x ) g(x , y)/g_2(x ) - g(x , y ) \bigr\|_{\mathrm{tv}},\end{aligned}\ ] ] where \bigr\|_{\mathrm{tv } } \\ & & \qquad= e_{f_1 } \bigl[\bigl\| f_{2|1 } ( \cdot|u_1 ) - g_{2|1}(\cdot|v_1 ) \bigr\|_{\mathrm{tv } } |u_1=v_1\bigr ] , \\ & & \bigl\|f_1(x ) g(x , y)/g_2(x ) - g(x , y ) \bigr\|_{\mathrm{tv } } \\ & & \qquad= \bigl\|\bigl[f_1(x ) / g_1(x ) -1\bigr ] g(x , y ) \bigr\|_{\mathrm{tv } } \\ & & \qquad\leq\max_x \biggl\ { \biggl{\vert}\frac{p(u_1=x)}{p(v_1=x ) } -1 \biggr{\vert}\bigl\|g(x , y ) \bigr\|_{\mathrm{tv } } \biggr\ } = \max_x \biggl{\vert}\frac { p(u_1=x)}{p(v_1=x)}-1 \biggr{\vert}.\end{aligned}\ ] ]
matrix completion and quantum tomography are two unrelated research areas with great current interest in many modern scientific studies . this paper investigates the statistical relationship between trace regression in matrix completion and quantum state tomography in quantum physics and quantum information science . as quantum state tomography and trace regression share the common goal of recovering an unknown matrix , it is natural to put them in the le cam paradigm for statistical comparison . regarding the two types of matrix inference problems as two statistical experiments , we establish their asymptotic equivalence in terms of deficiency distance . the equivalence study motivates us to introduce a new trace regression model . the asymptotic equivalence provides a sound statistical foundation for applying matrix completion methods to quantum state tomography . we investigate the asymptotic equivalence for sparse density matrices and low rank density matrices and demonstrate that sparsity and low rank are not necessarily helpful for achieving the asymptotic equivalence of quantum state tomography and trace regression . in particular , we show that popular pauli measurements are ill suited for establishing the asymptotic equivalence for sparse density matrices and low rank density matrices .
advances in pulse shaping for ultrafast lasers , fast detection techniques , and their integration via closed - loop algorithms have made it possible to control the dynamics of a variety of quantum systems in the laboratory .excitation may be either in the strong or weak field regime , with the goal of obtaining some desired final state .success in achieving that goal is gauged by a detected signal ( _ e.g. _ , the mass spectrum in the case of selective molecular fragmentation ) , and this information is fed back into a learning algorithm , which alters the laser pulse shape for the next round of experiments .high duty cycles of seconds or less per control experiment make it possible to iterate this process many times and perform efficient experimental searches over a control parameter space defining the laser pulse shape . as an example of this process ,experiments have employed closed - loop methods for selective fragmentation and ionization of organic and organometallic compounds , as well as for enhancing optical response in solid - state and other chemical systems .yields of targeted species are typically enhanced considerably over those obtained by non - optimized methods .it is found that the optimal pulse shapes achieving these enhancements can be quite complicated , and understanding their physical significance has proven difficult .the same general observations also apply to the many optimal control design simulations carried out in recent years .the present paper will address the identification of control mechanisms in theoretical calculations as well as for direct application in the laboratory . in [ beables ] we will first describe john bell s beable model for finite dimensional hilbert spaces , in order to obtain a precise ( but non - unique ) definition of `` mechanism '' for quantum systems in terms of trajectories over their associated classical state spaces . for instance , in molecular systems a trajectory would take the form of a sequence of transitions that starts with a given initial molecular configuration and switches to another configuration at a distinct time , and then to another at , _ etc._to be contrasted with a continuously changing superposition of many such configurations . the means to numerically implement this mechanism concept is presented in [ simulating ] .an application to the problem of population transfer for a model 7-level system is given in [ model ] , which illustrates the usefulness of mechanism information in understanding control processes .we then show how the beable approach leads to the laboratory working relations ( [ jminlogm ] ) and ( [ jlogm ] ) , which make it possible to identify some basic aspects of control mechanisms directly from experimental data .we illustrate this process in [ simulated ] on simulated experimental data for the model 7-level system .the overall laboratory algorithm for extracting control mechanism information is condensed into a general - purpose procedure in [ summary ] .consider a control problem posed in terms of the quantum evolution over a finite dimensional hilbert space with basis where . here incorporates the effect of the control field , and we can explicitly follow the evolution of into a desired final state . 
this paper is concerned with the question : what is the importance of a given sequence of actual transitions or , more specifically , of a given trajectory defined as a continuous function of time in achieving the desired final state ?in other words , it is clear that the system _ is _ being driven into a desired state , but can we find a physical picture of _ how _ this is being accomplished ?a conventional answer to the question raised above , essentially that given by bohr on first seeing feynman s path integral , is to reject the question as ill - posed because quantum mechanics is said to forbid consideration of precisely defined trajectories over the classical state space .nevertheless , it is well established that there exist dynamical models generating an ensemble of trajectories whose statistical properties exactly match those associated with at each . in the case of a continuous state space ,the first such model was that of de broglie , later rediscovered and completed by bohm .they reintroduce classical - like particle trajectories into quantum theory by taking the probability current p_n(0 ) = for all , which expresses the equivalence of bbb theory and ordinary quantum mechanics in terms of statistical predictions .the answer to the initial question regarding the importance of a given trajectory in achieving the desired state is now very simple .the importance may be taken as just the probability of realizing that trajectory with the jump rule ( [ t ] ) .we can express the final state population in terms of these path probabilities via the integral ( path sum ) version of ( [ dfp ] ) : where is the probability of realizing the path under ( [ t ] ) , and gives the beable configuration at .the first sum is taken over all such paths ending on at , and = \{p\ ; |\ ; n_{p+1 } \ne n_p\} ] as \,dr\,ds \ ; \sim \ ; \left(\frac{\mue}{\hbar}\right)^2 \epsilon^3 \omega \ , .\ ] ] the right hand estimate is obtained by expanding to first order about and noticing that the term in commutes with .the error ( [ com ] ) would generally dominate third order terms like .if the control field is given as , where with and possibly adiabatic , we can evaluate by writing ( simply writing is not appropriate because we do not want to exclude weak field excitation , _ , so that may hold . )thus in the adiabatic case can be propagated in steps determined by and rather than the phase factors .consider the evolution of beable trajectories according to ( [ t ] ) , which appears to require a time step small enough that each part of , including the terms , not vary much over the step .nevertheless , the total probability of jumping from to over is given by the integral over that range with corrections .thus we can take an effective jump probability for the interval as given by ( [ t ] ) with evaluated using ( [ inta ] ) .if does not hold , care must be taken to extend the integration in ( [ zint ] ) only over for which , leading to additional boundary terms in the phase difference part of ( [ inta ] ) . moving the ratio outside the integral in ( [ zint ] ) produces an error per time step of order which is again comparable to ( [ com ] ) .therefore beable trajectories may be propagated in steps determined by and , _i.e. _ in sink with the schrodinger propagator .the beable trajectory methodology for identification of control mechanisms will be illustrated with a 7-level system where and are given in fig . 
[ 7levels ] .the ( non - adiabatic ) control field shown in fig .[ efield ] is obtained from a steepest descents algorithm over the space of field histories .it is optimized to transfer population from the ground state to the highest excited state . by , the transfer is found to be completed with approximately 97% efficiency ( see fig . [ psi6 ] ) . together with the second - order schrodinger propagator , using time step fs , an ensemble of beable trajectoriesis evolved , all starting in the ground state at . at each time step , a given beable at site is randomly made either to jump to a neighboring site according to the probabilities given by ( [ t ] ) with ( [ zint ] ) , or else stay at .four sample trajectories are shown in fig .[ 4traj ] . as a check, one can count the number of beables residing on each site at time to estimate the occupation probabilities and verify that they match the quantum prescriptions .the finite - ensemble deviations are observed to be consistent with a convergence law .about 60% of the trajectories generated are found to involve four jumps , and of these the trajectories passing through sites are noticeably more probable than those passing through .6-jump trajectories comprise about 30% of the ensemble . andit becomes increasingly less likely to find trajectories with more and more jumps .the largest number of jumps observed in a single trajectory was 14 .three such trajectories occurred out of the ensemble total .a natural expectation is that the optimal field would concentrate on the higher probability trajectories and not waste much effort on guiding highly improbable trajectories , such as the 14-jumpers , to the target state , as the latter have essentially no impact on the control objective ( final population of the target state ) .interestingly , though , the vast majority of even the lowest probability trajectories are still guided to .apparently , the optimal field is able to coordinate its effect on low probability trajectories with that on other trajectories at no real detriment to the latter .we shall come back to this point later .one way to conveniently categorize the large set of trajectories , each expressible as a sequence of time - labeled jumps , is to drop the time labels , leaving only the `` pathway '' .the importance of a given pathway is then computed as the frequency of trajectories associated with that pathway .table [ tab ] lists some important and/or interesting pathways and their probabilities .[ 46jumpers ] shows some typical trajectories associated with the first and fifth pathways listed in table [ tab]involving 4 and 6 jumps respectively . guides the 4-jumpers upward in energy , and they begin to arrive at around fs , early enough that stragglers can catch up but too late for the over - achievers of the group to head off elsewhere .this corresponds to the onset of heavy growth for around fs ( see fig .[ psi6 ] ) .the 6-jumpers first reach around fs , but almost all fall back to by fs , reuniting with the 4-jumpers just as they begin to jump up to .these 6-jumpers , along with other high - order contributions , thus explain the small surge in between 50 and 80 fs. another much smaller surge around fs and one still smaller around fs ( see inset of fig .[ psi6 ] ) are attributable to 8-th and higher order trajectories `` ringing '' back and forth on . 
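the construction just described can be reproduced in a few dozen lines for a small system . the sketch below ( python ) propagates a hypothetical , constant three - level hamiltonian , evolves an ensemble of beables with a bell - type jump rate max(0 , j_mn)/p_n built from the probability current j_mn = 2 im(psi_m^* h_mn psi_n)/hbar , checks that the beable occupations track the quantum probabilities , and tallies pathway frequencies by dropping the time labels . the hamiltonian , time grid and ensemble size are illustrative assumptions , and the constant couplings stand in for the shaped control field used in the text .

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
hbar = 1.0

# hypothetical constant 3-level Hamiltonian (units of hbar = 1)
H = np.array([[0.0, 0.4, 0.0],
              [0.4, 0.1, 0.4],
              [0.0, 0.4, 0.2]], dtype=complex)
dim, T, steps, n_beables = 3, 25.0, 4000, 5000
dt = T / steps

evals, evecs = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
def psi_at(t):                                   # exact propagation for a constant H
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

state = np.zeros(n_beables, dtype=int)           # all beables start in level 0
paths = [[0] for _ in range(n_beables)]

for s in range(steps):
    psi = psi_at(s * dt)
    P = np.abs(psi) ** 2
    # Bell-type rule: rate of a jump n -> m is max(J_mn, 0) / P_n,
    # with probability current J_mn = 2 Im(psi_m^* H_mn psi_n) / hbar
    J = 2.0 * np.imag(np.conj(psi)[:, None] * H * psi[None, :]) / hbar
    prob = np.maximum(J, 0.0) / np.maximum(P[None, :], 1e-30) * dt   # prob[m, n] for n -> m
    np.fill_diagonal(prob, 0.0)
    colsum = prob.sum(axis=0)
    prob[:, colsum > 1.0] /= colsum[colsum > 1.0]          # guard against rare large rates
    cum = np.cumsum(np.vstack([prob, 1.0 - prob.sum(axis=0, keepdims=True)]), axis=0)
    cum[-1, :] = 1.0
    u = rng.random(n_beables)
    choice = np.argmax(u[None, :] < cum[:, state], axis=0)  # row index dim means "stay put"
    for b in np.where(choice < dim)[0]:
        state[b] = choice[b]
        paths[b].append(int(choice[b]))

print("quantum |psi_n|^2  :", np.round(np.abs(psi_at(T)) ** 2, 3))
print("beable occupations :", np.round(np.bincount(state, minlength=dim) / n_beables, 3))

pathways = Counter(tuple(p) for p in paths)      # drop time labels, keep the visited sequence
for path, count in pathways.most_common(4):
    print("pathway", path, "frequency", round(count / n_beables, 3))
```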
for ,many of the 6-jumpers are at and need to be de - excited on the transition before they can jump back up to .simultaneously , many of the 4-jumpers are at and should not be prematurely excited on , lest they not remain at through fs .the optimal field thus faces a conundrum : how to stimulate the transition preferentially for the 6-jumpers ( in ) over the 4-jumpers ( in ) .the means by which this feat is accomplished may be understood by reference to the jump rule ( [ t ] ) . induces jumps through the explicit factor but also through the quotient , which depends on through .in particular , ( [ rez ] ) implies that at any one time jumps on this transition must be either all upward or all downward .the active direction is switched back and forth according to the sign of .[ rezefield ] plots and , which controls the upward jump rate . for sees that when is large , most often dips below zero , disallowing any upward jumps .the correlation coefficient between and in this range is . on the other hand ,the correlation between and , which controls downward jumping , is over the same range .looking at the trajectories in more detail , one notices a distinct bunching of jumps .beables tend to jump together in narrow time bands , or else to abstain in unison from jumping .this behavior can be gauged by calculating the two - time jump - jump correlation function : where is the number of jumps of type occurring in , and is a subset of the entire ensemble of trajectories .for instance , the two - time function with taken as the set of jumps on the transition is plotted in fig.[j2 ] .the fs time - scale oscillations correspond to the level splittings and the dominant frequency components of .enhanced correlations around correspond to the jump bunching noticeable in the trajectories .two side - bands around fs are associated with 6-jump and higher order trajectories that go up , down , and up again on over the approximate time window .this conclusion can be verified by computing two - time functions with specialized to particular pathways .other much smaller features for fs ( see inset of fig .[ j2 ] ) are attributable to higher order trajectories ringing on . in general ,the fs oscillations characteristic of these two - time functions show that works in an essentially discrete way , turning on the flow of beables over a given transition and then turning it off with a duty cycle of fs .the associated bandwidth of is small enough to discriminate between all non - degenerate except between and , which differ by only .this circumstance leaves effectively three distinguishable transitions . with a total time of 100 fs , the control field potentially enact roughly separate flow operations .the fact that trajectories with pathway probability % are still almost always guided successfully to suggests that these operations are more than necessary to obtain the 97% success rate achieved by the optimal control algorithm in this simulation .it appears that the algorithm actively sweeps these aberrant trajectories back into the mainstream so as to maximize even their minute contribution to the control objective .using these beable trajectory methods to extract mechanism information directly from closed - loop data is complicated by the fact that we can not assume knowledge of a time - dependent wavefunction , hamiltonian , or possibly even the energy level structure of the system . 
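a two - time jump - jump correlation of the kind described above can be computed directly from the lists of jump times , one list per trajectory . the sketch below ( python ) does this for synthetic jump times that are deliberately bunched around two instants , mimicking the bunching discussed in the text ; the binning , the normalization and the synthetic data are illustrative choices and not the exact estimator used for fig . [ j2 ] .

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic ensemble: per trajectory, the jump times of one transition type,
# bunched around t = 20 fs and t = 60 fs
n_traj, t_max, n_bins = 4000, 100.0, 200
jump_times = [np.concatenate([20.0 + 1.5 * rng.standard_normal(rng.poisson(0.7)),
                              60.0 + 1.5 * rng.standard_normal(rng.poisson(0.7))])
              for _ in range(n_traj)]

edges = np.linspace(0.0, t_max, n_bins + 1)
counts = np.array([np.histogram(t, bins=edges)[0] for t in jump_times])   # (n_traj, n_bins)

# two-time jump-jump correlation: covariance of the binned jump counts over the ensemble
mean = counts.mean(axis=0)
C = counts.T.astype(float) @ counts / n_traj - np.outer(mean, mean)

centers = 0.5 * (edges[:-1] + edges[1:])
i20 = np.argmin(np.abs(centers - 20.0))
i60 = np.argmin(np.abs(centers - 60.0))
print("equal-time correlation near 20 fs :", round(C[i20, i20], 4))
print("cross correlation 20 fs vs 60 fs  :", round(C[i20, i60], 4))
```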
frequently in the laboratory ,the only available information consists of final state population measurements and knowledge of the control field .the following analysis aims to show how a limited statistical characterization of beable trajectories may be generated from laboratory data associated with a given optimal control field .in particular , we will show how to extract , the minimum number of jumps necessary to reach the final state from the initial state ; also , the average number of such jumps over an ensemble of beable trajectories ; and possibly higher moments as well .after a general formulation of this analysis is presented , it will be applied to simulated experimental data in the case of the model 7-level system considered above .we propose to obtain mechanism information by examining the effect on the final state population of variations in the control field _away from _ optimality . consider the simplest such scheme , wherein the amplitude of the control field is modulated by a constant independent of time : giving rise to a new time - dependent solution particular , a new final state population and new path probabilities p .these quantities are obtained by taking in ( [ pathsum ] ) , which is to say using and in the jump rule ( [ t ] ) . to express p in terms of prob , we can write where is the number of jumps in and to simplify ( [ prodtx ] ) , note that if were very large , then successive terms in each of the last two products would tend to cancel , leaving only endpoint contributions . making the reasonable approximation that they do completely cancel yields further, we can make the expansion about , where the depend on the path but not on . and similarly : combining these expansions gives a relationship between the path probabilities p in the modulated case to those , prob , in the unmodulated case , which are the ones containing mechanism information regarding the actual optimal control field .we can thus write the final population as where , and higher order terms in the expansion have been dropped .( this approximation is not as crude as it might seem , since for small away from 1 , the behavior of is dominated by the factor . ) cancelling one power of , and recalling that the sum is taken only over paths ending on so that , we have where denotes an average over the trajectory ensemble generated by the ( unmodulated ) optimal field . 
beables in this ensemble are taken as initially distributed at according to , and only trajectories that successfully reach at are counted .note that for close enough to 0 , the minimum value taken on by will dominate the expectation value in ( [ m^j ] ) , and gives the dominant behavior independent of .if we suppose that , where it is relevant , depends primarily on the endpoints of , which are fixed , and only weakly on the rest of the path , then can be approximated by some characteristic value .putting in ( [ m^j ] ) and expanding in powers of now gives for the final state population under a modulated field , expressed in terms of the desired statistical properties of the trajectory ensemble under the optimal field itself .here , enters as an additional parameter that must be extracted from the data .equations ( [ jminlogm ] ) and ( [ jlogm ] ) form the working relations to extract mechanism information from laboratory data .in order to extract quantities like using the results ( [ jminlogm ] ) and ( [ jlogm ] ) data must be generated for the final state population at many values of the modulation factor over some range .the desired quantities are obtained as parameters in fitting ( [ jminlogm ] ) and ( [ jlogm ] ) to the data as a function of .one set of simulated data for the above 7-level system is shown in fig.[fit ] ; the sampling increment is .noise has been introduced by multiplying the exact values by an independent gaussian - distributed random number for each value of , where the distribution is chosen to have mean 1 , and various standard deviations have been sampled .we can determine from the data using ( [ jminlogm ] ) , which implies for instance , fig .[ jmin ] plots the derivative in ( [ dlogpsidlogm ] ) , calculated with finite differences from the simulated data for , which correctly gives as the limiting value .determination of proved robust to multiplicative gaussian noise up to the 40% level ( ) .the quantity is more difficult to extract , because while the sum in ( [ jlogm ] ) converges to 0 as , the terms of the sum individually diverge and must cancel in a delicate manner . therefore truncatingthe sum to an upper limit becomes a very bad approximation near .this unstable behavior can be controlled by carefully setting the range of data to be fitted , given a choice of .it is also convenient to constrain the fit by the previous determination of .we have done this by noting that if is truly optimal , then must have a maximum at , which implies that .this can be used as a weaker constraint on the auxiliary parameter by just requiring in the fit without necessarily supposing that is exactly optimal .we then check that is satisfied in the fit .[ fit ] shows one such fit where the fitting range is .one can see that the fit closely tracks the data for in this range but quickly diverges from the data just below ( and , less severely , above ) due to the sum - truncation instability mentioned previously . in order to identify appropriate ranges in general ,we have searched over all combinations such that mathematica s implementation of the levenberg - marquardt non - linear fitting algorithm was used on simulated data for each value of between 0 and .5 with a .01 increment .the best fit at each was used to determine the value of most consistent with the simulated data at the given noise level . for this analysis chosen somewhat arbitrarily to balance computational cost and precision . 
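the two extraction steps can be prototyped on synthetic data . the sketch below ( python ) assumes the working relation takes the form of an ensemble average of the modulation factor raised to the number of jumps , so that the small - modulation slope of log population versus log modulation gives the minimum jump number , and a truncated expansion in powers of the log modulation near unit modulation gives the mean jump number . the jump - number distribution , noise level , fitting window and truncation order are all illustrative assumptions , and the sensitivity to window and truncation is exactly the delicacy discussed in the text .

```python
import numpy as np

rng = np.random.default_rng(5)

# hypothetical distribution of jump numbers over successful trajectories
J_values = np.array([4, 6])
J_probs = np.array([0.65, 0.35])
print("true J_min =", int(J_values.min()), "  true <J> =", float(J_values @ J_probs))

def population(m):
    # assumed working relation: final population proportional to <m^J> over the ensemble
    return float(np.sum(J_probs * m ** J_values))

ms = np.arange(0.05, 1.30, 0.01)
data = np.array([population(m) for m in ms]) * (1.0 + 0.01 * rng.standard_normal(ms.size))

# minimum jump number from the limiting slope of log(population) versus log(m) at small m
small = ms < 0.15
slope = np.polyfit(np.log(ms[small]), np.log(data[small]), 1)[0]
print("estimated J_min from small-m slope:", round(slope, 2))

# mean jump number from a truncated expansion <m^J> = sum_k <J^k> (log m)^k / k! near m = 1;
# the coefficient of (log m)^1 in a low-order polynomial fit estimates <J>
window = (ms > 0.85) & (ms < 1.15)
coeffs = np.polyfit(np.log(ms[window]), data[window], 3)
print("estimated <J> from cubic fit in log m:", round(coeffs[-2], 2))
```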
in practiceit is likely that the moments for lower values will be most reliably extracted from the data , especially considering the laboratory noise . in the simulationsit was found that could be reliably extracted , but higher moments were unstable and unreliable .for example , was frequently found to lie slightly below the corresponding fit values for , which is inconsistent with the interpretation of these values as statistical moments of an underlying random variable .further constraints could be introduced to attempt to stabilize the extraction of higher moments , but care is needed so as not to overfit the data .[ javg ] shows the values obtained by fitting data with for each choice of and fig .[ fq ] shows the corresponding quality of each fit as measured by its mean squared deviations . in fig .[ javg ] , as well as in the corresponding plots for all other values of studied , two diagonal strips emerge running above a set of smaller islands .the surrounding white `` sea '' comprises fits that give , which we know to be ruled out by the determination of .a virtually identical pattern arises in the fit quality plots .the two strips and underlying islands are seen to give much better fits than the white sea .an additional connected region of good fits is found to extend across the lower - left corner of fig .[ fq ] , nearly all of which are ruled out by .this connected region is somewhat pathological because much of it corresponds to fitting ranges that fail to capture the important behavior of near , and therefore can be ignored .then the best fits for all values of sampled are found to come from the cluster of islands at . as increased from 0 to .5 , these islands flow from to , carrying with them the best fit site . note that the small triangular area in the lower right corner , most noticeable in fig .[ fq ] , is a region excluded from consideration by the third constraint in ( [ range ] ) .the best fit values of are shown as a function of in fig .[ javgsigma ] .these values are to be compared with the exact result obtained from the trajectory ensemble calculations in [ model ] , which require explicit knowledge of the level structure and dipole moments of the system . the ramping behavior in fig .[ javgsigma ] results from the sampling increment of the simulated data .transitioning between one ramp and another corresponds to the shifting of the best fit location by one or two units of .these values are in good agreement ( 3% discrepancy ) with the exact value for noise at the level of 025% .it should be noted that a qualitative change occurs in the case of no noise ( ) , where the islands all disappear and the strips become extended much further on the downward diagonal .inspecting the fits individually indicates that mean squared deviation does not give an adequate measure of fit quality in this special case .this anomaly seems due to the fact that , in the absence of gaussian noise from experimental statistics , systematic deviations from ( [ jlogm ] ) associated with the approximation ( [ prodt ] ) become important .in order to extract the basic mechanism information comprising and from quantum control experimental data , the methods of [ simulated ] can be distilled into the following general procedure : 1 .perform a closed - loop optimization of population transfer , giving an associated optimal laser pulse shape .2 . 
apply a modulated field to the system and measure the the resulting final state population for many values of over some range , where is a positive value near 0 determined by experimental sensitivity .3 . extract from the data by extrapolating the limit ( [ dlogpsidlogm ] ) .4 . choose a truncation of the sum in ( [ jlogm ] ) , _ _e.g.__ , and perform a non - linear fit to the data for each of a set of fitting ranges , _e.g. _ the set ( [ range ] ) .one may choose to constrain the fit by requiring in ( [ jlogm ] ) .plot the fit values of , as in fig.[javg ] , and the corresponding mean squared deviations , as in fig.[fq ] , over the plane .exclude regions in which the fit violates the condition and pathological regions like in the lower - left corner of fig .find the point at which the mean squared deviation is minimized , giving the associated fit value of as that most consistent with the data .since bell s model can be defined for any choice of basis , there is a more general question of how mechanism analysis varies with the choice of basis . beyond that ,bell s jump rule ( [ t ] ) itself permits generalization , providing additional freedom over which trajectory probability assignments may vary .the import of this freedom for mechanism identification remains to be determined .this paper has shown how bell s beable model of quantum mechanics can be used to understand the dynamics of quantum systems driven by complicated optimal control fields .beable trajectories are identified with simple physical processes effecting the controlled transfer of population from one state to another , and aggregations of beable trajectories may be used to compute the importance of different such processes in the dynamics . in the context of a model 7-level system , numerical simulationsreveal four chief pathways and also a host of higher order pathways that are collectively significant on the 40% population level .we have shown how the control field sweeps trajectories into these pathways by switching on and off beable flow over specific transitions on a fs time - scale .beable trajectory methods were then defined in general for extracting statistical mechanism information directly from experimental data , without requiring knowledge of a hamiltonian or even the level structure of the system under study .application to simulated noisy data for the model system produced the correct minimum number of quantum transitions in the control process and the average number of such transitions to within 3% at noise levels up to 25% .the authors acknowledge support from the nsf and dod . ed acknowledges partial support from the program for plasma science and technology at the princeton plasma physics laboratory .an alternative scheme quantifies pathway importance by associating to each pathway not a probability but rather an amplitude and a phase ( a. mitra and h. 
rabitz , to be published ) . although this latter scheme was not originally formulated in terms of dynamical trajectories , the amplitudes correspond in some sense to the trajectory probabilities that result from the jump rule , which does not preserve ( [ modpsi ] ) . the definition of as interaction picture states has the effect of eliminating larger contributions to the jump probabilities from the terms , hence reducing the overall frequency of jumps . had we taken as schrodinger picture states , we would have had to decrease the time step by a factor for comparable results . this factor is around 10 for the model 7-level system . caption of table [ tab ] : the five most probable pathways , followed by the highest probability pathway failing to reach at fs , and then the highest probability pathway involving a topologically non - trivial cycle in state space . the fractional error in the pathway probability is given roughly by . [ table body omitted ]
the dynamics induced while controlling quantum systems by optimally shaped laser pulses have often been difficult to understand in detail . a method is presented for quantifying the importance of specific sequences of quantum transitions involved in the control process . the method is based on a `` beable '' formulation of quantum mechanics due to john bell that rigorously maps the quantum evolution onto an ensemble of stochastic trajectories over a classical state space . detailed mechanism identification is illustrated with a model 7-level system . a general procedure is presented to extract mechanism information directly from closed - loop control experiments . application to simulated experimental data for the model system proves robust with up to 25% noise .
we consider a scenario in which a `` digital good '' is to be sold to many potential buyers , with the objective of maximizing the revenue .a digital good is assumed to be provided with unlimited supply and to have no cost of production .given a set of selfish buyers who may have diverse valuations for the good , a theoretical optimum for the revenue ( commonly denoted ) is given by the sum of the buyers valuations . in a standard mechanism ,the allocation algorithm returns binary values so that a buyer would either win a copy of the good , or fail to do so .here we consider a more expressive class of mechanisms in which a buyer may be offered a probability of receiving the item ; assuming that buyers are risk - neutral , if has valuation for the item , then would have valuation for the probability to receive it .the general question we consider is , to what extent does the expressiveness of lotteries help us to design truthful mechanisms that better approximate ? we consider a setting in which we want to auction lotteries for digital goods , i.e. , goods with unlimited supply and no production costs .a lottery ( for a specified item ) is defined by its win probability ] it is possible to approximate within . ]l|c|c| & ' '' '' upper bound & lower bound + & ' '' '' ( thm [ thm:2value : ub ] ) & ( thm [ thm:2value : lb ] ) + & ' '' '' ( thm [ thm : finite : ub ] ) & , any ( thm [ thm : finite : lb ] ) + & ' '' '' ( thm [ thm : cont : ub ] ) & ( thm [ thm : cont : lowerbound ] ) + the motivation to study two - value domains , , arises from the fact that one can provably achieve the best approximation guarantee with respect to under this assumption ( see below for more details ) .more generally , many real - life applications involve bidders with valuations from a finite domain .money is , by its very nature , discrete with reasonable lower and upper bounds .similarly , auctions on the web may collect bids through drop - down menus ; the values available define a finite domain .regarding bids that may come from a _ continuous _domain ] .domains comprised of only two values are considered in section [ sec:2value ] .the results in this section are extended to any finite domain in section [ sec : finite ] .finally , we consider the relation between our notion of lotteries and the concept of universally truthful auctions in section [ sec : equivalence ] , and use this result to prove the matching lower bound for continuous domains ] domain involves a probability distribution over an infinite number of mechanisms ) .( see discussion at the end of section [ sec : equivalence ] for details . ) because of the aforementioned equivalence , this research can also be seen as a continuation of studying to which extent the knowledge of the domain helps in approximating ( e.g. , theorems [ thm:2value : ub ] and [ thm : finite : ub ] contradict the inapproximability result in , i.e. , lemma 3.5 therein breaks down for finite domains ) .another related work is which considers collusion - resistant mechanisms for bidders with domains similar to ours .a characterization in terms of allocation rules is given ( cf .proposition [ prop : singular ] ) for randomized mechanisms which define allocations , payments and then utilities in expectation over the random coin tosses of the mechanism . 
since in our lotterieswe work with deterministic utilities ( even , when considering lotteries as universally truthful auctions ) this characterization holds also in our setting .a related concept is the _ responsive lotteries _ ,studied in , in which a single agent reports his valuations of a set of alternatives , and is awarded one of them , using probabilities designed to incentivize him to report his true valuations ( up to affine rescaling ) .the difference here is that we have just one kind of item , and multiple agents .hart and nisan use a model similar to ours ( risk - neutral bidders and lottery offers ) in their study of the optimal revenue when selling multiple items .other benchmarks are defined in the literature to compare the revenue of incentive - compatible auctions , see , e.g. , . to the best of our knowledge ,our work is the first in which revenue is compared to .in this section we assume that the bids belong to the interval ] .we defer to section [ sec : equivalence ] , the proof of a matching lower bound for any truthful lottery and bidders having ] in the above system . notice that each variable occurs only twice with a coefficient different from : in particular , its coefficient is in all the constraints relative to a bid vector with ; for , has coefficient since ; finally , for , the variable has coefficient since . in order to prove the theorem we want to study the values of for which this system has no solutions . towards this end, we let be a shorthand for and number all the possible bid vectors as .then we denote by the terms involved in the -th constraint of ( [ eq:2value : lb : ineqsys ] ) relative to , i.e. , we use carver s theorem which characterizes inconsistent systems of linear inequalities : according to , ( [ eq:2value : lb : ineqsys ] ) above admits no solution if and only if we can find non - negative constants , such that with at least one of the s being positive .we next show how to define the constants so that the two occurrences with non - null coefficients of each variable cancel out . .] to show how to define the constants we define a graph which has a vertex for each possible bid vector .we put an edge between two vertices if the corresponding bid vectors are _ adjacent _ , i.e. , they differ in only one entry .each vertex then has exactly neighbors .this is a layered graph with layers .let layer be the set of all vertices whose corresponding bid vectors are comprised of s and s .the graph is indeed layered as by definition a node at layer only has neighbors at layer and . for a bid vector associated vertex lies in layer of the graph we define .the construction is depicted in figure [ fig : lb ] for the case of . to show that this definition of constants cancels out the s in ( [ eq:2value : lb : carverineq ] ) consider a variable .the two occurrences of with a non - zero coefficient are for the bid vectors , in which case has coefficient , and , in which case has coefficient .the two vertices associated to and are by definition connected , with being a node at layer and being a node at layer . by construction , we have and therefore , letting , we have that the constant corresponding to is while the constant corresponding to is .this means that the contribution of variable to ( [ eq:2value : lb : carverineq ] ) is : then , the following holds : we now study for which values of the above sum is not positive . 
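( the layer - by - layer analysis of this sum is continued below . ) as a side remark , the certificate characterization borrowed from carver s theorem is easy to probe numerically : a family of strict linear inequalities a_i x + c_i > 0 is infeasible exactly when nonnegative multipliers , not all zero , cancel all variables and leave a non - positive constant . the toy instance in the sketch below is made up purely for illustration and is unrelated to the actual payment constraints of the proof .

```python
# toy illustration of a carver-style infeasibility certificate.  the instance
# (x1 > 0, x2 > 0, -x1 - x2 > 0) is made up and clearly inconsistent; the code
# recovers nonnegative multipliers lam with sum(lam) = 1, A^T lam = 0 and
# c . lam <= 0, which certifies that no solution exists.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],      # rows a_i of the inequalities a_i . x + c_i > 0
              [0.0, 1.0],
              [-1.0, -1.0]])
c = np.array([0.0, 0.0, 0.0])  # constant terms c_i
m = len(c)

res = linprog(np.zeros(m),                       # any feasible lam will do
              A_ub=[c], b_ub=[0.0],              # c . lam <= 0
              A_eq=np.vstack([A.T, np.ones((1, m))]),
              b_eq=np.concatenate([np.zeros(A.shape[1]), [1.0]]),
              bounds=[(0, None)] * m, method="highs")
print(res.success, res.x)   # True, lam = (1/3, 1/3, 1/3): the system is infeasible
```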
consider all the nodes at layer of the graph .we call this set of nodes .we abuse the notation and say that to mean that the node corresponding to is at layer of the graph ; we then rewrite the summation above and impose it to be less than or equal than : if the whole summation is non - positive , then there exists at least one layer for which the inner summation is non - positive .the bid vectors in have exactly s , i.e. , and , and then their number is .moreover , all those bid vectors have the constant set to .therefore , we have implies that .this means that for these values of , the weighted sum of the known terms in ( [ eq:2value : lb : carverineq ] ) is non - positive .therefore , there exists a non - negative constant which together with the constants defined above satisfies ( [ eq:2value : lb : carverineq ] ) .in other words , for , the system ( [ eq:2value : lb : ineqsys ] ) has no solutions and therefore no truthful lottery can guarantee better approximation ratios .following the same approach used for two - value domains , one could study three - valued domains , .with such a study , one would prove an upper bound of ( this is done by setting and ) and a matching lower bound ( for , e.g. , this is not hard to check using carver s theorem ) . however , we prefer to focus on asymptotic bounds ( on the approximability of ) in terms of the number of allowed bid values in the domain , as opposed to detailed bounds in terms of those values . that is the goal of this section .we begin by proving that it is possible to design collusion - resistant lotteries collecting a fraction of the optimal revenue when bidders bid from a finite domain .[ thm : finite : ub ] there exists an anonymous collusion - resistant lottery for finite domains , , whose revenue is a -approximation of .we define the anonymous , singular lottery and the corresponding payment functions for where we set .let us show that these payment functions indeed lead to collusion - resistance .we have to prove that for any bidder with valuation and any : by definition this is equivalent to proving : now , whenever , the left - hand side of the above inequality is equal to which is non - negative since , .similarly , in the case in which , the left - hand side of the above inequality equals which is non - negative since , . to prove the approximation guarantee we note that for all bid vectors , we have denoting the set of bidders bidding in and being the size of . we can not improve over the above result even by relaxing the collusion - resistance to truthfulness , as shown by next theorem .the proof of the lower bound below is not a simple extension of the arguments used for two - value domains .although the structure of the proof is similar , the difficulty here rests on the fact that the layered graph is not adequate to model the definition of the constants required by carver s theorem .moreover , the study of the weighted sum of the known terms of the system is significantly more involved .[ thm : finite : lb ] for any and , there exist bids such that no truthful lottery for the domain has approximation guarantee better than . by truthfulness constraintsany truthful lottery must satisfy the following upper bounds on the payments , for . recall that and map bid vectors to bidder s payment and win probability , respectively . 
where in the second inequality we recursively use the bound obtained on to , and so on , and where the last inequality follows from the fact that given by truthfulness and by definition , respectively .we let be a positive value and then define the bids of the domain to satisfy , i.e. , , and therefore we are only quantifying the `` gap '' between two consecutive bids of the domain . ] , for , where and get that , for , it holds since .we also note that by voluntary participation , we have we can now bound from above the revenue of a truthful lottery . to ease the notation we set for any , and .we then get where , as above , is the set of bidders bidding in and .we now assume by contradiction that a truthful lottery has approximation guarantee better than for the domain as in the hypothesis . by noticing that for all wethen obtain the following system of linear inequalities for all , where if and otherwise . for , each variable in ( [ eq : finite : lb : ineqsys ] ) has only two occurrences with a coefficient different from zero .indeed , it appears with a coefficient of in the constraint relative to the bid vector and has a factor of in the constraint of .the constraints relative to all the other bid vectors have with a zero coefficient . similarly to the binary - domain case , we enumerate all the possible bid vectors , and for each of those we define where .by carver s theorem to reach a contradiction and show the theorem it is enough to show that there exist non - negative constants , such that with at least one of the s being positive .we call these s the carver s constants .akin to the proof of theorem [ thm:2value : lb ] , to prove ( [ eq : finite : lb : carverineq ] ) we first show that there exist carver s constants which make the sum of the functions equal to ( cf .lemma [ le : finite : variables ] ) and then prove that these constants also annul the sum of the functions ( cf .lemma [ le : finite : knownterms ] ) .[ le : finite : variables ] there exist positive constants such that by canceling out all the variables .these constants are the same for all the bid vectors that have the same value of . for any bid vector , we define [ claim : finite : lb : lambdasprop ] the constants are such that by canceling out all the variables if and only if for any bid vector , and , it holds that where . for the if part , note that by hypothesis in particular cancel out the variable , and ( recall that , for all , we bounded from above the variable with andthen this variable is not in the system ) .as observed above this variable appears in ( [ eq : finite : lb : ineqsys ] ) only twice with a non - null coefficient .specifically , has a coefficient of in the constraint relative to and a coefficient of in the constraint defined upon .then to cancel , it must be the case that for the only if part , take carver s constants which satisfy and assume by contradiction that there exists a variable , and , that has not a coefficient in . since , as noted above , this variable has a non - zero coefficient only in the constraints relative to and respectively then it must be the case that thus a contradiction . we then need to prove that our definition of carver s constants in ( [ eq : finite : lb : constants ] ) satisfies the requirement in claim [ claim : finite : lb : lambdasprop ] . 
take two bid vectors and defined as in the statement of the claim .since , and for , by ( [ eq : finite : lb : constants ] ) , we get that the proof concludes by observing that being the bids in the domain positive , so are the carver s constants we define .moreover , the second part of the lemma follows from the fact that is a function of , .[ le : finite : knownterms ] for the carver s constants defined in lemma [ le : finite : variables ] , it holds : by definition , .\end{aligned}\ ] ] observe that for any and then where last equality follows from the multinomial theorem .consequently , by setting , the theorem follows by the two lemmata above .[ sec : continuous : lb ] given a function we call the _ right - continuous version _ of defined as where , as usual , means that is approaching from the right .[ thm : equiv ] there exists a bijection between truthful lotteries and universally truthful auctions .let be a truthful lottery over ] . )observe that is right - continuous , non - decreasing and that , .therefore , it is known that there exists on some probability space a random variable for which } ] .moreover , price is a weak threshold ( i.e. , bidder wins by declaring at least ) if is right - continuous at , and strict ( i.e. , bidder wins by declaring strictly more than ) otherwise , cf .definition 2.4 in .conversely , starting from the cumulative distribution function of a universally truthful auction we can define a truthful lottery by using the arguments above backwards .note that the universally truthful auction given by may result rather unnatural mainly because it is not clear with which probability a certain price is charged . however , there are cases in which has some properties for which can be easily decoded .let be the number of different prices charged by the lottery for some and .if is finite then is a step function and a discrete random variable ; in this case , one can take the interval ] . finally , let us observe that the proof above holds for collusion - resistance as well when lotteries are singular .[ [ a - matching - lower - bound - for-1h - domains . ] ] a matching lower bound for ] with the aim of understanding how relaxing collusion - resistance to truthfulness affects the approximation guarantee to achievable also in the case of continuous domains . by using a lower bound technique developed in for universally truthful auctions and by relying on the bijection between these auctions and truthful lotteries just proved, we can show that it is not possible to achieve an approximation guarantee better than that given in theorem [ thm : cont : ub ] , even by relaxing collusion - resistance to truthfulness .[ thm : cont : lowerbound ] for truthful lotteries and bidders bidding from a domain ] such that the revenue collected by on input is at most .to prove this , we analyze the behavior of on a bid vector chosen from a carefully designed probability distribution .the outcome of the lottery is a random variable depending on the randomness in and .we prove that the expected revenue of is at most a fraction of the expected optimal revenue .then , by definition of expectation , there must exist a bid vector for which the claim holds .because of theorem [ thm : equiv ] , is equivalent to a universally truthful auction .the latter is characterized in terms of a so - called bid - independent auction ( see definition 2.4 in ) . 
in a bid - independent auctiona price is computed as a ( possibly randomized ) function of only and offered to the bidder .the bidder wins if the price is bounded from above by . in the rest of the proof we will argue about the bid - independent auction defined upon lottery .consider the bid vector in which each is i.i.d .generated from the distribution in which }=1/y ] and }=1/h ] , where the equality in the penultimate step follows since is independent of .therefore , }&=\sum_p { \mbox{\rm e}\left[\,r_i \left|{p_i({{\mathbf } b_{-i}})}\right.=p\,\right]}\cdot { \mbox{\rm prob}\left[\,{p_i({{\mathbf } b_{-i}})}=p\,\right ] } \leq \sum_p { \mbox{\rm prob}\left[\,{p_i({{\mathbf } b_{-i}})}=p\,\right ] } \leq 1.\end{aligned}\ ] ] we can then conclude that the expected revenue of on the randomly generated bid vector is at most . on the other hand , }=n { \mbox{\rm e}\left[\,b_i\,\right ] }= n ( \ln(h)+1) ] . in the first two cases ,the lower bound is proved via a new technique to bound the revenue of truthful lotteries which we regard as our main technical contribution . in the case of bids coming from $ ] , the lower bound is proved by showing the equivalence between universally truthful auctions and truthful lotteries and then applying a known technique due to .the proof of equivalence shows that the concept of incentive - compatible lotteries is rather useful : lotteries can be much more natural to imagine than universally truthful auctions .a number of questions are raised by our results .for example , it would be interesting to evaluate whether the feasibility of using the optimal benchmark is due to the assumptions on the domains , or rather to the expressiveness of lotteries .we believe that a study of lotteries in settings different from that of digital goods can shed light on this important question .notice , however , that moving from the digital good setting may imply that we have to give up collusion - resistance in order to get any reasonable performance .indeed , consider a 1-item auction with 2 possible bid values , ; suppose that had to depend only on bid and not .since we have only one item available to sell , we need , from which it follows that some satisfies .suppose then that all other bidders have value , so that .all the other bidders would pay at most while pays at most , so for this fails to approximate within any constant .* acknowledgments . *we wish to thank patrick briest for proposing this study and for the invaluable discussions we had in the starting phases of this project .we are also indebted to him for the ideas contained in section [ sec : bounded : ub ] .we also thank orestis telelis for his comments on an early draft of this work .
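as a small numerical companion to the lower - bound argument above , the sketch below samples bids from the truncated equal - revenue prior prob [ b >= y ] = 1 / y on [ 1 , h ] ( with an atom of mass 1 / h at h ) and confirms the two facts the proof rests on : every fixed offered price earns expected revenue at most 1 per bidder , while the expected bid is ln ( h ) + 1 . the value of h and the prices probed are arbitrary choices for illustration .

```python
# monte carlo check of the equal-revenue prior used in the lower bound:
# prob[b >= y] = 1/y on [1, h], with an atom of mass 1/h at h.
import numpy as np

rng = np.random.default_rng(1)
h, samples = 100.0, 2_000_000

u = rng.random(samples)
b = np.minimum(1.0 / np.maximum(u, 1.0 / h), h)    # inverse-transform sampling

print(b.mean(), np.log(h) + 1.0)                   # both approximately 5.6
for price in (1.0, 2.0, 10.0, 50.0, h):
    print(price, (price * (b >= price)).mean())    # each approximately 1
```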
there has been much recent work on the revenue - raising properties of truthful mechanisms for selling goods to selfish bidders . typically the revenue of a mechanism is compared against a benchmark ( such as the maximum revenue obtainable by an omniscient seller selling at a fixed price to at least two customers ) , with a view to understanding how much lower the mechanism s revenue is than the benchmark , in the worst case . we study this issue in the context of _ lotteries _ , where the seller may sell a probability of winning an item . we are interested in two general issues . firstly , we aim to use the true optimum revenue as the benchmark for our auctions . secondly , we study the extent to which the expressive power resulting from lotteries helps to improve the worst - case ratio . we study this in the well - known context of _ digital goods _ , where the production cost is zero . we show that in this scenario , collusion - resistant lotteries ( these are lotteries for which no coalition of bidders exchanging side payments has an advantage in lying ) are as powerful as truthful ones .
let be an open , bounded , convex polytope for .this article deals with the numerical approximation of strong solutions to the second - order elliptic partial differential equation ( pde ) where is a given square - integrable function and the operator has nondivergence form .more precisely , it is given through in the case that the coefficient satisfies certain smoothness assumptions , it is known that can be converted into a second - order equation in divergence form through the product rule . if is merely an essentially bounded tensor ,such a reformulation is not valid and variational formulations of are less obvious .it is proved in that the unique solvability is assured through the cordes condition described in [ s : formulation ] below . the first fully analyzed numerical scheme suited for cordes coefficientswas suggested and analyzed in and belongs to the class of discontinuous galerkin methods .it was successfully applied in to fully - nonlinear hamilton bellman equations .further works on discontinuous galerkin methods for nondivergence form problems focus on error estimates in norms for the case of continuous coefficients .other approaches include the discrete hessian method of and the two - scale method of .the latter work is based on the integro - differential approach of and focusses on error estimates .this paper studies variational formulations of for the case of discontinuous coefficients satisfying the cordes condition .the formulation seeks such that where the operator acts on the test functions . in this work ,two possible options are discussed .the choice ( for the function defined in below ) leads to the nonsymmetric formulation of .the second possibility is which results in a symmetric problem that turns out to be the euler lagrange equation for the minimization of the functional .the superscripts and stand for ` non - symmetric ' and ` least - squares ' , respectively , owing to the properties of the individual method .the variational formulations naturally allow the use of -conforming finite element methods .since finite elements are sometimes considered impractical , alternative discretization techniques are desirable .we apply the recently proposed mixed formulation to the present problem .its formulation involves function spaces similar to those employed for the stokes equations . in the sense of the least - squares functional ,the minimization problem is restated as the minimization of over all vector - valued functions with vanishing tangential trace subject to the constraint that . while the continuous formulations are equivalent , the latter can be discretized with -conforming finite elements in the framework of saddle - point problems . in the discrete formulation ,the structure of the differential operator requires the incorporation of an additional stabilization term .this is mainly due to the fact that in the norm is bounded from below by the norm of the laplacian rather than the full hessian tensor .this is also the reason why the application of nonconforming schemes is not as immediate as for the usual biharmonic equation . 
indeed, nonconforming finite element spaces may contain piecewise harmonic functions and , thus , it is not generally possible to bound the piecewise laplacian from below by the piecewise hessian unless further stabilization terms are included .for example , the divergence theorem readily implies that three out of the six local basis function of the morley finite element are harmonic .the conforming and mixed finite element formulations presented here lead to quasi - optimal a priori error estimates and give rise to natural a posteriori error estimates based on strong volume residuals where on any element of the finite element partition the residual reads for the conforming finite element solution ( with an analogous formula for the mixed discretization ) .since this residual equals , it immediately leads to reliable and efficient estimates with explicit constants ( depending solely on the data ) .this error estimator can be employed for guiding a self - adaptive refinement procedure .this work focusses on -adaptivity and does not address a local adaptation of the polynomial degree as in .for the suggested class of discretizations , the convergence of the adaptive algorithm can be proved . since the proof utilizes a somehow indirect argument ( similar to that of ) ,no convergence rate is obtained . the performance of the adaptive mesh - refinement procedure is numerically studied in the experiments of this paper .the remaining parts of this article are as follows : [ s : formulation ] revisits the unique solvability results of and presents the variational formulations ; [ s : fem ] presents the a priori and a posteriori error estimates for finite element discretizations .the convergence analysis of an adaptive algorithm follows in [ s : afem ] .numerical experiments are presented in [ s : num ] .the remarks of [ s : conclusion ] conclude the paper .standard notation on function spaces applies throughout this article .lebesgue and sobolev functions with values in are denoted by , , etc .the identity matrix is denoted by .the inner product of real - valued matrices , is denoted by .the frobenius norm of a matrix is denoted by ; the trace reads . for vectors, refers to the euclidean length .the rotation ( often referred to as or ) of a vector field is denoted by .the union of a collection of subsets of is indicated by the symbol without index and reads section lists some conditions for the unique solvability of and proceeds with the variational formulations . throughout this articleit is assumed that the coefficient is uniformly elliptic , that is , there exist constants such that assume furthermore that there exists some ] taking the value on all elements of that do not meet the boundary and that vanishes on , and that satisfies , for some constant and for any element touching that the boundary conditions of , the identity , and the product rule lead to using the cauchy inequality and , this term can be shown to converge to zero because by the shape - regularity the measure of the elements in meeting the boundary converges to zero as . 
in conclusion, the error estimator converges to zero , and so does the error .this section presents numerical experiments in two space dimensions for the choice , that is the least - squares method .this subsection describes the employed finite element methods and the used adaptive algorithm .the -conforming method used here is the bogner fox schmit ( bfs ) finite element .let be a rectangular partition of , where one hanging node ( that is a point shared by two or more rectangles which is not vertex to all of them ) per edge is allowed .the finite element space is the subspace of consisting of piecewise bicubic polynomials .it is a second - order scheme with expected convergence of for -regular solutions on quasi - uniform meshes with maximal mesh - size . for the error in the and the norm ,the corresponding convergence order is and , respectively . as a mixed scheme , the taylor hood finite element is used . for a regular triangulation of of ,the space is the subspace of consisting of piecewise quadratic polynomials while is the subspace of consisting of piecewise affine and globally continuous functions .it is a second - order scheme with expected convergence of for -regular solution ( meaning that is -regular ) on quasi - uniform meshes . for the error ,the corresponding convergence order is .the computation of the primal variable is performed with a standard finite element method based on piecewise quadratics .the predicted convergence order in the norm is .for any element the error estimators of proposition [ p : confanalysis ] and proposition [ p : mixedanalysisls ] are abbreviated as follows furthermore set , .the following adaptive algorithm is a concrete instance of the procedure outlined in [ s : afem ] .it is based on the drfler marking for some parameter . in the following, refers to or , depending on the used method .departing from an initial mesh it runs the following loop over the index + * solve .* solve the discrete problem with respect to the mesh . + * estimate .* compute the local error estimator contributions , , for the discrete solution . + * mark . *mark a minimal subset such that . + * refine . *compute a refined admissible partition of of minimal cardinality such that all elements of are refined .for rectangular meshes , the local refinement splits every rectangle in four congruent sub - rectangles while further local refinements assure the property of only one hanging node per edge . on triangular meshes ,newest - vertex bisection is employed . in all numerical experimentsthe domain is the square .the parameter for the stabilization in the mixed scheme is chosen as .all convergence history plots are logarithmically scaled .the errors are plotted against the number of degrees of freedom , that is , the space dimension of resp . of . in the adaptive computation ,the parameter is chosen .the coefficient reads .\ ] ] the requirements of [ s : formulation ] are met with , , and , so that .three test cases are considered : in the first experiment the known smooth solution from is considered .the convergence history is displayed in figure [ f : convhistbfssmooth ] for the conforming bfs discretization and in figure [ f : convhistthsmooth ] for the mixed taylor hood method .the convergence rates are of optimal order , that is for the approximation of the hessian and for the approximation of the gradient . 
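before turning to the remaining experiments , the mark step of the adaptive loop described above can be made concrete . the bulk ( dörfler ) criterion selects a minimal set of elements whose squared indicator contributions reach the prescribed fraction of the total ; the sketch below is a generic implementation of that criterion , not the code used for the computations reported here .

```python
# generic sketch of the doerfler (bulk) marking step: return the indices of a
# minimal set of elements whose squared indicators sum to at least a fraction
# theta of the total squared estimator.
import numpy as np

def doerfler_mark(eta, theta=0.5):
    eta2 = np.asarray(eta, dtype=float) ** 2
    order = np.argsort(eta2)[::-1]                 # largest contributions first
    cumulative = np.cumsum(eta2[order])
    k = int(np.searchsorted(cumulative, theta * eta2.sum())) + 1
    return order[:k]                               # elements to be refined

print(doerfler_mark([0.1, 0.4, 0.05, 0.3, 0.2], theta=0.5))   # -> [1]
```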
with the bfs element , is approximated at the optimal rate .the mixed method gives the rate , which is optimal for the used quadratic fem .since the solution in this example is smooth and the discontinuities of the coefficient match with the initial meshes , uniform refinement leads to the same rates as adaptive refinement . in the case of a non - matching initial triangulation, uniform mesh refinement leads to reduced convergence rates as shown in figure [ f : convhistsmooth_nonmatch ] , whereas the adaptive bfs and taylor - hood schemes seem to behave optimal .the initial rectangular mesh is the square subdivided in four rectangles meeting at .the initial triangular mesh is created by inserting a ` criss ' diagonal in each of those four rectangles .a more challenging example is given below . ] ] for a legend .right : bfs method , cf .figure [ f : convhistthsmooth ] for a legend . in both plots, the dotted line indicates ( unlike in figures [ f : convhistbfssmooth][f : convhistthsmooth ] ) .[ f : convhistsmooth_nonmatch],title="fig : " ] for a legend .right : bfs method , cf .figure [ f : convhistthsmooth ] for a legend . in both plots, the dotted line indicates ( unlike in figures [ f : convhistbfssmooth][f : convhistthsmooth ] ) .[ f : convhistsmooth_nonmatch],title="fig : " ] the known singular solution reads in polar coordinates as the sobolev smoothness of near the origin is strictly less than . also near the boundary of the sector the regularity is reduced .the singularities in experiment 2 lead to the suboptimal convergence rates of uniform refinement , displayed in the convergence history of figure [ f : convhistbfssingularity ] for the bfs fem and figure [ f : convhistthsingularity ] for the taylor hood element . in both cases ,the adaptive method converges at optimal rate .the graph of the solution computed with the adaptive bfs element and the adaptively generated meshes are displayed in figure [ f : meshessingularity ] . in both cases ,the refinement is pronounced in the regions where the solution is singular : the origin and the curved sector boundary . ] ] .right : taylor hood , 4518 vertices , 40238 degrees of freedom , .[f : meshessingularity],title="fig : " ] . right : taylor hood , 4518 vertices , 40238 degrees of freedom , .[f : meshessingularity],title="fig:",scaledwidth=23.0% ] . right : taylor hood , 4518 vertices , 40238 degrees of freedom , .[f : meshessingularity],title="fig:",scaledwidth=23.0% ] ] the efficiency indices are defined by for the conforming discretization and by .propositions [ p : confanalysis ] and [ p : mixedanalysisls ] and the values of and , , and , predict that the efficiency index ranges in the interval ] for the mixed method .the efficiency indices for experiments 12 with matching initial meshes are shown in figure [ f : effind ] . for the conforming discretization they range from to , while for the mixed scheme they lie between and .this is an example with right - hand side , where the exact solution is unknown .the used coefficient is , that is , is concatenated with the nonlinear transform .the coefficient is not aligned with the initial meshes and has a sharp discontinuity interface near the point .figure [ f : meshesnonmatching ] displays the sign pattern of its off - diagonal entries . 
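since the well - posedness of both variational formulations hinges on the cordes condition , it is natural to check numerically that a discontinuous coefficient of this kind still admits a positive cordes constant . the sketch below estimates the largest admissible epsilon by sampling ; the coefficient used is a hypothetical stand - in with sign - flipping off - diagonal entries , since the exact entries of experiment 3 are not reproduced here .

```python
# sampling-based check of the cordes condition: in d dimensions one needs an
# eps in (0, 1] with  |A(x)|_F^2 / (tr A(x))^2 <= 1 / (d - 1 + eps)  a.e.;
# the largest admissible eps at the sampled points is returned.
import numpy as np

def cordes_eps(A_fun, points, d=2):
    worst = max(np.sum(A_fun(x) ** 2) / np.trace(A_fun(x)) ** 2 for x in points)
    return 1.0 / worst - (d - 1)       # positive iff cordes holds at the samples

def A_fun(x):                          # hypothetical discontinuous coefficient
    s = np.sign(x[0] * x[1])
    return np.array([[2.0, s], [s, 2.0]])

pts = np.random.default_rng(2).uniform(-1.0, 1.0, size=(10000, 2))
print(cordes_eps(A_fun, pts))          # 0.6 here, so the cordes condition holds
```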
in experiment 3 ,the meshes are not aligned with the discontinuous coefficient .the convergence history is shown in figure [ f : convhistbfs - thnonmatching ] for the bfs method and the taylor hood method .since the exact solution is not known , the error estimators are plotted . in both cases, uniform refinement leads to the suboptimal convergence rate of .the adaptive methods converge at a better rate .still , it is suboptimal of rate .this may be due to under - integration .indeed , a gaussian quadrature rule is used , which is not accurate for discontinuous function , and the adaptive method behaves like a first - order scheme .the adaptive meshes from figure [ f : meshesnonmatching ] show strong refinement towards the jump of the coefficient . ]( left ) ; and adaptive meshes .middle : bfs , 4808 vertices , 18716 degrees of freedom , .right : taylor hood , 5848 vertices , 51956 degrees of freedom , .[ f : meshesnonmatching],title="fig:",scaledwidth=23.0% ] ( left ) ; and adaptive meshes .middle : bfs , 4808 vertices , 18716 degrees of freedom , .right : taylor hood , 5848 vertices , 51956 degrees of freedom , .[ f : meshesnonmatching],title="fig:",scaledwidth=23.0% ] ( left ) ; and adaptive meshes .middle : bfs , 4808 vertices , 18716 degrees of freedom , .right : taylor hood , 5848 vertices , 51956 degrees of freedom , .[ f : meshesnonmatching],title="fig:",scaledwidth=23.0% ]the variational formulation of as well as the new least - squares formulation of elliptic equations in nondivergence form can be discretized with conforming and , more importantly , mixed finite element technologies in a direct way .this allows for quasi - optimal error estimates and a posteriori error analysis .the proven convergence of the adaptive algorithm can be observed in the numerical experiments , and , as an empirical observation , appears to be quasi - optimal , provided the quadrature is accurate enough .the following remarks conclude this paper . [[ a - on - the - choice - of - the - variational - formulation ] ] ( a ) on the choice of the variational formulation + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the least - squares method is presented as an alternative approach to the nonsymmetric formulation of . while the symmetry of the discrete problem is certainly a favourable property , a straightforward generalization to , e.g. , hamilton - jacobi - bellman equations ( as presented in for the nonsymmetric formulation ) is less obvious .this is due to the fact that the nonlinear operator in that problem does not have sufficient smoothness properties that would allow an analysis of a direct least - squares procedure .alternatively , the least - squares method could be applied to the linear problems from a semismooth newton algorithm . however , as semismoothness on the operator level does not hold in general ( cf .* remark 1 ) ) , an analysis of this method requires further investigation .[ [ b - nonconvex - domains ] ] ( b ) nonconvex domains + + + + + + + + + + + + + + + + + + + + + the least - squares formulation may still be meaningful on nonconvex domains , but the solution will generally not coincide with that of .[ [ c - nonconforming - schemes ] ] ( c ) nonconforming schemes + + + + + + + + + + + + + + + + + + + + + + + + + nonconforming finite elements for fourth - order problems have the advantage to be much simpler than their conforming counterparts . 
since discrete analogues of or may not be satisfied without further stabilization terms , their application would require further modifications .[ [ d - lower - order - terms ] ] ( d ) lower - order terms + + + + + + + + + + + + + + + + + + + + + equations in nondivergence form including lower - order terms can be equally well treated in the proposed framework , see also .the least - squares formulation can be derived from the minimization of the functional for data and .for the mixed system this leads to a coupling of equations and . the cordes condition with lower - order terms reads : there exists such that more details on this cordes condition are given in . [[ e - space - dimensions - higher - than - d3 ] ] ( e ) space dimensions higher than + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the main arguments of this work are valid for any space dimension .also the mixed formulation can be formulated in any dimension , provided it is posed in the space satisfying the constraint , which in higher dimensions is understood as .for the design of a numerical method , it remains to identify the space of multipliers .the author thanks prof . ch .kreuzer for a helpful discussion , and the anonymous referees who helped to significantly improve the presentation .the author is funded by deutsche forschungsgemeinschaft ( dfg ) through crc1173 .luis caffarelli and luis silvestre , _ smooth approximations of solutions to nonconvex fully nonlinear elliptic equations _ , nonlinear partial differential equations and related topics , amer .229 , amer .soc . , providence , 2010 , pp . 6785 .ricardo h. nochetto , kunibert g. siebert , and andreas veeser , _ theory of adaptive finite element methods : an introduction _ , multiscale , nonlinear and adaptive approximation , springer , berlin , 2009 , pp .
this paper studies formulations of second - order elliptic partial differential equations in nondivergence form on convex domains as equivalent variational problems . the first formulation is that of smears & süli [ siam j. numer . anal . 51 ( 2013 ) , pp . 2088 - 2106 . ] , and the second one is a new symmetric formulation based on a least - squares functional . these formulations enable the use of standard finite element techniques for variational problems in subspaces of as well as mixed finite element methods from the context of fluid computations . besides the immediate quasi - optimal a priori error bounds , the variational setting allows for a posteriori error control with explicit constants and adaptive mesh - refinement . the convergence of an adaptive algorithm is proved . numerical results on uniform and adaptive meshes are included .
wireless charging , also known as wireless power transfer , is the technology that enables a power source to transmit electromagnetic energy to an electrical load across an air gap , without interconnecting cords .this technology is attracting a wide range of applications , from low - power toothbrush to high - power electric vehicles because of its convenience and better user experience . nowadays ,wireless charging is rapidly evolving from theories toward standard features on commercial products , especially mobile phones and portable smart devices . in 2014, many leading smartphone manufacturers , such as samsung , apple and huawei , began to release new - generation devices featured with built - in wireless charging capability .ims research envisioned that wireless charging would be a billion market by 2016 .pike research estimated that wireless powered products will triple by 2020 to a billion market .compared to traditional charging with cord , wireless charging introduces many benefits as follows .* firstly , it improves user - friendliness as the hassle from connecting cables is removed .different brands and different models of devices can also use the same charger .* secondly , it renders the design and fabrication of much smaller devices without the attachment of batteries . *thirdly , it provides better product durability ( e.g. , waterproof and dustproof ) for contact - free devices . *fourthly , it enhances flexibility , especially for the devices for which replacing their batteries or connecting cables for charging is costly , hazardous , or infeasible ( e.g. , body - implanted sensors ) . *fifthly , wireless charging can provide power requested by charging devices in an on - demand fashion and thus are more flexible and energy - efficient .nevertheless , normally wireless charging incurs higher implementation cost compared to wired charging .first , a wireless charger needs to be installed as a replacement of traditional charging cord .second , a mobile device requires implantation of a wireless power receiver .moreover , as wireless chargers often produce more heat than that of wired chargers , additional cost on crafting material may be incurred .the development of wireless charging technologies is advancing toward two major directions , i.e. , radiative wireless charging ( or radio frequency ( rf ) based wireless charging ) and non - radiative wireless charging ( or coupling - based wireless charging ) .radiative wireless charging adopts electromagnetic waves , typically rf waves or microwaves , as a medium to deliver energy in a form of radiation .the energy is transferred based on the electric field of an electromagnetic wave , which is radiative . due to the safety issues raised by rf exposure , radiative wireless charging usually operates in a low power region .for example , omni - directional rf radiation is only suitable for sensor node applications with up to 10mw power consumption .alternatively , non - radiative wireless charging is based on the coupling of the magnetic - field between two coils within the distance of the coils dimension for energy transmission . as the magnetic - field of an electromagnetic wave attenuates much faster than the electric field , the power transfer distance is largely limited . due to safety implementation, non - radiative wireless charging has been widely used in our daily appliances ( e.g. , from toothbrush to electric vehicle charger ) by far . 
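the gap between the two directions can be quantified with a back - of - the - envelope free - space budget . by the friis relation , the rf power captured at distance d scales as ( lambda / ( 4 pi d ) )^2 , and a further rf - to - dc conversion loss applies at the harvester . the sketch below uses purely illustrative transmit power , antenna gains and conversion efficiency ( they are not figures taken from any cited system ) and lands in the milliwatt range at one metre , consistent with the low - power regime of radiative charging noted above .

```python
# back-of-the-envelope far-field budget via the friis relation
#   P_r = P_t * G_t * G_r * (lambda / (4*pi*d))^2 ,
# followed by an rf-to-dc conversion loss.  all parameter values below are
# illustrative assumptions, not data from any particular charger.
import math

def harvested_power_w(p_tx_w, d_m, f_hz=915e6, g_tx=6.0, g_rx=2.0, eta=0.5):
    lam = 3e8 / f_hz                                   # wavelength in metres
    p_rx = p_tx_w * g_tx * g_rx * (lam / (4 * math.pi * d_m)) ** 2
    return eta * p_rx                                  # dc power after rectification

for d in (1.0, 3.0, 10.0):
    print(d, harvested_power_w(1.0, d))   # ~4 mW at 1 m, tens of uW at 10 m
```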
in this article, we aim to provide a comprehensive survey of the emerging wireless charging systems with regard to the fundamental technologies , international standards as well as applications in wireless communication networks .our previous work in presented a review of research issues in rf - powered wireless networks with the focus on the receiver - side ( i.e. , energy harvester ) designs .this survey differs from in the following aspects : this survey i ) covers various major wireless charging techniques , namely , inductive coupling , magnetic resonance coupling and rf / microwave radiation , from fundamental principles to their applications , ii ) reviews the existing international standards , commercialization and implementations , and iii ) emphasizes on the transmitter - side ( i.e. , wireless charger ) strategy designs for different types of network applications .another recent survey in provides an overview of self - sustaining wireless communications with different energy harvesting techniques , from the perspective of information theory , signal processing and wireless networking .unlike , this survey focuses on the wireless charging strategies in communication networks with wireless energy harvesting capability , also referred to as wireless powered communication networks ( wpcns ) ..summary of existing survey in related area . [ cols="<,<,<",options="header " , ] table [ deployment_strategies ] summarizes the existing wireless charger deployment strategies . clearly , multi - hop provisioning has been less investigated , only in . additionally , it is important to study a system when each device can harvest energy from multiple transmitters . as for the deployment scenarios ,none of the existing works considers the deployment of mobile chargers in mobile networks .mobile charger deployment strategies based on the mobility pattern of user devices can be studied . moreover , we observe that the deployment problems are formulated mostly as optimization problems with different objectives and constraints .all the solutions consequently need global information such as devices battery capacity , location , and even hardware specification parameters and velocity ( e.g. , in ) .collecting these information incurs tremendous communication overhead . though some of the proposed solutions ( e.g. , in and ) claimed to be of low - complexity and scalable for large networks , its feasibility and practicability in deploying them in real systems have to be evaluatedalternatively , decentralized approaches based on local information that relax the communication requirement can be one of the important future directions .moreover , most of the proposals were evaluated by numerical simulation .only references and have provided system - level simulation .there is the need for future research to conduct more assessment through system - level simulations and real experiments to understand the empirical performance .in this section , we first summarize some open issues with regard to both wireless charging technologies and data communication in wireless charging systems .then , we envision several novel paradigms emerging with the advance of wireless charging technologies .this subsection first discusses some technical issues in wireless charging , then highlights some communication challenges . _inductive coupling : _ the increase of wireless charging power density gives rise to several technical issues , e.g. 
, thermal , electromagnetic compatibility , and electromagnetic field problems .this requires high - efficiency power conversion techniques to reduce the power loss at an energy receiver and battery modules with effective ventilation design ._ magnetic resonance coupling : _ magnetic resonance coupling - based techniques , such as witritiy and magmimo , have a larger charging area and are capable of charging multiple devices simultaneously . however , they also cause increased electromagnetic interference with a lower efficiency compared with inductive charging .another limitation with magnetic resonance coupling is the relatively large size of a transmitter . the wireless charging distance is generally proportional to the diameter of the transmitter .therefore , wireless charging over long distance typically requires a large receiver size ._ near - field beamforming : _ for multi - antenna near - field beamforming , the computation of a magnetic - beamforming vector on the transmission side largely depends on the knowledge of the magnetic channels to the receivers .the design of channel estimation and feedback mechanisms is of paramount importance . with the inaccuracy of channel estimation or absence of feedback , the charging performance severely deteriorates .additionally , there exists a hardware limitation that the impedance matching is optimally operated only within a certain range ._ localization for rf - based energy beamforming : _ as aforementioned , energy beamforming is able to enhance the power transfer efficiency .however , the energy transmitter needs to know the location of the energy receiver to steer the energy beam to .localization needs to make real - time spatial estimations for two major parameters , i.e. , angle and distance .self - detection and localization of to - be - charger devices is challenging especially for mobile wpcns .additionally , similar to near - field beamforming , channel estimation is also very critical in the design of beamforming vectors ._ heating effect : _ a metallic or ferromagnetic material can absorb some of the near - field energy if it locates in a proximity of any wireless charger . the induced voltage / current on the material can cause temperature rise . as metallic material is an essential part of electronic devices , the resultant temperature rise lowers charging efficiency and can render bad user experience . although both qi and a4wp have the mechanisms to avoid safety issues such as severe over - temperature , system power loss is still inevitable and can be considerable especially if the device is large in size .moreover , foreign objects may be another factor to cause power loss .how to mitigate the heating effect to diminish power loss is challenging ._ energy conversion efficiency : _ wireless charging requires electricity energy to be transformed from ac to electronmagnetic waves and then to dc .each conversion adds the loss in overall energy , which leads to a normally wireless charging efficiency hovering between and .efforts toward hardware improvement of energy conversion efficiency are instrumental to achieve highly efficient wireless charging . to improve the usability and efficiency of the wireless charger , their data communication capability can be enhanced. _ duplex communication and multiple access : _ the current communication protocols support simplex communication ( e.g. , from a charging device to charger ) . however , there are some important procedures which require duplex communication . 
for example , the charging device can request for a certain charging power , while the charger may request for battery status of the charging device. moreover , the current protocols support one - to - one communication . however ,multiple device charging can be implemented with multiple access for data transmission among charging devices and a charger has to be developed and implemented . _secure communication : _ the current protocols support plain communication between a charger and a charging device .they are susceptible to jamming attacks ( e.g. , to block the communication between the charger and the charging device ) , eavesdropping attacks ( e.g. , to steal charging device s and charger s identity ) and man - in - the - middle attacks ( e.g. , malicious device manipulates or falsifies charging status ) .the security features have to be developed in the communication protocols , taking unique wireless charging characteristics ( e.g. , in - band communication in qi ) into account . _inter - charger communication : _ the protocols support only the communication between a charger and charging device ( i.e. , intra - charger ) .multiple chargers can be connected and their information as well as charging devices information can be exchanged ( i.e. , inter - charger ) . although the concept of wireless charger networking has been proposed in , there are some possible improvements .for example , wireless chargers can be integrated with a wireless access point , which is called a hybrid access point , to provide both data communication and energy transfer services . in this subsection , we discuss several emerging paradigms which are anticipated in wireless powered communication networks .similar to wireless communication networks that provide data service , a wireless charger network can be built to deliver energy provisioning service to distributed users . the wireless charger network that connects a collection of distributed chargers through wired or wireless links allows to exchange information ( e.g. , include availability , location , charging status , and cost of different chargers ) to schedule the chargers .such scheduling can either be made in a distributed or centralized manner to optimize certain objectives ( e.g. , system energy efficiency , total charging cost ) .a wireless charger network can be a hybrid system based on several charging techniques to satisfy heterogeneous charging and coverage requirement .for instance , the system may utilize short - range near - field chargers ( e.g. , inductive - based ) to charge static devices that have high power demand , mid - range near - field chargers ( e.g. , resonance - based ) to charge devices having no line - of - sight charging link and relax the coil alignment requirement .furthermore , a far - field charger ( e.g. powercaster and cota system ) can be employed to cover remote devices with low - power requirement and some local movement requirement , ( e.g. , wearable devices , mp3 , watches , google glasses , and sensors in smart building ) . with the increasing deployment of wireless powered devices , how to provision wireless energy for large - scale networks in an eco - friendly waybecomes an emerging issue . 
as reviewed above ,both static and mobile charger scheduling strategies have been developed for power replenishment .however , these strategies could incur more pollution and energy consumption , if the power sources and charging techniques for wireless chargers are not appropriately adopted .for example , the vehicle equipped with wireless chargers for mobile energy provisioning will produce considerable amount of co emission .moreover , due to the propagation loss and thus low transfer efficiency , a static rf - based charger powered by the electric grid could cause more consumption of conventional fuels , like coal , to harm the environment .currently , how to perform green wireless energy provisioning remains an open issue and has been ignored by the majority of existing studies .one promising solution is to equip renewable energy sources , e.g. , solar , for wireless chargers .however , renewable energy could be unpredictable , thus hard for the chargers to deliver reliable wireless charging services .significant relevant issues can be explored in this direction .full - duplex based wireless information transmitter can be equipped with multiple antennas to transmit information and receive energy simultaneously in the same frequency band .conventionally , a full - duplex system suffers from the self - interference as part of the transmitted rf signals is received by the transmitter itself .self - interference is undesirable because it degrades the desired information signal .however , with the capability of harvesting rf energy , self - interference can facilitate energy saving .in particular , part of the energy used for information transmission can be captured by the receive antenna(s ) for future reuse , referred to as self - energy recycling . this paradigm benefits both energy efficiency and spectrum efficiency . moreover , it can be widely applied to a multi - antenna base station , access point , relay node , and user devices .millimeter - wave cellular communications that operates on frequency ranging from 30 to 300ghz have become a new frontier for next - generation wireless systems .due to high frequencies , millimeter - wave cellular communication is a natural system to facilitate wireless energy beamforming . for a multi - antenna transmitter , the beamforming efficiency increases by increasing the frequency .moreover , frequency is a key factor that affects the physical size of a rectenna based microwave power conversion system . at high frequency ranges ,the required size of the antennas is small , which consequently renders a small form factor for the system .moreover , a small form factor helps to advance beamforming by enabling a larger number of antennas to be placed in an array , which further helps to mitigate charging power attenuation .thus , a millimeter - wave rf transmitter is desired to be utilized for rf - based wireless charging and swipt .as afore - introduced , swipt ( see and references therein ) has been broadly investigated in rf - based wireless communication systems . 
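to make the beamforming discussion concrete , the sketch below simulates rf energy transfer with maximum - ratio transmission : aligning the transmit beamformer with the channel vector makes the collected rf power scale with the squared channel norm , i.e. roughly linearly in the number of transmit antennas for i.i.d. channels . the channel model and power level are illustrative assumptions , not measurements from any system cited above .

```python
# sketch of rf energy beamforming with maximum-ratio transmission (mrt): with
# channel h and unit-norm beamformer w = h/||h||, the received rf power is
# p_tx * |h^H w|^2 = p_tx * ||h||^2, so it grows with the antenna count.
import numpy as np

rng = np.random.default_rng(3)

def harvested_rf_power(n_antennas, p_tx=1.0, trials=10000):
    total = 0.0
    for _ in range(trials):
        h = (rng.standard_normal(n_antennas)
             + 1j * rng.standard_normal(n_antennas)) / np.sqrt(2)
        w = h / np.linalg.norm(h)                  # mrt beamformer
        total += p_tx * abs(np.vdot(h, w)) ** 2    # received rf power
    return total / trials

for n in (1, 4, 16, 64):
    print(n, harvested_rf_power(n))                # grows roughly linearly in n
```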
with the emerging of coupling - based chargers , magnetic induction communication can also be incorporated in near - field charging system to induce swipt .near - field communication based on magnetic field can achieve significant capacity gain compared with rf - based communication .a hardware design and implementation were reported in that an inductive coupling based chip can deliver 11gbps for a distance of 15 m in 180 nm complementary metal - oxide semiconductor ( cmos ) .therefore , swipt - compliant near - field chargers have great potentials in high - speed data offloading in next generation communications . being backhauled with high - speed internet connections , swipt - compliant near - field chargers can be integrated into cellular systems for seamless data service during charging .wireless power technology offers the possibility of removing the last remaining cord connections required to replenish portable electronic devices .this promising technology has significantly advanced during the past decades and introduces a large amount of user - friendly applications . in this article , we have presented a comprehensive survey on the paradigm of wireless charging compliant communication networks .starting from the development history , we have further introduced the fundamental , international standards and network applications of wireless charging in a sequence , followed by the discussion of open issues and envision of future applications .the integration of wireless charging with existing communication networks creates new opportunities as well as challenges for resource allocation .this survey has shown the existing solutions of providing seamless wireless power transfer through static charger scheduling , mobile charger dispatch and wireless charger deployment . among those studies , various emerging issues including online mobile charger dispatch strategies , near - field energy beamforming schemes ,energy provisioning for mobile networks , distributed wireless charger deployment strategies , and multiple access control for wireless power communication networks are less explored and require further investigation .additionally , the open issues and practical challenges discussed in section viii can be considered as main directions for future research .this work was supported in part by the national research foundation of korea ( nrf ) grant funded by the korean government ( msip ) ( 2014r1a5a1011478 ) , singapore moe tier 1 ( rg18/13 and rg33/12 ) and moe tier 2 ( moe2014-t2 - 2 - 015 arc 4/15 ) , and the u.s .national science foundation under grants us nsf ccf-1456921 , cns-1443917 , eccs-1405121 , and nsfc 61428101 .a. costanzo , m. dionigi , d. masotti , m. mongiardo , g. monti , l. tarricone , and r. sorrentino , electromagnetic energy harvesting and wireless power transmission : a unified approach , " _ proceedings of the ieee _ , vol .1692 - 1711 , nov .2014 .a. sample , d. j. yeager , p. s. powledge , a. v. mamishev , j. r. smith , design of an rfid - based battery - free programmable sensing platform , " _ ieee trans . instrumentation and measurement _ ,11 , pp . 2608 - 2615 ,2008 .x. lu , p. wang , d. niyato , d. i. kim , and z. han , wireless networks with rf energy harvesting : a contemporary survey , " _ ieee communications surveys and tutorials _ ,757 - 789 , may 2015 .s. ulukus , a. yener , e. erkip , o. simeone , m. zorzi , p. grover and k. 
Wireless charging is a technology of transmitting power through an air gap to electrical devices for the purpose of energy replenishment. The recent progress in wireless charging techniques and the development of commercial products have provided a promising alternative to address the energy bottleneck of conventional battery-powered portable devices. However, the incorporation of wireless charging into existing wireless communication systems also brings along a series of challenging issues with regard to implementation, scheduling, and power management. In this article, we present a comprehensive overview of wireless charging techniques, the developments in technical standards, and their recent advances in network applications. In particular, with regard to network applications, we review static charger scheduling strategies, mobile charger dispatch strategies, and wireless charger deployment strategies. Additionally, we discuss open issues and challenges in implementing wireless charging technologies. Finally, we envision some practical future network applications of wireless charging.

_Index terms_ — wireless charging, wireless power transfer, inductive coupling, resonance coupling, RF/microwave radiation, energy harvesting, Qi, PMA, A4WP, simultaneous wireless information and power transfer (SWIPT), energy beamforming, wireless powered communication network (WPCN), magnetic MIMO, WiTricity.
The luminosity function (LF) has been an important tool for understanding the evolution of galaxies and quasars, as it provides a census of the galaxy and quasar populations over cosmic time. Quasar luminosity functions have been estimated from optical surveys, X-ray surveys, infrared surveys, radio surveys, and emission lines. In addition, luminosity functions across different bands have been combined to form an estimate of the bolometric luminosity function. Besides providing an important constraint on models of quasar evolution and supermassive black hole growth, studies of the LF have found evidence for 'cosmic downsizing', where the space density of more luminous quasars peaks at higher redshift. Attempts to map the growth of supermassive black holes start from the local supermassive black hole distribution and employ an argument that uses the quasar luminosity function as a constraint on the black hole mass distribution. These studies have found evidence that the highest mass black holes grow first, suggesting that this cosmic downsizing is the result of an anti-hierarchical growth of supermassive black holes.

Similarly, galaxy luminosity functions have been estimated in the optical, X-ray, infrared, ultraviolet, and radio bands, for galaxies in clusters, and for galaxies in voids. The galaxy luminosity function probes several aspects of the galaxy population, namely (a) the evolution of stellar populations and star formation histories, (b) the local supermassive black hole mass distribution via the Magorrian relationship, (c) the dependence of galaxy properties on environment, and (d) constraints on models of structure formation and galaxy evolution.

Given the importance of the luminosity function as an observational constraint on models of quasar and galaxy evolution, it is essential that a statistically accurate approach be employed when estimating these quantities. However, the existence of complicated selection functions hinders this, and, as a result, a variety of methods have been used to accurately account for the selection function when estimating the LF. These include various binning methods, maximum-likelihood fitting, and a powerful semi-parametric approach. In addition, a variety of methods have been proposed for estimating the cumulative distribution function of the LF.
Each of these statistical methods has advantages and disadvantages. Statistical inference based on the binning procedures cannot be extended beyond the support of the selection function, and the cumulative distribution function methods typically assume that luminosity and redshift are statistically independent. Furthermore, one is faced with the arbitrary choice of bin size. The maximum-likelihood approach typically assumes a restrictive and somewhat _ad hoc_ parametric form, and has not been used to give an estimate of the LF normalization; instead, for example, the LF normalization is often chosen to make the expected number of sources detected in one's survey equal to the actual number of sources detected. In addition, confidence intervals based on the errors derived from the various procedures are typically obtained by assuming that the uncertainties on the LF parameters have a Gaussian distribution. While this is valid as the sample size approaches infinity, it is not necessarily a good approximation for finite sample sizes. This is particularly problematic if one employs the best-fit results to extrapolate the luminosity function beyond the bounds of the selection function. It is unclear whether the probability distribution of the uncertainty in the estimated luminosity function below the flux limit is even asymptotically normal.

Motivated by these issues, we have developed a Bayesian method for estimating the luminosity function. We derive the likelihood function of the LF by relating the observed data to the true LF, assuming some parametric form, and derive the posterior probability distribution of the LF parameters, given the observed data. While the likelihood function and posterior are valid for any parametric form, we focus on a flexible parametric model where the LF is modeled as a weighted sum of Gaussian functions. This is a type of 'non-parametric' approach, where the basic idea is that the individual Gaussian functions do not have any physical meaning, but that given enough Gaussian functions one can obtain a suitably accurate approximation to the true LF; a similar approach has been taken previously for estimating galaxy LFs, and within the context of linear regression with measurement error. Modeling the LF as a mixture of Gaussian functions avoids the problem of choosing a particular parametric form, especially in the absence of any guidance from astrophysical theory. The mixture of Gaussians model has been studied from a Bayesian perspective by numerous authors. In addition, we describe a Markov chain Monte Carlo (MCMC) algorithm for obtaining random draws from the posterior distribution. These random draws allow one to estimate the posterior distribution of the LF, as well as of any quantities derived from it. The MCMC method therefore provides a straightforward way of calculating uncertainties on any quantity derived from the LF, such as the redshift where the space density of quasars or galaxies peaks; this has proven to be a challenge for other statistical methods developed for LF estimation.
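To make the mixture idea concrete, the sketch below evaluates the log of a weighted sum of bivariate Gaussian functions, which could serve as a model for the joint density of luminosity and redshift, for instance in the (log L, z) plane. This is only an illustrative NumPy implementation under our own assumptions: the function name, the choice of working in (log L, z), and the log-sum-exp reduction are ours and are not taken from the paper.

```python
import numpy as np

def log_mixture_density(x, weights, means, covs):
    """Log density of a weighted sum of K bivariate Gaussians evaluated at x.

    x       : (n, 2) array of points, e.g. columns (log L, z)
    weights : (K,) mixture weights that sum to one
    means   : (K, 2) component means
    covs    : (K, 2, 2) component covariance matrices
    """
    x = np.atleast_2d(x)
    n, d = x.shape
    log_comps = np.empty((len(weights), n))
    for k, (w, mu, cov) in enumerate(zip(weights, means, covs)):
        diff = x - mu
        prec = np.linalg.inv(cov)
        maha = np.einsum("ni,ij,nj->n", diff, prec, diff)
        log_norm = -0.5 * (d * np.log(2.0 * np.pi) + np.log(np.linalg.det(cov)))
        log_comps[k] = np.log(w) + log_norm - 0.5 * maha
    # log-sum-exp over the K components for numerical stability
    m = log_comps.max(axis=0)
    return m + np.log(np.exp(log_comps - m).sum(axis=0))

# Example: a two-component mixture evaluated at a single (log L, z) point
weights = np.array([0.7, 0.3])
means = np.array([[45.0, 1.0], [46.0, 2.5]])
covs = np.array([np.diag([0.5, 0.3]), np.diag([0.8, 0.6])])
print(log_mixture_density(np.array([[45.5, 1.5]]), weights, means, covs))
```

In such a model the weights, means, and covariance matrices would play the role of the parameters that the MCMC sampler explores, while the number of components controls how flexibly the mixture can track the true LF shape.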
Because the Bayesian approach is valid for any sample size, one is therefore able to place reliable constraints on the LF and related quantities even below the survey flux limits. Because of the diversity and mathematical complexity of some parts of this paper, we summarize the main results here. We do this so that the reader who is only interested in specific aspects of this paper can conveniently consult the sections of interest.

* In [s-lik] we derive the general form of the likelihood function for luminosity function estimation. We show that the commonly used likelihood function based on the Poisson distribution is incorrect, and that the correct form of the likelihood function is derived from the binomial distribution. However, because the Poisson distribution is the limit of the binomial distribution as the probability of including a source in a survey approaches zero, the maximum-likelihood estimates derived from the two distributions give nearly identical results so long as a survey's detection probability is small. The reader who is interested in using the correct form of the likelihood function of the LF should consult this section.

* In [s-posterior] we describe a Bayesian approach to luminosity function estimation. We build on the likelihood function derived in [s-lik] to derive the probability distribution of the luminosity function, given the observed data (i.e., the posterior distribution). We use a simple example based on a Schechter function to illustrate the Bayesian approach, and compare it with the maximum-likelihood approach. For this example, we find that confidence intervals derived from the posterior distribution are valid, while confidence intervals derived from bootstrapping the maximum-likelihood estimate can be too small. The reader who is interested in a Bayesian approach to luminosity function estimation, and how it compares with maximum-likelihood, should consult this section.

* In [s-smodel] we develop a mixture of Gaussian functions model for the luminosity function, deriving the likelihood function and posterior distribution for the model. Under this model, the LF is modeled as a weighted sum of Gaussian functions. This model has the advantage that, given a suitably large number of Gaussian functions, it is flexible enough to give an accurate estimate of any smooth and continuous LF. This allows the model to adapt to the true LF, thus minimizing the bias that can result when assuming a parametric form of the LF. This is particularly useful when extrapolating beyond the flux limits of a survey, where bias caused by parametric misspecification can be a significant concern. The reader who is interested in employing the mixture of Gaussian functions model should consult this section.

* Because of the large number of parameters often associated with luminosity function estimation, Bayesian inference is most easily performed by obtaining random draws of the LF from the posterior distribution. In [s-mha] we describe the Metropolis-Hastings algorithm (MHA) for obtaining random draws of the LF from the posterior distribution.
As an example, we describe an MHA for obtaining random draws of the parameters of a Schechter function from the posterior distribution. Then, we describe a more complex MHA for obtaining random draws of the parameters of the mixture of Gaussian functions model. The reader who is interested in the computational aspects of 'fitting' the mixture of Gaussian functions model, or who is interested in the computational aspects of Bayesian inference for the LF, should consult this section. A computer routine for performing the Metropolis-Hastings algorithm for the mixture of Gaussian functions model is available on request from B. Kelly.

* In [s-sim] we use simulation to illustrate the effectiveness of our Bayesian Gaussian mixture model for luminosity function estimation. We construct a simulated data set similar to the Sloan Digital Sky Survey DR3 quasar catalog. We then use our mixture of Gaussian functions model to recover the true LF, and show that our mixture model is able to place reliable constraints on the LF. We also illustrate how to use the MHA output to constrain any quantity derived from the LF, and how to use the MHA output to assess the quality of the fit. The reader who is interested in assessing the effectiveness of our statistical approach, or who is interested in using the MHA output for statistical inference on the LF, should consult this section.

We adopt a cosmology based on the WMAP best-fit parameters. We use the common statistical notation that an estimate of a quantity is denoted by placing a 'hat' above it; e.g., $\hat{\theta}$ is an estimate of the true value of the parameter $\theta$. The parameter $\theta$ may be scalar or multivalued. We denote a normal density (i.e., a Gaussian distribution) with mean $\mu$ and variance $\sigma^2$ as $N(\mu, \sigma^2)$, and we denote by $N_p(\mu, \Sigma)$ a multivariate normal density with $p$-element mean vector $\mu$ and covariance matrix $\Sigma$. If we want to explicitly identify the argument of the Gaussian function, we use the notation $N(x|\mu, \sigma^2)$, which should be understood to be a Gaussian with mean $\mu$ and variance $\sigma^2$ as a function of $x$. We will often use the common statistical notation where '$\sim$' means 'is drawn from' or 'is distributed as'. This should not be confused with the common usage of $\sim$ implying 'similar to'. For example, $x \sim N(1, 1)$ states that $x$ is drawn from a normal density with mean 1 and variance 1, whereas $x \sim 1$ states that the value of $x$ is similar to one.
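As a generic illustration of the random-walk Metropolis-Hastings algorithm summarized in the [s-mha] bullet above, the following Python sketch draws samples of a parameter vector from an arbitrary log-posterior. It is not the paper's routine: the function names, the Gaussian jumping kernel, and the fixed proposal covariance are our own assumptions for illustration, and the actual MHA for the Schechter and Gaussian-mixture models involves model-specific proposal steps.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, prop_cov, n_iter=20000, rng=None):
    """Random-walk Metropolis-Hastings sampler.

    log_post : callable returning the log posterior density of theta
    theta0   : starting parameter vector
    prop_cov : covariance matrix of the Gaussian jumping distribution
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    logp = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    n_accept = 0
    for t in range(n_iter):
        proposal = rng.multivariate_normal(theta, prop_cov)
        logp_prop = log_post(proposal)
        # Accept with probability min(1, posterior ratio); the proposal is
        # symmetric, so the Hastings correction cancels.
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
            n_accept += 1
        chain[t] = theta
    return chain, n_accept / n_iter
```

In practice one would tune the proposal covariance to reach a reasonable acceptance rate (roughly 20-50% is a common rule of thumb) and discard an initial burn-in portion of the chain before using the draws for inference on the LF or quantities derived from it.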
in this work , the maximum - likelihood estimate of the luminosity function refers to an estimate of the lf obtained by maximizing the likelihood function of the unbinned data .therefore , the maximum - likelihood estimate does not refer to an estimate obtained by maximizing the likelihood function of binned data , such as fitting the results obtained from the technique .the luminosity function , denoted as , is the number of sources per comoving volume with luminosities in the range .the luminosity function is related to the probability density of by where is the total number of sources in the observable universe , and is given by the integral of over and .note that is the probability of finding a source in the range and .equation ( [ eq - phiconvert ] ) separates the lf into its shape , given by , and its normalization , given by .once we have an estimate of , we can easily convert this to an estimate of using equation ( [ eq - phiconvert ] ) .in general , it is easier to work with the probability distribution of and , instead of directly with the lf , because is more directly related to the likelihood function .if we assume a parametric form for , with parameters , we can derive the likelihood function for the observed data .the likelihood function is the probability of observing one s data , given the assumed model .the presence of flux limits and various other selection effects can make this difficult , as the observed data likelihood function is not simply given by equation ( [ eq - phiconvert ] ) . in this case , the set of luminosities and redshifts observed by a survey gives a biased estimate of the true underlying distribution , since only those sources with above the flux limit at a given are detected . in order to derive the observed data likelihood function , it is necessary to take the survey s selection method into account .this is done by first deriving the joint likelihood function of both the observed and unobserved data , and then integrating out the unobserved data .because the data points are independent , the likelihood function for all sources in the universe is in reality , we do not know the luminosities and redshifts for all sources , nor do we know the value of , as our survey only covers a fraction of the sky and is subject to a selection function . as a result, our survey only contains sources . because of this, the selection process must also be included in the probability model , and the total number of sources , , is an additional parameter that needs to be estimated .we can incorporate the sample selection into the likelihood function by including the random detection of sources .we introduce an -element indicator vector that takes on the values if the source is included in our survey and otherwise .note that is a vector of size containing only ones and zeros . in this case, the selection function is the probability of including a source given and , .the complete data likelihood is then the probability that all objects of interest in the universe ( e.g. , all quasars ) have luminosities and redshifts , and that the selection vector has the values , given our assumed luminosity function : here , is the binomial coefficient , denotes the set of included sources , and denotes the set of missing sources .the number of sources detected in a survey is random , and therefore the binomial coefficient is necessary in normalizing the likelihood function , as it gives the number of possible ways to select a subset of sources from a set of total sources . 
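as a concrete illustration of the selection step that enters the likelihood , the sketch below evaluates the detection probability p( i = 1 | theta ) for a toy luminosity - only model with a hard detection limit ; the gamma / schechter - like shape and all numerical values are assumptions for illustration , not the survey model of the paper .

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as gamma_fn

def p_L(L, alpha, Lstar):
    """Normalized toy p(L|theta) of Schechter/gamma form (shape alpha+1, scale L*)."""
    return (L / Lstar) ** alpha * np.exp(-L / Lstar) / (Lstar * gamma_fn(alpha + 1.0))

def p_detect(theta, L_lim):
    """p(I=1|theta): integral of p(L|theta) times the selection function (a hard step at L_lim)."""
    alpha, Lstar = theta
    val, _ = integrate.quad(p_L, L_lim, np.inf, args=(alpha, Lstar))
    return val

theta = (0.8, 1.0)                      # illustrative (alpha, L*)
for L_lim in (0.1, 1.0, 3.0):
    print(f"L_lim = {L_lim}:  p(I=1|theta) = {p_detect(theta, L_lim):.4f}")
```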
because we are interested in the probability of the observed data , given our assumed model , the complete data likelihood function is of little use by itself .however , we can integrate equation ( [ eq - complik1 ] ) over the missing data to obtain the observed data likelihood function .this is because the marginal probability distribution of the observed data is obtained by integrating the joint probability distribution of the observed and the missing data over the missing data : ^{n - n } \prod_{i \in { \cal a}_{obs } } p(l_i , z_i | \theta ) , \label{eq - obslik1 } \end{aligned}\ ] ] where the probability that the survey misses a source , given the parameters , is here , we have introduced the notation that and denote the set of values of and for those sources included in one s survey , and we have omitted terms that do not depend on or from equation ( [ eq - obslik1 ] ) . equation ( [ eq - obslik1 ] ) is the observed data likelihood function , given an assumed luminosity function ( eq.[[eq - phiconvert ] ] ) .qualitatively , the observed data likelihood function is the probability of observing the set of luminosities and redshifts given the assumed luminosity function parameterized by , multiplied by the probability of not detecting sources given , multiplied by the number of ways of selecting a subset of sources from a set of total sources .the observed data likelihood function can be used to calculate a maximum likelihood estimate of the luminosity function , or combined with a prior distribution to perform bayesian inference .the observed data likelihood given by equation ( [ eq - obslik1 ] ) differs from that commonly used in the luminosity function literature . instead, a likelihood based on the poisson distribution is often used . the following equation for the log - likelihood function based on the poisson distribution : inserting equation ( [ eq - phiconvert ] ) for , the log - likelihood based on the poisson likelihood becomes where , , and is given by equation ( [ eq - selprob ] ) .in contrast , the log - likelihood we have derived based on the binomial distribution is the logarithm of equation ( [ eq - obslik1 ] ) : the likelihood functions implied by equations ( [ eq - poislik2 ] ) and ( [ eq - loglik ] ) are functions of , and thus the likelihoods may also be maximized with respect to the lf normalization .this is contrary to what is often claimed in the literature , where the lf normalization is typically chosen to make the expected number of sources observed in one s survey equal to the actual number observed .the binomial likelihood , given by equation ( [ eq - obslik1 ] ) , contains the term , resulting from the fact that the total number of sources included in a survey , , follows a binomial distribution .for example , suppose one performed a survey over one quarter of the sky with no flux limit .assuming that sources are uniformly distributed on the sky , the probability of including a source for this survey is simply .if there are total sources in the universe , the total number of sources that one would find within the survey area follows a binomial distribution with ` trials ' and probability of ` success ' .however , the poisson likelihood is derived by noting that the number of sources detected in some small bin in follows a poisson distribution . 
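a quick numerical check of the quarter - sky example above : with no flux limit and an inclusion probability of 1/4 , the number of detected sources has binomial , not poisson , variance .  the totals below are illustrative , and the point carries directly into the discussion continued in the next paragraph .

```python
import numpy as np

rng = np.random.default_rng(0)

N_total, p_incl = 40_000, 0.25     # hypothetical total number of sources; quarter-sky inclusion
# number of sources falling inside the survey area, over many realizations of the sky placement
n = np.array([(rng.random(N_total) < p_incl).sum() for _ in range(2000)])

print(f"mean n = {n.mean():8.1f}   (N*p            = {N_total * p_incl:8.1f})")
print(f"var  n = {n.var():8.1f}   (binomial Np(1-p) = {N_total * p_incl * (1 - p_incl):8.1f},"
      f"  poisson Np = {N_total * p_incl:8.1f})")
```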
since the sum of a set of poisson distributed random variables also follows a poisson distribution , this implies that the total number of sources detected in one s survey , , follows a poisson distribution .however , actually follows a binomial distribution , and thus the observed data likelihood function is not given by the poisson distribution .the source of this error is largely the result of approximating the number of sources in a bin as following a poisson distribution , when in reality it follows a binomial distribution .although the poisson likelihood function for the lf is incorrect , the previous discussion should not be taken as a claim that previous work based on the poisson likelihood function is incorrect . when the number of sources included in one s sample is much smaller than the total number of sources in the universe , the binomial distribution is well approximated by the poisson distribution . therefore, if the survey only covers a small fraction of the sky , or if the flux limit is shallow enough such that , then the poisson likelihood function should provide an accurate approximation to the true binomial likelihood function .when this is true , statistical inference based on the poisson likelihood should only exhibit negligible error , so long as there are enough sources in one s survey to obtain an accurate estimate of the lf normalization . in [ s - schechter2 ] we use simulations to compare results obtained from the two likelihood functions , and to compare the maximum - likelihood approach to the bayesian approach .we can combine the likelihood function for the lf with a prior probability distribution on the lf parameters to perform bayesian inference on the lf .the result is the posterior probability distribution of the lf parameters , i.e. , the probability distribution of the lf parameters given our observed data .this is in contrast to the maximum likelihood approach , which seeks to relate the observed value of the mle to the true parameter value through an estimate of the sampling distribution of the mle . in appendix [ a - mle_vs_bayes ] we give a more thorough introduction to the difference between the maximum likelihood and bayesian approaches .the posterior probability distribution of the model parameters is related to the likelihood function and the prior probability distribution as where is the prior on , and is the observed data likelihood function , given by equation ( [ eq - obslik1 ] ) .the posterior distribution is the probability distribution of and , given the observed data , and . because the luminosity function depends on the parameters and , the posterior distribution of and can be used to obtain the probability distribution of , given our observed set of luminosities and redshifts .it is of use to decompose the posterior as ; here we have dropped the explicit conditioning on .this decomposition separates the posterior into the conditional posterior of the lf normalization at a given , , from the marginal posterior of the lf shape , . in this work we assume that and are independent in their prior distribution , , and that the prior on is uniform over .a uniform prior on corresponds to a prior distribution on of , as . under this prior , one can show that the marginal posterior probability distribution of is ^{-n } \prod_{i \in { \cal a}_{obs } } p(l_i , z_i|\theta ) , \label{eq - thetapost}\ ] ] where .we derive equation ( [ eq - thetapost ] ) in appendix [ a - margpost_deriv ] .
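to see how much the distinction matters in practice , the sketch below compares the n - dependent parts of the binomial likelihood of equation ( [ eq - obslik1 ] ) and of a poisson - type likelihood ( assumed here in the standard form with expected number of detections equal to n times the detection probability ) on a grid of total source counts , for a deep and a shallow hypothetical survey .  terms that do not depend on the total number of sources are dropped , and all numbers are illustrative .

```python
import numpy as np
from scipy.special import gammaln

def logL_binomial(N, n, p_det):
    # N-dependent part of the binomial likelihood: binomial coefficient plus the
    # probability of the N - n sources missed by the survey
    return gammaln(N + 1.0) - gammaln(N - n + 1.0) + (N - n) * np.log1p(-p_det)

def logL_poisson(N, n, p_det):
    # N-dependent part of a Poisson-type log-likelihood with lambda = N * p_det
    # expected detections (an assumed, standard form)
    return -N * p_det + n * np.log(N)

n = 500                                          # illustrative number of detected sources
for p_det in (0.5, 0.01):                        # "deep" vs "shallow" hypothetical surveys
    Ngrid = np.arange(n, int(5 * n / p_det), dtype=float)
    for name, f in (("binomial", logL_binomial), ("poisson ", logL_poisson)):
        ll = f(Ngrid, n, p_det)
        Nhat = int(Ngrid[np.argmax(ll)])
        width = int((ll > ll.max() - 2.0).sum())  # crude width of a 2-unit log-likelihood interval
        print(f"p(I=1)={p_det:5.2f}  {name}:  N_hat={Nhat:6d}  interval width ~ {width}")
```

both likelihoods peak near n divided by the detection probability , but for the deep survey the poisson interval in the total source count is noticeably wider than the binomial one , while for the shallow survey the two are essentially indistinguishable , in line with the discussion above .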
under the assumption of a uniform prior on , equation ( [ eq - thetapost ] ) is equivalent to equation ( 22 ) in , who use a different derivation to arrive at a similar result . under the prior ,the conditional posterior distribution of at a given is a negative binomial distribution with parameters and .the negative binomial distribution gives the probability that the total number of sources in the universe is equal to , given that we have observed sources in our sample with probability of inclusion : ^n \left[p(i=0|\theta)\right]^{n - n}. \label{eq - npost}\ ] ] here , is given by equation ( [ eq - selprob ] ) and .further description of the negative binomial distribution is given in [ a - densities ] .the complete joint posterior distribution of and is then the product of equations ( [ eq - thetapost ] ) and ( [ eq - npost ] ) , . because it is common to fit a luminosity function with a large number of parameters , it is computationally intractable to directly calculate the posterior distribution from equations ( [ eq - thetapost ] ) and ( [ eq - npost ] ) . in particular ,the number of grid points needed to calculate the posterior will scale exponentially with the number of parameters .similarly , the number of integrals needed to calculate the marginal posterior probability distribution of a single parameters will also increase exponentially with the number of parameters . instead, bayesian inference is most easily performed by simulating random draws of and from their posterior probability distribution .based on the decomposition , we can obtain random draws of from the posterior by first drawing values of from equation ( [ eq - thetapost ] ) .then , for each draw of , we draw a value of from the negative binomial distribution .the values of and can then be used to compute the values of luminosity function via equation ( [ eq - phiconvert ] ) .the values of the lf computed from the random draws of and are then treated as a random draw from the probability distribution of the lf , given the observed data .these random draws can be used to estimate posterior means and variances , confidence intervals , and histogram estimates of the marginal distributions .random draws for may be obtained via markov chain monte carlo ( mcmc ) methods , described in [ s - mha ] , and we describe in [ a - densities ] how to obtain random draws from the negative binomial distribution . in [ s - simmcmc ] we give more details on using random draws from the posterior to perform statistical inference on the lf . before moving to more advanced models , we illustrate the bayesian approach by applying it to a simulated set of luminosities drawn from a schechter function .we do this to give an example of how to calculate the posterior distribution , how to obtain random draws from the posterior and use these random draws to draw scientific conclusions based on the data , and to compare the bayesian approach with the maximum - likelihood approach ( see [ s - schechter2 ] ) .the schechter luminosity function is : for simplicity , we ignore a dependence .the schechter function is equivalent to a gamma distribution with shape parameter , and scale parameter .note that and ; otherwise the integral of equation ( [ eq - schechter ] ) may be negative or become infinite . for our simulation , we randomly draw galaxy luminosities from equation ( [ eq - schechter ] ) using a value of and . 
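as a compact illustration of the recipe just outlined — draw the shape parameters with mcmc , draw the total source count from the negative binomial conditional posterior , and map each pair to an lf — the sketch below runs a minimal random - walk metropolis - hastings sampler on a toy truncated schechter / gamma sample .  the flat prior on ( alpha , log l * ) , the proposal widths , the hard detection limit and all numerical values are assumptions for illustration ; the paper 's own mha , described in [ s - mha ] , is more elaborate .

```python
import numpy as np
from scipy.special import gammaln, gammaincc

rng = np.random.default_rng(4)

# ---- simulate a toy sample: gamma/Schechter-shaped luminosities with a hard lower limit ----
alpha_true, Lstar_true, L_lim = 1.0, 1.0, 0.5      # illustrative values, not the paper's
L_all = rng.gamma(shape=alpha_true + 1.0, scale=Lstar_true, size=5_000)
L_obs = L_all[L_all > L_lim]
n = L_obs.size

def log_marg_post(alpha, logLstar):
    """log of the marginal posterior of theta, in the spirit of eq. (eq-thetapost), for the
    Schechter/gamma shape with a hard detection limit and a flat prior on (alpha, log L*)."""
    Lstar = np.exp(logLstar)
    if alpha <= -1.0:
        return -np.inf
    p_det = gammaincc(alpha + 1.0, L_lim / Lstar)   # p(I=1|theta) for the hard limit
    if p_det <= 0.0:
        return -np.inf
    loglik = (np.sum(alpha * np.log(L_obs / Lstar) - L_obs / Lstar)
              - n * (np.log(Lstar) + gammaln(alpha + 1.0)))
    return loglik - n * np.log(p_det)

# ---- random-walk Metropolis-Hastings for theta = (alpha, log L*) ----
theta = np.array([0.3, 0.2])
lp = log_marg_post(*theta)
draws = []
for step in range(15_000):
    prop = theta + rng.normal(0.0, 0.05, size=2)
    lp_prop = log_marg_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta.copy())
draws = np.array(draws)[5_000:]                     # discard burn-in

# ---- for each retained theta, draw N from the negative binomial conditional posterior ----
# numpy's negative_binomial returns the number of missed sources, so N = n + missed.
p_det_draws = gammaincc(draws[:, 0] + 1.0, L_lim / np.exp(draws[:, 1]))
N_draws = n + rng.negative_binomial(n, p_det_draws)

print("posterior mean alpha =", draws[:, 0].mean(),
      " L* =", np.exp(draws[:, 1]).mean(),
      " median N =", int(np.median(N_draws)), "(true N = 5000)")
```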
to illustrate how the results depend on the detection limit , we placed two different detection limits on our simulated survey .the first limit was at and the second was at .we used a hard detection limit , where all sources above were detected and all sources below were not : and .note that the first detection limit lies below , while the second detection limit lies above .we were able to detect sources for the first limit and sources for the second .the marginal posterior distribution of and can be calculated by inserting into equation ( [ eq - thetapost ] ) an assumed prior probability distribution , , and the likelihood function , . because we are ignoring redshift in our example ,the likelihood function is simply . in this example , we assume a uniform prior on and , and therefore . from equations ( [ eq - thetapost ] ) and ( [ eq - schechter ] ) , the marginal posterior distribution of the parameters is ^{-n } \prod_{i=1}^n \frac{1}{l^ * \gamma(\alpha + 1 ) } \left ( \frac{l_i}{l^ * } \right)^{\alpha } e^{-l_i / l^ * } , \label{eq - schechpost}\ ] ] where the survey detection probability is .the conditional posterior distribution of at a given is given by inserting in equation ( [ eq - schechdet ] ) into equation ( [ eq - npost ] ) , and the joint posterior of and is obtained by multiplying equation ( [ eq - schechpost ] ) by equation ( [ eq - npost ] ) .we perform statistical inference on the lf by obtaining random draws from the posterior distribution . in order to calculate the marginal posterior distributions , and , we would need to numerically integrate the posterior distribution over the other two parameters .for example , in order to calculate the marginal posterior of , , we would need to integrate over and on a grid of values for . while feasible for the simple 3-dimensional problem illustrated here , it is faster to simply obtain a random draw of and from the posterior , and then use a histogram to estimate .further details are given in [ s - simmcmc ] on performing bayesian inference using random draws from the posterior .we used the metropolis - hastings algorithm described in [ s - mha_schechter ] to obtain a random draw of , and from the posterior probability distribution .the result was a set of random draws from the posterior probability distribution of and . in figure [ f - schechpost ] we show the estimated posterior distribution of , and for both detection limits . while is fairly well constrained for both detection limits , the uncertainties on and are highly sensitive to whether the detection limit lies above or below . in addition , the uncertainties on these parameters are not gaussian , as is often assumed for the mle .( figure captions : posterior distributions of the schechter parameters and of the lf for the simulated sample described in [ s - schechter ] , shown for both survey luminosity limits ; the vertical lines mark the true parameter values . ) then , calculate the quantity where is the floor function , i.e.
, denotes the greatest integer less than or equal to .the quantity will then follow a negative binomial distribution with parameters and .the dirichlet distribution is a multivariate generalization of the beta distribution , and it is commonly used when modeling group proportions .dirichlet random variables are constrained to be positive and sum to one .the dirichlet distribution with argument and parameters is given by to draw a random value from a dirichlet distribution with parameters , first draw independently from gamma distributions with shape parameters and common scale parameter equal to one .then , set .the set of will then follow a dirichlet distribution .the student- distribution is often used as a robust alternative to the normal distribution because it is more heavily tailed than the normal distribution , and therefore reduces the effect of outliers on statistical analysis . a distribution with degree of freedom is referred to as a cauchy distribution , and it is functionally equivalent to a lorentzian function . a -dimensional multivariate distribution with -dimensional argument , -dimensional mean vector , scale matrix , and degrees of freedom is given by ^{-(\nu + p)/2}. \label{eq - tdist}\ ] ] the 1-dimensional distribution is obtained by replacing matrix and vector operations in equation ( [ eq - tdist ] ) with scalar operations .although we do not simulate from a distribution in this work , for completeness we include how to do so . to simulate a random vector from a multivariate distribution with mean vector , scale matrix , and degrees of freedom , first draw from a zero mean multivariate normal distribution with covariance matrix .then , draw from a chi - square distribution with degrees of freedom , and compute the quantity . the quantity is then distributed according to the multivariate distribution .the wishart distribution describes the distribution of the sample covariance matrix , given the population covariance matrix , for data drawn from a multivariate normal distribution .conversely , the inverse wishart distribution describes the distribution of the population covariance matrix , given the sample covariance matrix , when the data are drawn from a multivariate normal distribution .the wishart distribution can be thought of as a multivariate extension of the distribution . a wishart distribution with argument , scale matrix , and degrees of freedom is given by ^{-1 } |\sigma|^{-\nu/2 } |s|^{(\nu - p - 1 ) / 2 } \exp\left \ { -\frac{1}{2 } tr ( \sigma^{-1 } s ) \right \ } , \label{eq - wishart}\ ] ] where the matrices and are constrained to be positive definite .an inverse wishart distribution with argument , scale matrix , and degrees of freedom is ^{-1 } |s|^{\nu/2 } |\sigma|^{-(\nu + p + 1 ) / 2 } \exp\left\{-\frac{1}{2 } tr(\sigma^{-1 } s)\right\ } , \label{eq - invwishart}\ ] ] where the matrices and are constrained to be positive definite . to draw a random matrix from a wishart distribution with scale matrix and degrees of freedom ,first draw from a zero mean multivariate normal distribution with covariance matrix .then , calculate the sum .the quantity is then a random draw from a wishart distribution .note that this technique only works when .a random draw from the inverse wishart distribution with scale matrix and degrees of freedom may be obtained by first obtaining a random draw from a wishart distribution with scale matrix and degrees of freedom .the quantity will then follow an inverse wishart distribution .avni , y. , & bahcall , j. n. 
1980 , , 235 , 694 babbedge , t. s. r. , et al .2006 , , 370 , 1159 barger , a. j. , cowie , l. l. , mushotzky , r. f. , yang , y. , wang , w .- h . , steffen , a. t. , & capak , p. 2005, , 129 , 578 blanton , m. r. , et al .2003 , , 592 , 819 bower , r. g. , benson , a. j. , malbon , r. , helly , j. c. , frenk , c. s. , baugh , c. m. , cole , s. , & lacey , c. g. 2006 , , 370 , 645 brown , m. j. i. , dey , a. , jannuzi , b. t. , brand , k. , benson , a. j. , brodwin , m. , croton , d. j. , & eisenhardt , p. r. 2007 , , 654 , 858 budavri , t. , et al .2005 , , 619 , l31 cao , x. , & xu , y .- d . 2007 , , 377 , 425 chib , s. , & greenberg , e. 1995 , amer .stat . , 49 , 327 cirasuolo , m. , et al .2007 , , 380 , 585 croom , s. m. , smith , r. j. , boyle , b. j. , shanks , t. , miller , l. , outram , p. j. , & loaring , n. s. 2004 , , 349 , 1397 croton , d. j. , et al .2005 , , 356 , 1155 dahlen , t. , mobasher , b. , somerville , r. s. , moustakas , l. a. , dickinson , m. , ferguson , h. c. , & giavalisco , m. 2005 , , 631 , 126 davison , a. c. , & hinkley , d. v. 1997 , bootstrap methods and their application ( cambridge : cambridge university press ) dellaportas , p. , & papageorgiou , i. 2006 , stat .comput . , 16 , 57 efron , b. 1987 , j. americ .assoc . , 82 , 171 efron , b. , & petrosian , v. 1992 , , 399 , 345 faber , s. m. , et al .2007 , , 665 , 265 fan , x. , et al .2001 , , 121 , 54 fan , x. , et al .2006 , , 131 , 1203 finlator , k. , dav , r. , papovich , c. , & hernquist , l. 2006 , , 639 , 672 gelman , a. , carlin , j. b. , stern , h. s. , & rubin , d. b. 2004 , bayesian data analysis ( 2nd ed . ;boca raton : chapman & hall / crc ) gelman , a. , meng , x. l. , & stern , h. s. 1998 , statistica sinica , 6 , 733 gelman , a. , roberts , g. , & gilks , w. 1995 , in bayesian statistics 5 , ed . j. m. bernardo , j. o. berger , a. p. dawid , & a. f. m. smith ( oxford : oxford university press ) , 599 hao , l. , et al .2005 , , 129 , 1795 harsono , d. , & de propris , r. 2007 , , 380 , 1036 hastings , w. k. 1970 , biometrika , 57 , 97 ho , l. c. 2002 , , 564 , 120 hopkins , p. f. , hernquist , l. , cox , t. j. , di matteo , t. , robertson , b. , & springel , v. 2006 , , 163 , 1 hopkins , p. f. , narayan , r. , & hernquist , l. 2006 , , 643 , 641 hopkins , p. f. , richards , g. t. , & hernquist , l. 2007 , , 654 , 731 hoyle , f. , rojas , r. r. , vogeley , m. s. , & brinkmann , j. 2005 , , 620 , 618 huynh , m. t. , frayer , d. t. , mobasher , b. , dickinson , m. , chary , r .-r . , & morrison , g. 2007 , , 667 , l9 jasra , a. , holmes , c.c . , & stephens , d.a . 2005 , statistical science , 20 , 50 jester , s. 2005 , , 625 , 667 jiang , l. , et al . 2006 , , 131 , 2788 kelly , b. c. 2007 , , 665 , 1489 kim , d .- w . , et al .2006 , , 652 , 1090 la franca , f. , et al . 2005 , , 635 , 864 lauer , t. r. , et al .2007 , , 662 , 808 lin , y .- t . , & mohr , j. j. 2007 , , 170 , 71 lynden - bell , d. 1971 , , 155 , 95 magorrian , j. , et al .1998 , , 115 , 2285 maloney , a. , & petrosian , v. 1999 , , 518 , 32 marchesini , d. , celotti , a. , & ferrarese , l. 2004 , , 351 , 733 marchesini , d. , et al .2007 , , 656 , 42 marchesini , d. , & van dokkum , p. g. 2007 , , 663 , l89 marconi , a. , risaliti , g. , gilli , r. , hunt , l. k. , maiolino , r. , & salvati , m. 2004 , , 351 , 169 marshall , h. l. , tananbaum , h. , avni , y. , & zamorani , g. 1983 , , 269 , 35 matute , i. , la franca , f. , pozzi , f. , gruppioni , c. , lari , c. , & zamorani , g. 2006 , , 451 , 443 mauch , t. 
, & sadler , e. m. 2007 , , 375 , 931 merloni , a. 2004 , , 353 , 1035 metropolis , n. , & ulam , s. 1949 , j. amer .assoc . , 44 , 335 metropolis , n. , rosenbluth , a. w. , rosenbluth , m. n. , teller , a. h. , & teller , e. 1953 , j. chem .phys . , 21 , 1087 nakamura , o. , fukugita , m. , yasuda , n. , loveday , j. , brinkmann , j. , schneider , d. p. , shimasaku , k. , & subbarao , m. 2003 , , 125 , 1682 neal , r. m. 1996 , statistics and computing , 6 , 353 page , m. j. , & carrera , f. j. 2000 , , 311 , 433 paltani , s. , et al . 2007 , , 463 , 873 press , w. h. , & schechter , p. 1974, , 187 , 425 popesso , p. , biviano , a. , bhringer , h. , & romaniello , m. 2006 , , 445 , 29 ptak , a. , mobasher , b. , hornschemeier , a. , bauer , f. , & norman , c. 2007 , , 667 , 826 richards , g. t. , et al .2006 , , 131 , 2766 richardson , s. , & green , p. j. 1997 , j. roy .b , 59 , 731 roeder , k. , & wasserman , l. 1997 , j. amer .assoc . , 92 , 894 rubin , d. b. 1981 , j. educational statistics , 6 , 377 rubin , d. b. 1984 , annals of statistics , 12 , 1151 scarlata , c. , et al .2007 , , 172 , 406 schafer , c. m. 2007 , , 661 , 703 schechter , p. 1976, , 203 , 297 schneider , d. p. , et al . 2005 , , 130 , 367 soltan , a. 1982 , , 200 , 115 spergel , d. n. , et al .2003 , , 148 , 175 steffen , a. t. , barger , a. j. , cowie , l. l. , mushotzky , r. f. , & yang , y. 2003 , , 596 , l23 schmidt , m. 1968 , , 151 , 393 ueda , y. , akiyama , m. , ohta , k. , & miyaji , t. 2003 , , 598 , 886 waddington , i. , dunlop , j. s. , peacock , j. a. , & windhorst , r. a. 2001 , , 328 , 882 willott , c. j. , rawlings , s. , blundell , k. m. , lacy , m. , & eales , s. a. 2001 , , 322 , 536 wolf , c. , wisotzki , l. , borch , a. , dye , s. , kleinheinrich , m. , & meisenheimer , k. 2003 , , 408 , 499 wyithe , j. s. b. , & loeb , a. 2003 , , 595 , 614 yu , q. , & tremaine , s. 2002 , , 335 , 965
we describe a bayesian approach to estimating luminosity functions . we derive the likelihood function and posterior probability distribution for the luminosity function , given the observed data , and we compare the bayesian approach with maximum - likelihood by simulating sources from a schechter function . for our simulations confidence intervals derived from bootstrapping the maximum - likelihood estimate can be too narrow , while confidence intervals derived from the bayesian approach are valid . we develop our statistical approach for a flexible model where the luminosity function is modeled as a mixture of gaussian functions . statistical inference is performed using markov chain monte carlo ( mcmc ) methods , and we describe a metropolis - hastings algorithm to perform the mcmc . the mcmc simulates random draws from the probability distribution of the luminosity function parameters , given the data , and we use a simulated data set to show how these random draws may be used to estimate the probability distribution for the luminosity function . in addition , we show how the mcmc output may be used to estimate the probability distribution of any quantities derived from the luminosity function , such as the peak in the space density of quasars . the bayesian method we develop has the advantage that it is able to place accurate constraints on the luminosity function even beyond the survey detection limits , and that it provides a natural way of estimating the probability distribution of any quantities derived from the luminosity function , including those that rely on information beyond the survey detection limits .
high precision synchronization of clocks plays an important role in modern society and scientific research ; examples include navigation , global positioning , tests of general relativity theory , long baseline interferometry in radio astronomy , as well as gravitational wave observation .two standard classical protocols for clock synchronization are einstein s synchronization scheme and eddington s slow clock transfer .the former requires operational exchange of light pulses between the distant clocks and the latter is based on sending a locally synchronized clock from one part to other parts .recently , quantum strategies have been exploited to improve the accuracy of clock synchronization . a few quantum clock synchronization ( qcs ) proposals and experiments are reported .it is shown that the schemes based on quantum mechanics can gain significant improvements in precision over their classical counterparts .the main idea in most of the qcs proposals is employing quantum entanglement and state squeezing to achieve an enhancement of the precision . on the one hand , several satellite - based quantum optics experiments are feasible with current technology , such as satellite quantum communication , and quantum tagging , as well as gravity probes using beam interferometers and atomic clocks to test the principle of equivalence . among these experiments ,the synchronization of clocks between a satellite and a ground station is an essential step .the global positioning system ( gps ) with synchronized clocks is a satellite - based navigation system .there are growing needs to advance this system with higher accuracy for military , civil and commercial reasons , and also for fundamental research such as the search for dark matter .time signals transferred by optical technology may be necessary to meet the demands on precision . besides , a satellite - based quantum network of clocks is promising to act as a single world clock with unprecedented stability and accuracy approaching the limit set by quantum mechanics , and there is also a security advantage . on the other hand , the feasibility of satellite - based qcs has to be examined , and time dilation is a concern because of relativistic effects of the earth on the qcs .the influence of relativistic effects on quantum systems is a focus of study in recent years because such studies provide insights into some key questions in quantum mechanics and relativity , such as nonlocality , causality , and the information paradox of black holes .relativistic effects are particularly significant for the quantum versions of the eddington scheme because one must assume that the transfer is performed `` adiabatically slowly '' and the spacetime is flat such that relativistic effects are negligible .it is demonstrated experimentally that the gravitational frequency shift ( gfs ) effects remarkably influence the running of current atomic clocks . however , time dilation induced by earth s spacetime curvature is experimentally observed for a change in height of m and thus can not be neglected . in this paper we propose a practical scheme for satellite - based qcs .we let two observers , alice and bob , exchange two frequency entangled pulses between a ground station and a satellite .the influence of gravitational red - shift on the frequency of a pulse can be eliminated by an opposite gravitational blue - shift . by assuming the clocks have the same precision , clock synchronization can be realized by identifying the time discrepancies .
in our scheme ,the time discrepancies are introduced through adding or subtracting optical path differences into the pulses and the coincidence rate of the pulses in the interferometer as a function of the time discrepancy between the clocks is measured . in actual satellite - based quantum information processing tasks , and similarly in protocols of qcs , the main errors are induced by photon - loss and the dispersion effects of the atmosphere through which the pulses travel .therefore , we employ frequency entangled light , instead of the entangled n00n state which is vulnerable to photon loss , as well as the dispersion cancellation technology to eliminate the influence of the atmospheric scattering .we find that the coincidence rate of interferometry is remarkably affected by the spacetime curvature of the earth .we also find that the precision of the clock synchronization is sensitive to the light source parameters . by using this quantum scheme, we can reach a high accuracy for clock synchronization .the outline of the paper is as follows . in sec .ii we briefly introduce the sketch of the experimental setup . in sec .iii we discuss how the earth s spacetime affects the propagation of photons . in sec .iv we study the feasibility of satellite - based qcs and how the effect of the earth s gravity will disturb it . in the last section we discuss the experimental feasibility of our scheme and give a brief summary .the sketch of our proposal for satellite - based qcs is described in fig .( [ f : experiment ] ) .the quantum optical technology of our proposal is based on the hong , ou , and mandel ( hom ) interferometer .we assume that alice works on the surface of the earth ( ) with her own clock , while bob works on a satellite at constant radius .the clocks have the same accuracy , thus the clock synchronization problem is reduced to the problem of identifying the time discrepancy between the clocks .alice s clock has been synchronized with a standard clock but bob s clock has not . alice sends two frequency entangled beams produced by a parametric down converter crystal ( pdc ) source to bob and bob bounces them back to alice again .those two pulses are named signal beam and idler beam , respectively . by exchanging entangled beams between alice and bob , a `` conveyor belt '' for time information is established . after propagating through different optical paths ,the signal and idler beams are interfered at the 50/50 beam splitter and measured by the detectors . to introduce time information into the beams , alice and bob use moving mirrors with constant speed to add or subtract optical path differences ( opd ) to the beams .alice and bob come to an agreement on the starting time of their mirrors in advance .since they do not have a synchronized clock to start with , they can only start the mirrors at time relative to the time readings of their own clock , which are different due to different locations . as described in fig .( 1 ) , alice at the ground station starts a moving mirror to _ add _ an opd to the idler beam and to _ subtract _ the same amount of opd from the signal beam which was bounced back from bob . at the satellite bob _ subtracts _ an opd from the idler beam and _ adds _ an identical opd to the signal beam . the linear time dependent opds are given by where and are the starting times ( proper time ) as measured by alice s and bob s clocks , respectively .
in eq .( [ delayt ] ) is the time point ( coordinate time ) of the coincidence detection of the signal and idler photons , i.e. , the time when the photons quantum state is measured . by assuming the quantum state is instantaneously collapsed by the measurements made on the surface of the earth , we can agree that the collapsing time is identical , even if alice s and bob s reading times are different at this moment . from eq .( [ delayt ] ) we can see that the delays are proportional to the time interval between the starting time for the moving of the mirrors and the time when the photons are detected .if the proportionality constant in eq . ( [ delayt ] ) and the starting time reading on alice s and bob s clocks are identical , the quantity of opd at alice s point will be _ zero _ after an exchange period .therefore , the final opd is totally produced from the starting time discrepancy between alice s clock and bob s clock .then the signal and idler pulses are interfered at the beam splitter ( bs ) and trigger clicks at the detectors .we will show in section iv that the final difference of optical path lengths is affected by a factor depending on the starting time discrepancy . by measuring the photon coincidence rate at the output ports 1 and 2 of the beam splitter , one may acquire very precise information on the opd in the two arms .thus , it is sufficient to measure the photon coincidence rate to recover the exact time discrepancy between alice s clock and bob s clock . then alice tells bob by classical communication to adjust his clock according to the time discrepancy . by using this scheme , alice and bob may thus synchronize their clocks , and check how much the accuracy is disturbed by the gravity induced spacetime curvature of the earth , and inversely they can precisely measure the curvature via an atomic clock , which is the most accurate setup currently available in the world .now we describe the propagation of photons from the earth to a satellite by taking the gravity of the earth into consideration .the earth s spacetime curvature will influence the light pulses during their propagation between the ground station and the satellite .we know that the earth rotates slowly with an angular velocity at the equator of or at a linear speed of , which is much slower than the speed of light .
therefore , the schwarzschild metric is a sufficient approximation for the earth s spacetime , as has been discussed in .the schwarzschild metric is given by where is the earth s schwarzschild radius , is the mass of the earth , is the speed of light in vacuo , and is the gravitational constant .alice and bob should move with constant acceleration to overcome the gravitational potential and to remain at a constant distance from the earth .we also assume that exchange of pulses are performed only when the satellite is just above the ground station .considering further the earth s schwarzschild radius is much smaller than its radius and the size of the optical source is small compared to the characteristic distances involved , we can consider only radial propagations .a photon can be properly modeled by a wave packet of electromagnetic fields with a distribution of modes peaked around the frequencies , where labels either alice or bob .the annihilation operator for a photon from the point of view of alice or bob takes the form where is the physical frequencies as measured in their labs .the proper times are related to the schwarzschild coordinate time by .the creation and annihilation operators of photons satisfy =\delta(\omega_k-\omega^{'}_k) ] and ] , where is the gaussian width .the wave packet overlaps and are found to be }}\label{final : result},\end{aligned}\ ] ] where and the signs occur for or . in eq .( [ fintt ] ) , we define and is assumed. the modes will be perfectly overlapped ( ) when alice and bob are in a flat spacetime .this means that gravitational effect of the earth plays no role in the wave packet overlap when alice and bob are at the same height.for the typical resources used in quantum optics experiments , the relation should be satisfied , which yields .then we find that the coincidence rate has the form \;\label{gauss}.\end{aligned}\ ] ] we can see that the coincidence rate has the factor compared to that of the flat spacetime case , where .we define the effect of spacetime curvature on the accuracy of clock synchronization as the relative disturbance of coincidence rate it is now clear that the relative disturbance of coincidence rate depends on the spacetime parameter and the characteristics of the pdc source . as a function of the altitude of the satellite .the parameters of the pdc light source are fixed as and . ] in fig .( [ fig2 ] ) , we plot the relative disturbance of the earth s spacetime curvature on the coincidence rate as a function of the distance between the satellite and the earth s core for the fixed light source parameters . it is shown that increases as the distance increases , i.e. , the accuracy of clock synchronization depends on the altitude of the satellite , which also verifies that the spacetime curvature remarkably influences the running of the atomic clocks .this result is very different from that of ref . , in which the coincidence rate is independent of the distance between alice and bob when the spacetime curvature of the earth is not considered . in fig .( [ fig3 ] ) , we plot the relative disturbance over the peak frequency and bandwidth of the pdc .it is shown that the disturbance of accuracy depends sensitively on the bandwidth of the source , which is similar to the flat spacetime case .however , here we find that the disturbance on accuracy also depends on the peak frequency of the pulses , which is different from that of where the accuracy is independent of the peak frequency . 
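to give a sense of the scale of the curvature parameter entering these expressions , the sketch below evaluates the standard schwarzschild frequency - shift factor for radial ground - to - satellite propagation .  this is textbook physics rather than the paper 's full wave - packet overlap formula , and the orbit altitudes are nominal values .

```python
import numpy as np

# Standard gravitational frequency shift of a photon sent radially from radius r_A (ground)
# to r_B (satellite) in the Schwarzschild metric; used only to illustrate the size of the
# effect that enters the wave-packet overlap.
G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # kg, mass of the Earth
c = 2.998e8          # m/s
r_s = 2 * G * M / c**2
print(f"Earth Schwarzschild radius r_s = {r_s * 1e3:.2f} mm")

def shift_factor(r_A, r_B):
    """omega_received / omega_emitted for emission at r_A and reception at r_B."""
    return np.sqrt((1 - r_s / r_A) / (1 - r_s / r_B))

r_earth = 6.371e6
for h, label in [(4.0e5, "LEO (~400 km)"), (3.6e7, "GEO (~36000 km)")]:
    f = shift_factor(r_earth, r_earth + h)
    print(f"{label}: fractional frequency shift 1 - omega_B/omega_A = {1 - f:.3e}")
```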
in this paper we are particularly interested in two typical cases , in which the qcs are performed between the ground station and either a low earth orbit satellite ( leo ) , or a geostationary earth orbit ( geo ) satellite , respectively . as a function of the source parameters and for the fixed distance m ( leo ) . ] _ the leo case : _ the typical distance from the earth to a leo satellite is about km , which yields m and m .considering that the schwarzschild radius of the earth is mm , it is found that .we employ a typical pdc source with a wavelength of 369.5 nm ( corresponding to ) and ( the relation is satisfied ) .a light source with such peak frequency and bandwidth is available , for example , in trapped ion experiments .the relative disturbance of the spacetime curvature on the coincidence rate is obtained as .the achievement of an optical lattice clock with accuracy at the level has been reported in refs .if we would like to synchronize two clocks up to a time discrepancy of ( as in , or a much lower level as presented in ) , the correction of the earth s spacetime curvature effect will reach during a single synchronization process .such a correction is comparable to the accuracy of the atomic clocks and thus should be considered for the qcs between clocks in future satellite - based applications .therefore , we can safely arrive at the conclusion that the spacetime curvature is _ not negligible _ when the synchronization is performed by leo satellites ._ the geo case : _ the typical distance between a geo satellite and the ground is about m. therefore , the distance between the earth and the satellite is about m , which yields . in this case the relative disturbance of the spacetime curvature on the coincidence rate is .we find that the disturbance of the spacetime curvature on the coincidence rate for the geo satellites becomes even _ more remarkable _ than that of the leo case .we remark that the current gps satellites have m . in the qcs scheme ,the spacetime curvature is also remarkable . _ error analysis : _ it is worth mentioning that the velocity variations of the moving mirrors may induce some errors on the coincidence rate .however , the order of magnitude of the movement speed of the mirrors is much smaller than the velocity of light , let alone the velocity variation of the mirrors .to be specific , the typical velocity of the moving mirrors is _ m / s _ . let us suppose that this velocity has one percent variation , say _ m / s _ , which is much smaller than the velocity of light .note that the relative disturbance of the spacetime curvature on the coincidence rate is on the order of for the leo satellites and of the order of for the geo satellites , which are at least 3 orders of magnitude larger than that of the velocity variations of the mirrors .therefore , the systematic errors induced by the velocity variation of the mirrors can be safely ignored in the scheme .we have proposed a practical satellite - based qcs scheme with the advantages of dispersion cancellation and the robust frequency entangled pulses of light by taking the effects of the spacetime curvature of the earth into consideration .the spacetime background of the earth is described by the schwarzschild metric , and the quantum optics part of our proposal is based on the hom interferometer .
by eliminating the gravitational redshift and blueshift of the laser pulses and the atmospheric dispersion cancellation , the accuracy of clock synchronization in our quantum scheme can be very high by showing that is close to unity .our proposal can be implemented , in principle , with current available technologies . to be specific, optical sources with the required peak frequency and bandwidth have been achieved by the trapped ion experiments .the feasibility of photon exchanges between a satellite and a ground station has been experimentally demonstrated by the matera laser ranging observatory ( mlro ) in italy .most recently , they have reported the operation of experimental satellite quantum communications by sending selected satellites laser pulses . in this paper , we have discussed how the transferred pulses will react to the spacetime curvature of the earth encountered en route , which changes their coincidence rate .the effect of spacetime curvature on the time discrepancy is evaluated in terms of the coincidence rate of the pulses .it is shown that the coincidence rate of pulses is sensitive to the source parameters and the distance between the earth and the satellite.our scheme can also be generalized to the quantum clock network cases .the results should be significant both for determining the accuracy of clock synchronization and for our general understanding of time discrepancy in future satellite - based quantum systems .this work is supported by the national natural science foundation of china under grants no .11305058 , no .11175248 , no . 11475061 , the doctoral scientific fund project of the ministry of education of china under grants no .20134306120003 , postdoctoral science foundation of china under grants no .2014m560129 , no .2015t80146 and the strategic priority research program of the chinese academy of sciences ( under grant no .xdb01010000 ) .
clock synchronization between the ground and satellites is a fundamental issue in future quantum telecommunication , navigation , and global positioning systems . here , we propose a scheme of near - earth orbit satellite - based quantum clock synchronization with atmospheric dispersion cancellation by taking into account the spacetime background of the earth . two frequency entangled pulses are employed to synchronize two clocks , one at a ground station and the other at a satellite . the time discrepancy of the two clocks is introduced into the pulses by moving mirrors and is extracted by measuring the coincidence rate of the pulses in the interferometer . we find that the pulses are distorted due to effects of gravity when they propagate between the earth and the satellite , resulting in remarkably affected coincidence rates . we also find that the precision of the clock synchronization is sensitive to the source parameters and the altitude of the satellite . the scheme provides a solution for satellite - based quantum clock synchronization with high precision , which can be realized , in principle , with current technology .
the use of objective functions for the formulation of complex systems has seen a steady surge of interest .objective functions , in particular objective functions based on information theoretical principles , are used increasingly as generating functionals for the construction of complex dynamical and cognitive systems .there is then no need to formulate by hand equations of motion , just as it is possible , in analogy , to generate in classical mechanics newton s equation of motion from an appropriate lagrange function .when studying dynamical systems generated from objective functions encoding general principles , one may expect to obtain a deeper understanding of the resulting behavior .the kind of generating functional employed also serves , in addition , to characterize the class of dynamical systems for which the results obtained may be generically valid .here we study the interplay between two generating functionals .the first generating functional is a simple energy functional . minimizing this objective function one generates a neural network with predefined point attractors , the hopfield net .the second generating functional describes the information content of the individual neural firing rates . minimizing this functional results in maximizing the information entropy and in the generation of adaption rules for the intrinsic neural parameters , the threshold and the gain .this principle has been denoted polyhomeostatic optimization , as it involves the optimization of an entire function , the distribution function of the time - averaged neural activities .we show that polyhomeostatic optimization destabilizes all attractors of the hopfield net , turning them into attractor ruins .the resulting dynamical network is an attractor relict network and the dynamics involves sequences of continuously latching transient states .this dynamical state is characterized by trajectories slowing down close to the succession of attractor ruins visited consecutively .the two generating functionals can have incompatible objectives .each generating functional , on its own , would lead to dynamical states with certain average levels of activity .stress is induced when these two target mean activity levels differ .we find that the system responds to objective function stress by resorting to intermittent bursting , with laminar flow interspersed by bursts of transient state latching .dynamical systems functionally equivalent to attractor relict networks have been used widely to formulate dynamics involving on - going sequences of transient states . latching dynamics has been studied in the context of grammar generation with infinite recursion and in the context of reliable sequence generation .transient state latching has also been observed in the brain and may constitute an important component of the internal brain dynamics .this internal brain dynamics is autonomous and ongoing , being modulated , but not driven , by the sensory input .
in this context an attractor relict network has been used to model autonomous neural dynamics in terms of sequences of alternating neural firing patterns .the modulation of this type of internal latching dynamics by a stream of sensory inputs results , via unsupervised learning , in a mapping of objects present in the sensory input stream to the preexisting attractor ruins of the attractor relict network , the associative latching dynamics thus acquiring semantic content .we consider here rate encoding neurons in continuous time , with a non - linear transfer function , where is the membrane potential and ] , which corresponds to the cosine of the angle between the actual neural activity and the attractor state . as a second measure , of how close the actual activity pattern and the original attractors are , we consider the reweighted scalar product ] and ) and the second and the first order lines meet at , where the critical threshold is determined by the self - consistent solution of - 1/2 ] . with denoting the mean pattern activity , when using the hopfield encoding ( [ hopfield_encoding ] ) the attractors are known to correspond , to a close degree , to the stored patterns , as long as the number of patterns is not too large .we here make use of the hopfield encoding for convenience ; no claim is made that memories in the brain are actually stored and defined by ( [ hopfield_encoding ] ) .we consider in the following random binary patterns , as illustrated in fig .[ fig : examplepatterns ] , where is the mean activity level or sparseness .the patterns have in general a finite , albeit small , overlap , as illustrated in fig .[ fig : examplepatterns ] .the target distribution for the intrinsic adaption has an expected mean which can be evaluated noting that the support of is ] .the target mean activity can now differ from the average activity of an attractor relict , the mean pattern activity , compare ( [ def_alpha ] ) .the difference between the two quantities , viz between the two objectives , energy minimization vs. polyhomeostatic optimization , induces stress into the latching dynamics , which we will study in the following . and ( color encoded , see eqs .( [ o_p ] ) and ( [ a_p ] ) ) , for a network , of the neural activities with the attractor ruins , as a function of time .the sparseness of the binary patterns defining the attractor ruins is . here the average activity of the attractor relicts and the target mean activity level have been selected to be equal , resulting in a limiting cycle with clean latching dynamics . ]
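a minimal sketch of the construction discussed here : sparse random binary patterns , a standard covariance - rule ( hopfield - type ) weight matrix used as a stand - in for the paper 's encoding ( [ hopfield_encoding ] ) , and the cosine overlap used as the measure of closeness to an attractor relict .  seven patterns follow the text ; the network size and the sparseness are assumed values .

```python
import numpy as np

rng = np.random.default_rng(5)

N_neurons, N_patterns, sparseness = 100, 7, 0.3   # 7 patterns as in the text; N and sparseness assumed
xi = (rng.random((N_patterns, N_neurons)) < sparseness).astype(float)   # random binary patterns

# stand-in covariance-rule (Hopfield-type) encoding; the paper's exact eq. (hopfield_encoding)
# is not reproduced here
w = np.zeros((N_neurons, N_neurons))
for p in range(N_patterns):
    d = xi[p] - sparseness
    w += np.outer(d, d)
w /= N_neurons
np.fill_diagonal(w, 0.0)                          # no self-couplings

def overlap_cosine(y, pattern):
    """o_p: cosine of the angle between the activity vector and an attractor pattern."""
    return float(y @ pattern / (np.linalg.norm(y) * np.linalg.norm(pattern) + 1e-12))

y = xi[0] + 0.05 * rng.random(N_neurons)          # an activity close to the first stored pattern
print([round(overlap_cosine(y, xi[p]), 2) for p in range(N_patterns)])
```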
in this casethe target mean activity , , of the intrinsic adaption rules ( [ abdot ] ) is consistent with the mean activity of the stored patterns , viz with the mean activity level of the attractor relicts .one observes that the system settles into a limiting cycle with all seven stored patterns becoming successively transiently active , a near - to - perfect latching dynamics .the dynamics is very stable and independent of initial conditions , which were selected randomly .neurons , of the membrane potential , gain , threshold and firing rate for the time series of overlaps presented in fig .[ fig : patternlatchingtimeseries_normal ] . ] in fig .[ fig : neurontimeseries_normal ] we present the time evolution , for 20 out of the neurons , of the respective individual variables .the simulation parameters are identical for figs .[ fig : patternlatchingtimeseries_normal ] and [ fig : neurontimeseries_normal ] . shown in fig .[ fig : neurontimeseries_normal ] are the individual membrane potentials , the firing rates , the gains and the thresholds .the latching activation of the attractor relicts seen in fig . [ fig : patternlatchingtimeseries_normal ] reflects in corresponding transient activations of the respective membrane potentials and firing rates .the oscillations in the thresholds drive the latching dynamics , interestingly , even though the adaption rate is larger for the gain .the synaptic weights are symmetric and consequently also the overlap matrix presented in fig .[ fig : examplepatterns ] .the latching transitions evident in fig . [ fig : patternlatchingtimeseries_normal ] are hence spontaneous in the sense that they are not induced by asymmetries in the weight matrix . we have selected uncorrelated patterns and the chronological order of the transient states is hence determined by small stochastic differences in the pattern overlaps .it would however be possible to consider correlated activity patters incorporating a rudimental grammatical structure , which is however beyond the scope of the present study . and ( color coded , compare eqs .( [ o_p ] ) and ( [ a_p ] ) ) for all binary patterns , with sparseness , for a network of neurons .the target mean neural activity is , the difference between the two objective functions , viz between and induces stress .the intermittent latching dynamics has a mean activity of about , which is too large .the phases of laminar flows between the burst of latching has a reduced average activity , thus reducing the time - averaged mean activity level toward the target . 
in fig .[ fig : patternlatchingtimeseries_bursting ] we present the time evolution of the overlaps and for the case when the two objective functions , the energy functional and the polyhomeostatic optimization , incorporate conflicting targets .we retain the average sparseness for the stored patterns , as for fig .[ fig : patternlatchingtimeseries_normal ] , but reduce the target mean firing rate to .this discrepancy between and induces stress into the dynamics .the pure latching dynamics , as previously observed in fig .[ fig : patternlatchingtimeseries_normal ] , corresponds to a mean activity of about , in conflict with the target value .phases of laminar flow are induced by the objective function stress , and the latching dynamics occurs now in the form of intermittent bursts .the neural activity , see fig .[ fig : patternlatchingtimeseries_bursting ] , is substantially reduced during the laminar flow and the time - averaged mean firing rate is thus reduced towards the target activity level of .the trajectory does not come close to any particular attractor relict during the laminar flow , the overall being close to for all stored patterns .the time evolution is hence segmented into two distinct phases , a slow laminar phase far from any attractor and intermittent phases of bursting activity in which the trajectories linger transiently close to attractor relicts with comparatively fast ( relative to the time scale of the laminar phase ) latching transitions .intermittent bursting dynamics has been observed previously in polyhomeostatically adapting neural networks with randomly selected synaptic weights , the underlying causes had however not been clear . using the concept of competing generating functionals we find that objective function stress is the underlying force driving the system into the intermittent bursting regime .the latching dynamics presented in figs . [ fig : patternlatchingtimeseries_normal ] and [ fig : patternlatchingtimeseries_bursting ] is robust with respect to system size .we did run the simulation for different sizes of networks with up to neurons , a series of sparseness parameters and numbers of stored patterns . as an example we present in fig .[ fig : patterns_latching_time_series_nudles ] the overlap for neurons and binary patterns with a sparseness of .no stress is present ; the target mean activity level is . the latching dynamics is regular ; no intermittent bursting is observed .there is no constraint , generically , to force the limiting cycle to incorporate all attractor relicts .indeed , for the transient state dynamics presented in fig . [ fig : patterns_latching_time_series_nudles ] , some of the stored patterns are never activated . the data presented in figs . [ fig : patternlatchingtimeseries_normal ] , [ fig : patternlatchingtimeseries_bursting ] and [ fig : patterns_latching_time_series_nudles ] is for small numbers of attractor relicts , relative to the system size , a systematic study for large values of is beyond the scope of the present study .latching dynamics tends to break down , generically speaking , when the overlap between distinct attractors , as shown in fig .[ fig : examplepatterns ] , becomes large .the autonomous dynamics then becomes irregular ., of the overlaps for all binary patterns ( vertically displaced ) with sparseness and a target mean activity of .there is no objective - function stress and the latching is regular , the adaption rates are , . ]
]the use of generation functionals has a long tradition in physics in general and in classical mechanics in particular . herewe point out that using several multivariate objective functions may lead to novel dynamical behaviors and an improved understanding of complex systems in general .we propose in particular to employ generating functionals which are multivariate in the sense that they are used to derive the equations of motion for distinct , non - overlapping subsets of dynamical variables . in the present workwe have studied a neural network with fast primary variables , the membrane potentials , and slow secondary variables and , characterizing the internal behavior of individual neurons , here the gains and the thresholds . the time evolution of these sets of interdependent variables is determined respectively by two generating functionals : energy functional : : minimizing an energy functional generates the equation of motion ( leaky integrator ) for the primary dynamical variables , the individual membrane potentials .information theoretical functional : : minimizing the kullback - leibler divergence between the distribution of the time - average neural firing rate and a target distribution function maximizing information entropy generates intrinsic adaption rules and for the gain and the threshold ( polyhomeostatic optimization ) .generating functionals may incorporate certain targets or constraints , either explicitly or implicitly .we denote the interplay between distinct objectives incorporated by competing generating functionals _ `` objective functions stress''_. for the two generating functionals considered in this study there are two types of objective functions stress : functional stress : : the minima of the energy functional are time - independent point attractors leading to firing - rate distributions which are sharply peaked .the target firing - rated distribution for the information - theoretical functional is however smooth ( polyhomeostasis ) .this functional stress leads to the formation of an attractor relict network .scalar stress : : the mean target neural firing rate is a ( scalar ) parameter for the target firing - rate distribution , and hence encoded explicitly within the information theoretical functional .the local minima of the energy functional , determined by the synaptic weights , determine implicitly the mean activity levels of the corresponding point attractors .scalar objective function stress is present for .for the two generating functionals considered , we find that the scalar objective function stress induces a novel dynamical state , characterized by periods of slow laminar flow interseeded by bursts of rapid latching transitions .we propose that objective function stress is a powerful tool , in general , for controlling the behavior of complex dynamical systems .the interplay between distinct objective functions may hence serve as a mechanism for guiding self organization .10 o. sporns , and m. lungarella , evolving coordinated behavior by maximizing information structure .artificial life x : proceedings of the tenth international conference on the simulation and synthesis of living systems , 323329 ( 2006 ) .h. sompolinsky and i. kanter , _ temporal association in asymmetric neural networks_. phys .lett . * 57 * , 2861 ( 1986 ) .m. abeles _ et al ._ , _ cortical activity flips among quasi - stationary states_. pnas * 92 * , 8616 ( 1995 ) .ringach , _ states of mind_. nature * 425 * , 912 ( 2003 ) .j. fiser , c. chiu , and m. 
weliky , _ small modulation of ongoing cortical dynamics by sensory input during natural vision_. nature * 421 * , 573 ( 2004 ) . j.n. maclean , b.o. watson , g.b. aaron , and r. yuste , _ internal dynamics determine the cortical response to thalamic stimulation_. neuron * 48 * , 811 ( 2005 ) . c. gros , _ neural networks with transient state dynamics_. new journal of physics * 9 * , 109 ( 2007 ) .
well characterized sequences of dynamical states play an important role for motor control and associative neural computation in the brain . autonomous dynamics involving sequences of transiently stable states have been termed associative latching in the context of grammar generation . we propose that generating functionals allow for a systematic construction of dynamical networks with well characterized dynamical behavior , such as regular or intermittent bursting latching dynamics . coupling local , slowly adapting variables to an attractor network allows to destabilize all attractors , turning them into attractor ruins . the resulting attractor relict network may show ongoing autonomous latching dynamics . we propose to use two generating functionals for the construction of attractor relict networks . the first functional is a simple hopfield energy functional , known to generate a neural attractor network . the second generating functional , which we denote polyhomeostatic optimization , is based on information - theoretical principles , encoding the information content of the neural firing statistics . polyhomeostatic optimization destabilizes the attractors of the hopfield network inducing latching dynamics . we investigate the influence of stress , in terms of conflicting optimization targets , on the resulting dynamics . objective function stress is absent when the target level for the mean of neural activities is identical for the two generating functionals and the resulting latching dynamics is then found to be regular . objective function stress is present when the respective target activity levels differ , inducing intermittent bursting latching dynamics . we propose that generating functionals may be useful quite generally for the controlled construction of complex dynamical systems .
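the attractor - relict construction summarized above can be illustrated with a small toy simulation : a hopfield - type network whose hebbian weights store a few sparse binary patterns , with leaky - integrator dynamics for the membrane potentials and slow intrinsic adaption of the gains and thresholds pushing the activity toward a target rate . the sketch below ( python / numpy ) is only a schematic stand - in : the hebbian weight rule , the sigmoidal transfer function and the specific adaption rules are assumptions of this illustration , not the update rules derived from the two generating functionals in the study .

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 7                        # neurons, stored binary patterns
sparseness = 0.3                     # fraction of active sites per pattern
patterns = (rng.random((P, N)) < sparseness).astype(float)

# hebbian-style weights storing the patterns (no self-couplings)
xi = patterns - sparseness
W = xi.T @ xi / N
np.fill_diagonal(W, 0.0)

dt, steps = 0.1, 20000
gamma = 0.1                          # leak rate of the membrane potentials
eps_b, eps_a = 0.01, 0.002           # slow intrinsic adaption rates (threshold, gain)
y_target = 0.3                       # target mean firing rate

x = rng.normal(0.0, 0.1, N)          # membrane potentials
a = np.ones(N) * 5.0                 # gains
b = np.zeros(N)                      # thresholds

overlaps = np.zeros((steps, P))
for t in range(steps):
    y = 1.0 / (1.0 + np.exp(-a * (x - b)))     # firing rates
    x += dt * (-gamma * x + W @ y)             # leaky-integrator dynamics
    # stand-in intrinsic adaption: thresholds track activity relative to the
    # target rate, gains relax slowly toward a reference value
    b += dt * eps_b * (y - y_target)
    a += dt * eps_a * (5.0 - a)
    overlaps[t] = patterns @ y / patterns.sum(axis=1)   # normalized pattern overlaps

print("final mean firing rate:", float(y.mean()),
      "pattern with largest final overlap:", int(np.argmax(overlaps[-1])))
```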
content - based image retrieval ( cbir ) aims at effectively indexing and mining large image databases such that given an unseen query image we can effectively retrieve images that are similar in content . with the deluge in medical imaging data , there is a need to develop cbir systems that are both fast and efficient .however , in practice , it is often infeasible to exhaustively compute similarity scores between the query image and each image within the database .adding to the challenge of scalability of cbir systems is the less understood semantic gap between the visual content of the image and the associated expert annotations .to address these challenges , hashing based cbir systems have come to a forefront where the system indexes each image with a compact similarity preserving binary code that could be potentially leveraged for very fast retrieval . towards this end, we propose an end - to - end one - stage deep residual hashing ( drh ) network to directly generate hash codes from input images .specifically , the drh model constitutes of a sub - network with multiple residual convolutional blocks for learning discriminative image representations followed by a fully - connected hashing layer to generate compact binary embeddings . through extensive validation, we demonstrate that drh learns discriminative hash codes in an end - to - end fashion and demonstrates high retrieval quality on standard chest x - ray image databases .the existing hashing methods proposed for efficient encoding and searching approaches have been proposed for large scale retrieval in machine learning and medical image computing can be categorised into : * ( 1 ) * shallow learning based hashing methods like locality sensitive hashing ( lsh ) ) , data - driven methods _e.g. _ iterative quantization ( itq ) , kernel sensitive hashing , circulent binary embedding ( cbe ) , metric hashing forests ( mhf ) ; * ( 2 ) * hashing using deep architectures ( only binarization without feature learning ) including restricted boltzmann machines in semantic hashing , autoencoders in supervised deep hashing _ etc . _ and * ( 3 ) * application - specific hashing methods including weighted hashing for histopathological image search , binary code tagging for chest x - ray images , forest based hashing for neuron images , to name a few .the ultimate objective of earning similarity preserving hashing functions is to generate embeddings in a latent hamming space such that the class - separability is preserved while embedding and local neighborhoods are well defined and semantically relevant .this can be visualized in 2d by generating the t - stochastic neighborhood embedding ( t - sne ) of unseen test data post learning like shown in fig .[ fig : nettsne ] . starting from fig .. [ fig : nettsne](a ) which is generated by a purely un - superivsed setting we aim at moving towards fig .. [ fig : nettsne](d ) which is closer to an ideal embedding .in fact , fig . [fig : nettsne ] represents the results of our proposed drh approach in comparison to other methods and baselines .* hand - crafted features * : conventional hashing methods including lsh , itq , ksh , mhf _ etc . 
_ perform encoding in two stages : firstly , generating a vector of hand - crafted descriptors and a second stage involving hashing learning to preserve the captured semantics in a latent hamming space .these two independent stages may lead to sub - optimal results as the image descriptors may not be tailored for hashing .moreover , hand - crafting requires significant domain knowledge and extensive parameter tuning which is particularly undesirable . * conventional deep learning * : using point - wise loss - functions like cross - entropy , hinge loss _ etc . _ for training ( / finetuning ) deep networks may not lead to feature representations that are sufficiently optimal for the task of retrieval as they do not consider crucial pairwise relationships between instances . * simultaneous feature learning and hashing * : recently , with the advent of deep learning for hashing we are able to perform effective end - to - end learning of binary representations directly from input images .these include deep hashing for compact binary code learning , deep hashing network for effective similarity retrieval , simultaneous feature learning and hashing _ etc . _to name a few .however , a crucial disadvantage of these deep learning for hashing methods is that with very deep versions of these networks accuracy gets saturated and often degrades .in addition to this , the continuous relaxation of hash codes to train deep networks to be able to learn with more viable continuous optimisation methods ( gradient - descent based methods ) could potentially lead to uncontrolled quantization and distance approximation errors during binarization . in an attempt to redress the above short - comings of the existing approaches ,we make the following contributions with our work : * 1 ) * we , for the first lime , design a novel deep hash function learning framework using deep residual networks for representation learning ; * 2 ) * we introduced a neighborhood component analysis - inspired loss suitably tailored for learning discriminative hash codes ; * 3 ) * we leverage multiple hashing related losses and regularizations to control the quantization error while binarization of hash codes and to encourage hash codes to be maximally independent of each other ; and * 4 ) * clinically , to the best of our knowledge , this is the first retrieval work on medical images ( specifically , chest x - ray images ) to discuss co - morbidities _i.e. _ co - occuring manifestations of multiple diseases .the paper also aims at encouraging further discussion on the following aspects of cbir through drh : 1 .* trainability * : how do we train very deep neural networks for hashing ? does introducing residual connections aid in this process ?* representability * : do networks tailored for the dataset at hand learn better representations over transfer learning ? 3 .* compactness * : do highly compact binary representations effectively compress the desired semantic content within an image ? 
do loss functions to control quantization error while binarzing aid in improved hash coding ?* semantic - similarity preservation * : do we learn hash codes such that neighbourhoods in the hamming space comprise of semantically similar instances ?* joint optimisation * : does end - to - end implicit learning of hash codes work better than a two stage learning process where the images are embedded to a latent space and then quantized explicitly _ via _ hashing ?an ideal hashing method should generate codes that are compact , similarity preserving and easy to compute representations ( typically , binary in nature ) , which can be leveraged for accurate search and fast retrieval .the desired similarity preserving aspect of the hashing function implies that _ semantically similar images are encoded with similar hash codes_. mathematically , hashing aims at learning a mapping , such that an input image can be encoded into a bit binary code . in hashing for image retrieval , we typically define a similarity matrix , where implies images and are similar and indicates they are dissimilar .similarity preserving hashing aims at learning an encoding function such that the similarity matrix is maximally - preserved in the binary hamming space .we start with a deep convolutional neural network architecture inspired in part by the seminal resnet architecture proposed for image classification by he _et al . _ . as shown in fig .[ fig : netarchdrh ] , the proposed architecture consists of the a convolutional layer ( conv 1 ) followed by a sequence of residual blocks ( conv 2 - 5 ) and terminating in a final fully connected hashing ( fch ) layer for hash code - generation .the unique advantages offered by the proposed resnet architecture for hashing over a typical convolutional neural network are as follows : * * training of very deep networks * : the representational power of deep networks should ideally increase with increased depth .it is empirically observed that in deep feed - forward nets beyond a certain depth , adding additional layers results in higher training and validation error ( despite using batch normalization ) .residual networks seamlessly solves this _ via _ adding short cut connections that are summed with the output of the convolutional blocks . * * ease of optimization * : a major issue to training deep architectures is the problem of vanishing gradients during training ( this is in part mitigated with the introduction of rectified linear units ( relu ) , input batch normalisation and layer normalisation ) .residual connections offer additional support _ via _ a no - resistance path for the flow of gradients along the shortcut connections to reach the shallow learning layers . in order to learn feature embeddings tailored for retrieval and specifically for the scenario at hand where the pairwise similarity matrix should be preserved , we propose our supervised retrieval loss drawing inspiration from the neighbourhood component analysis . to encourage the learnt embedding to be binary in nature , we squash the output of the residual layers to be within $ ] by passing it through a hyperbolic tangent ( tanh ) activation function .the final binary hash codes are generated by quantizing the output of the tanh activation function ( say , ) as follows : . 
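a minimal sketch of this binarization step , under assumed shapes and variable names , is given below : the real - valued output of the hashing layer is squashed with a tanh activation and quantized with the sign function to obtain the final codes , and retrieval then uses the hamming distance between codes .

```python
import numpy as np

def hash_codes(embedding):
    """squash a real-valued embedding to (-1, 1) with tanh, then binarize with sign."""
    u = np.tanh(embedding)           # relaxed, continuous codes used during training
    b = np.where(u >= 0, 1, -1)      # final binary hash codes in {-1, +1}
    return u, b

def hamming_distance(b1, b2):
    """number of bit positions in which two {-1, +1} codes disagree."""
    return int(np.sum(b1 != b2))

# toy usage: two 64-bit codes from random embeddings
rng = np.random.default_rng(0)
u1, b1 = hash_codes(rng.normal(size=64))
u2, b2 = hash_codes(rng.normal(size=64))
print(hamming_distance(b1, b2))
```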
given instances and the corresponding similarity matrix is defined as , the proposed supervised retrieval loss is formulated as : where is the probability that any two instances ( and ) can be potential neighbours .inspired by knn classification , where the decision of an unseen test sample is determined by the semantic context of its local neighbourhood in the embedding space , we define as a softmax function of the hamming distance ( indicated as ) between the hash codes of two instances and is derived as : as gradient based optimisation of in a binary embedding space is infeasible due to its non - differentiable nature , we use a continuous domain relaxation and substitute non - quantized embeddings in place of hash code and euclidean distance as as surrogate of hamming distance between binary codes .this is derived as : .it must be noted that such an continuous relaxation could potentially result in uncontrollable quantization error and large approximation errors in distance estimation . with continuous relaxation , eq .is now differentiable and continuous thus suited for backpropagation of gradients during training .generation of high quality hash codes requires us to control this quantization error and bridge the gap between the hamming distance and its continuous surrogate . in this paper , we jointly optimise for and improve hash code generation by imposing additional loss functions as follows : * quantization loss * : in the seminal work on iterative quantization ( itq ) for hashing , gong and lazebnik introduced the notion of quantization error as .optimising for required a computation intensive alternating optimisation procedure and is not compatible with back propagation which is used to train deep neural nets ( due to non - differentiable sgn function within the formulation ) . towards this end , we use a modified point - wise quantization loss function proposed by zhu _ et al ._ sans the sgn function as .they establish that is an upper bound over , therefore can be deemed as a reasonable loss function to control quantization error . for ease of back - propagation , we propose to use a differentiable smooth surrogate to norm and derived the proposed quantization loss function as: . with the incorporation of the quantization loss , we hypothesise that the final binarization step would incur significantly less quantization error and the loss of retrieval quality (also empirically validated in section [ sec : results ] ) .* bit balance loss * : in addition to , we introduce an additional bit balance loss to maximise the entropy of the learnt hash codes and in effect create balanced hash codes . here, is derived as : .this loss aims at encouraging maximal information storage within each hash bit .* regularisation * : inspired by itq , we also introduce a relaxed orthogonality regularisation constraint on the convolutional weights ( say , ) connecting the output of the final residual block of the network to the hashing block .this weakly enforces that the generated codes are not correlated and each of the hash bits are independent .here , is formulated as : . in additon to , we also impose weight decay regularization to control the scale of learnt weights and biases . 
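the following sketch ( python / numpy ) illustrates the structure of this composite objective . the functional forms are assumptions of the illustration : a softmax over negative squared euclidean distances for the nca - style retrieval term , a log - cosh smooth surrogate for the quantization penalty , a squared - mean bit - balance term and a frobenius - norm orthogonality penalty on the hash - layer weights ; the exact surrogates and constants used in the paper may differ .

```python
import numpy as np

def retrieval_loss(U, S):
    """nca-style term: for each anchor i, total softmax probability (over negative
    squared euclidean distances between relaxed codes) assigned to its neighbours,
    i.e. the entries with S[i, j] > 0; the loss is the negative mean of this mass."""
    n = U.shape[0]
    d2 = np.sum((U[:, None, :] - U[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-matches
    P = np.exp(-d2)
    P /= P.sum(axis=1, keepdims=True)
    p_i = np.array([P[i, S[i] > 0].sum() for i in range(n)])
    return -np.mean(p_i)

def quantization_loss(U):
    """smooth surrogate pushing each relaxed code entry toward +/-1."""
    return np.mean(np.log(np.cosh(np.abs(U) - 1.0)))

def bit_balance_loss(U):
    """encourage each bit to be on/off equally often (maximal entropy per bit)."""
    return np.mean(U.mean(axis=0) ** 2)

def orthogonality_penalty(W):
    """relaxed decorrelation of hash bits via the hash-layer weights."""
    k = W.shape[1]
    return np.linalg.norm(W.T @ W - np.eye(k)) ** 2

# toy usage with random relaxed codes and a label-derived similarity matrix
rng = np.random.default_rng(0)
U = np.tanh(rng.normal(size=(32, 16)))            # 32 samples, 16-bit relaxed codes
labels = rng.integers(0, 3, size=32)
S = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(S, 0)
W = rng.normal(size=(64, 16))
total = (retrieval_loss(U, S) + 0.05 * quantization_loss(U)
         + 0.025 * bit_balance_loss(U) + 0.01 * orthogonality_penalty(W))
print(total)
```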
in this section , we detail on the training procedure for the proposed drh network with respect to the supervised retrieval and hashing related loss functions .we learn a single - stage end - to - end deep network to generate hash codes directly given an input image .we formulate the optimisation problem to learn the parameters of our network ( say , ) : where , , and are four parameters to balance the effect of different contributing terms . to solve this optimisation problem, we employ stochastic gradient descent to learn optimal network parameters .differentiating with respect to and using chain rule , we derive : the second term is computed through gradient back - propagation . the first term ( )is the gradient of the composite loss function with respect to the output hash codes of the drh network .we differentiate the continuous relaxation of the supervised retrieval loss function with respect to the hash code of a single example ( ) as follows : = 2 ( _ l : s_li > 0 p_lid_li - _ l i ( _ q : s_lq > 0 p_lq ) p_lid_li ) - 2 ( _ j : s_ij > 0 p_ijd_ij - _ j : s_ij > 0 p_ij ( _ z i p_izd_iz ) ) [ eq : derjs ] where .the derivatives of hashing related loss functions ( and ) are derived as : and the regularisation function acts on the convolutional weights corresponding to the hash layer ( ) and its derivative with respect to is derived as follows : .having computed the gradients of the individual components of the loss function with respect to the parameters of drh , we apply gradient - based learning rule to update .we use mini - batch stochastic gradient descent ( sgd ) with momentum .sgd incurs limited memory requirements and reduces the variance of parameter updates .the addition of the momentum term leads to stable convergence .the update rule for the weights of the hash layer is derived as : the convolutional weights and biases of the other layers are updated similarly .it must be noted that the learning rate in eq [ eq : updatewh ] is an important hyper - parameter .for faster learning , we initialise it the largest learning rate that stably decreases the objective function ( typically , at or ) . upon convergence at a particular setting of ,we scale the learning rate multiplicatively by a factor of and resume training.this is repeated until convergence or reaching the maximum number of epochs .* database * : we conducted empirical evaluations on the publicly available indiana university chest x - rays ( cxr ) dataset archived from their hospital s picture archival systems .the fully - anonymized dataset is publicly available through the openi image collection system . for this paper ,we use a subset of 2,599 frontal view cxr images that have matched radiology reports available for different patients . following the label generation strategy published in for this dataset , we extracted nine most frequently occurring unique patterns of medical subject headings ( mesh ) terms related to cardiopulmonary diseases from these expert - annotated radiology report .these include normal , opacity , calcified granuloma , calcinosis , cardiomegaly , granulomatous disease , lung hyperdistention , lung hypoinflation and nodule .the dataset was divided into non - overlapping subsets for training ( 80% ) and testing ( 20% ) with patient - level splits .the semantic similarity matrix is contructed using the mesh terms _ i.e. 
_ a pair of images are considered similar if they share atleast one mesh term .* comparative methods and baselines * : we evaluate and compare the retrieval performance of the proposed drh network with nine state - of - the art methods including five unsupervised shallow - learning methods : lsh , itq , cbe ; two supervised shallow - learning methods : ksh and mhf and two deep learning based methods : alexnet - ksh ( a - ksh ) and vggf - ksh ( v - ksh ) . to justify the proposed formulation ,we include simplified four variants of the proposed drh network as baselines : dph ( deep plain net hashing ) by removing the residual connections , drhnq ( deep residual hashing without quantization ) by removing the hashing related losses and generating binary codes only through tanh activation , drn - ksh by training a deep residual network with only the supervised retrieval loss and quantizing through ksh post training and drh - nb which is a variant of drh where continuous embeddings are used sans quantization , which may act as an upper bound on performance .we used the standard metrics for evaluating retrieval quality as proposed by lai _et al . _ : mean average precision ( map ) and precision - recall curves varying the code size(16 , 32 , 48 and 64 bits ) . for fair comparison , all the methods were trained and tested on identical data folds . the retrieval performance of methods involving residual learning and baselines is evaluated for two variants varying the number of layers : and .for the shallow learning methods , we represent each image as a 512 dimensional gist vector . for the drh and associated baselines ,the input image is resized to and normalized to a dynamic range of 0 - 1 using the pre - processing steps discussed in . for a - ksh and v - ksh ,the image normalization routines were identical to that reported in the original works .we implement all our deep learning networks ( including drh ) on the open - source matconvnet framework .the hyper - parameters , and were set at 0.05 , 0.025 and 0.01 empirically .the momentum term was set at 0.9 , the initial learning rate at and batchsize at 128 .the training data was augmented on - the - fly extensively through jittering , rotation and intensity augmentation by matching histograms between images sharing similar co - morbidities .all the comparative deep learning methods were also trained with similar augmentation .furthermore , for a - ksh and v - ksh variants , we pre - initialized the network parameters from the pre - trained models by removing the final probability layer .these network learnt a -dimensional embedding by fine - tuning it with cross - entropy loss .the hashing was performed explicitly through ksh upon convergence of the network . * results * : [ sec : results ] the results of the map of the hamming ranking for varyingcode sizes of all the comparative methods are listed in table [ wrap - tab:1 ] .we report the precision - recall curves for the comparative methods at a code size of 64 bits in fig .[ fig : prcomp ] . 
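for reference , the retrieval metrics used here can be computed as in the sketch below , which ranks the database by hamming distance to each query , averages the precision at the ranks of the relevant items ( map ) and measures precision within a hamming radius of 2 ; the function and variable names are illustrative only .

```python
import numpy as np

def mean_average_precision(query_codes, db_codes, relevance):
    """rank database items by hamming distance to each query and average the
    precision at every rank where a relevant item occurs.
    relevance[q, d] = 1 if database item d is relevant to query q."""
    aps = []
    for q in range(query_codes.shape[0]):
        dist = np.sum(query_codes[q] != db_codes, axis=1)   # hamming distances
        order = np.argsort(dist, kind="stable")
        rel = relevance[q, order]
        if rel.sum() == 0:
            continue
        hits = np.cumsum(rel)
        precision_at_hits = hits[rel > 0] / (np.flatnonzero(rel) + 1)
        aps.append(precision_at_hits.mean())
    return float(np.mean(aps))

def precision_at_hamming_radius(query_codes, db_codes, relevance, radius=2):
    """fraction of items retrieved within the given hamming radius that are relevant."""
    precs = []
    for q in range(query_codes.shape[0]):
        dist = np.sum(query_codes[q] != db_codes, axis=1)
        retrieved = dist <= radius
        if retrieved.any():
            precs.append(relevance[q, retrieved].mean())
    return float(np.mean(precs)) if precs else 0.0

# toy usage with random 16-bit codes
rng = np.random.default_rng(0)
qc = rng.integers(0, 2, size=(5, 16))
dc = rng.integers(0, 2, size=(50, 16))
rel = rng.integers(0, 2, size=(5, 50))
print(mean_average_precision(qc, dc, rel), precision_at_hamming_radius(qc, dc, rel))
```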
to justify the proposed formulation for drh , several variants of drh ( namely , drn - ksh , dph , drh - nq and drh - nb ) were investigated and their retrieval results are tabulated in table [ wrap - tab:2 ] . in addition to map , we also report the retrieval precision within a hamming radius of 2 ( p@h2 ) . the associated precision - recall curves are shown in fig . [ fig : prbl ] . within this section , we present our discussion answering the questions posed in section [ sec : intro ] , with respect to the results and observations reported in section [ sec : results ] . * trainability * : the introduction of residual connections offers short - cut connections which act as zero - resistance paths for gradient flow , thus effectively mitigating the vanishing of gradients as network depth increases . this is strongly substantiated by comparing the performance of drh - 34 relative to drh - 18 _ vs. _ the plain net variants of the same depths , dph - 34 relative to dph - 18 . there is a strong improvement in map with increasing depth for drh , of about 9.3% . on the other hand , we observe a degradation of 2.2% in map performance on increasing the layer depth in dph . the performance of drh - 18 is fractionally better than that of dph - 18 , indicating that drh generalizes better and that the degradation problem is well addressed , since we obtain significant map gains from increased depth . with the introduction of batch normalisation and residual connections , we ensure that the signals during the forward pass have non - zero variances and that the back - propagated gradients exhibit healthy norms . therefore , neither forward nor backward signals vanish within the network . this is substantiated by the differences in map observed in table [ wrap - tab:1 ] between methods using bn ( drh , dph and v - ksh ) and a - ksh , which does not use bn . * representability * : ideally , the latent embeddings in the hamming space should be such that similar samples are mapped closer together while dissimilar samples are simultaneously mapped further apart . we plot the t - distributed stochastic neighbourhood embeddings ( t - sne ) of the hash codes for four comparative methods ( gist - itq , vggf - ksh , dph - 18 and drh - 34 ) in fig . [ fig : nettsne ] to visually assess the quality of the generated hash codes . visually , we observe that hand - crafted gist features with the unsupervised hashing method itq fail to sufficiently induce semantic separability .
in comparison , though vggf - ksh improves significantly owing to network fine - tuning , better embedding results from drh - 34 ( dph-18 is highly comparable to drh-34 ) .additionally , the significant differences in map reported in table [ wrap - tab:1 ] between these methods substantiates our hypothesis that in scenarios of limited training data it is better to train smaller models from scratch over finetuning to avoid overfitting ( drh - 34 has 0.183 m in comparison to vggf with 138 m parameters ) .also the significant domain shift between natural images ( imagenet - vggf ) and cxr poses a significant challenge for generalizability of networks finetuned from pre - trained nets .l5.6 cm * compactness * : hashing aims at generating compact representations preserving the semantic relevance to the maximal extent .varying the code sizes , we observe from table [ wrap - tab:1 ] that the map performance of majority of the supervised hashing methods improves significantly .in particular for drh - 34 , we observe that the improvement in the performance from 48 bits to 64 bits is only fractional .the performance of drh - 34 at 32 bits is highly comparable to drh - 18 at 64 bits .this testifies that with increasing layer depth drh learns more compact binary embeddings such that shorter codes can already result in good retrieval quality . * semantic similarity preservation * : visually assessing the t - sne representation of gist - itq ( fig .[ fig : nettsne](a ) ) we can observe that it fails to sufficiently represent the underlying semantic relevance within the cxr images in the latent hamming space , which retestifies the concerns over hand - crafted features that were raised in section [ sec : motivation ] .vggf - ksh ( fig .[ fig : nettsne](b ) ) improves over gist - itq substantially , however it fails to induce sufficient class - separability . despite ksh considering pair - wise relationships while learning to hash ,the feature representation generated by fine - tuned vgg - f is limited in representability as the cross - entropy loss is evaluated point - wise . finally , the tsne embedding of drh - 34 shown in fig .[ fig : nettsne ] visually reaffirms that semantic relevance remains preserved upon embedding and the method generates clusters well separated within the hamming space .the high degree of variance associated with the tsne embedding of normal class ( red in color ) is conformal with the high population variability expected within that class .[ fig : netresults ] demonstrates the first five retrieval results sorted according to their hamming rank for four randomly selected cxr images from the testing set . 
in particular , for case ( d ) we observe that the top neighbours ( d 1 - 5 ) share at least one co - occurring pathology . for cases ( a ) , ( b ) and ( c ) , all of the top five retrieved neighbours share the same class . * joint optimisation * : the main contribution of this work hinges on the hypothesis that end - to - end learning of hash codes is better than a two - stage learning process . comparative validations against the two - stage deep learning methods ( a - ksh , v - ksh and the baseline variant drn - ksh ) strongly support this hypothesis . in particular , we observe over 14.2% improvement in map comparing drn - ksh ( 34 - l ) to drh - 34 . this difference in performance may be attributed to a crucial disadvantage of drn - ksh : the generated feature representation is not optimally suited to binarization . we also observe that drh - 18 and drh - 34 incur very small average map decreases of 1.8% and 0.7% when binarizing hash codes , relative to the non - binarized continuous embeddings of drh - b - 18 and drh - b - 34 respectively . in contrast , drh - nq suffers from very large map decreases of 6.6% and 10.8% in comparison to drh - b. these observations validate the need for the proposed quantization loss , as it leads to nearly lossless binarization . in this paper , we have presented a novel deep learning based hashing approach leveraging residual learning , termed deep residual hashing ( drh ) . drh integrates representation learning and hash coding into a joint optimisation framework with dedicated losses for improving retrieval performance and hashing related losses to control the quantization error and improve the hash code quality . our approach demonstrated very promising results on a challenging chest x - ray dataset with co - occurring morbidities . we believe this pilot study on retrieval of cxr images with cardiopulmonary diseases gives rise to the following open questions for further discussion : how deep is deep enough ? does extending drh to include an additional anatomical view ( like the dorsal view for cxr ) improve retrieval performance ? does drh generalize to unseen disease manifestations ? and can we visualize what drh learns ? in conclusion , we believe that our paper strongly supports our initial premise of using drh for retrieval , but it also opens up questions for future discussion . mesbah s , conjeti s , kumaraswamy a , rautenberg p , navab n , katouzian a. hashing forests for morphological search and retrieval in neuroscientific image databases . in miccai 2015 , pp . 135 - 143 , springer international publishing . demner - fushman d , kohli md , rosenman mb , shooshan se , rodriguez l , antani s , thoma gr , mcdonald cj . preparing a collection of radiology examinations for distribution and retrieval . jamia , 2015 jul 1:ocv080 .
hashing aims at generating highly compact similarity preserving code words which are well suited for large - scale image retrieval tasks . most existing hashing methods first encode the images as a vector of hand - crafted features followed by a separate binarization step to generate hash codes . this two - stage process may produce sub - optimal encoding . in this paper , for the first time , we propose a deep architecture for supervised hashing through residual learning , termed deep residual hashing ( drh ) , for an end - to - end simultaneous representation learning and hash coding . the drh model constitutes four key elements : ( 1 ) a sub - network with multiple stacked residual blocks ; ( 2 ) hashing layer for binarization ; ( 3 ) supervised retrieval loss function based on neighbourhood component analysis for similarity preserving embedding ; and ( 4 ) hashing related losses and regularisation to control the quantization error and improve the quality of hash coding . we present results of extensive experiments on a large public chest x - ray image database with co - morbidities and discuss the outcome showing substantial improvements over the latest state - of - the art methods .
force - free magnetic fields , , satisfy , or equivalently calculation of such force - free fields is of importance in many astrophysical settings , for example accretion disks around various objects ( e.g. * ? ? ?* ; * ? ? ?* ) , neutron stars , pulsars , magnetic clouds , and solar and stellar coronae ( e.g. * ? ? ? * ) .a particular application in solar physics is the controversial ` topological dissipation ' model proposed by .the assertion of this model is that if an equilibrium magnetic field is perturbed by arbitrary motions at a line - tied boundary , then the subsequent field can not relax to a smooth force - free equilibrium .rather , the equilibrium must contain tangential discontinuities corresponding to current sheets .doubt has been cast upon the model however , as a number of authors have demonstrated the existence of smooth solutions in the scenario posed .the question as to whether current sheets form spontaneously in the coronal magnetic field is key to understanding the so - called coronal heating problem .this is just one example which demonstrates that determining both the structure and stability of force - free magnetic fields is of fundamental importance .there are different approaches that one may take when searching for force - free magnetic fields .one method , often used when modelling the solar corona , is to solve a boundary value problem ( , and see for a comparison of numerical schemes ) .the force - free field is reconstructed from boundary data , provided for example by a vector magnetogram .an alternative approach is to begin with an initial magnetic field that is not force - free and to perform a relaxation procedure .this is the natural approach if one wants to investigate the properties of particular magnetic topologies .as long as the relaxation procedure can be guaranteed to be ideal , then the topology will be conserved during the relaxation .one powerful computational approach for investigating the properties of force - free fields is to employ an ideal lagrangian relaxation scheme .such schemes exploit the property that under ideal mhd the vector evolves according to the equation where is the material derivative , the plasma density and the plasma velocity .this is of exactly the same form as the evolution equation of a line element in a flow ( see , e.g. ) , and thus a lagrangian description facilitates a relaxation that is , by construction , ideal .these schemes can be used to investigate the structure and ( ideal mhd ) stability of force - free fields .the latter is guaranteed by the iterative convergence of the scheme provided that the resolution is sufficient .the primary variables that the numerical scheme dynamically updates are the locations of the mesh points , with the quantities and being calculated via matrix products involving the initial magnetic field and derivatives of the mapping that describes the mesh deformation .an artificial frictional term is included in the equation of motion ( see also ) which guarantees a monotonic decrease of the energy .two implementations of this method are described in and .the method has been used extensively to investigate the stability and equilibrium properties of various different magnetic configurations , such as the kink instability of magnetic flux tubes , line - tied collapse of 2d and 3d magnetic null points and the parker problem . 
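the kinematic core of such lagrangian schemes - obtaining the magnetic field from the initial field and the derivatives of the mesh mapping - can be illustrated with a short sketch . the toy python / numpy code below evaluates the deformation gradient of a prescribed mapping by centred finite differences and transports a uniform initial field with the standard ideal - mhd ( cauchy ) rule b = f·b0 / det f ; the particular twist profile is an arbitrary stand - in , and the frictional , implicit iteration of the actual codes is not reproduced .

```python
import numpy as np

def mapped_field(X, Y, Z, mapping, B0):
    """push a uniform initial field B0 through a prescribed mapping x(X) using the
    ideal-mhd (cauchy) transport rule B = F . B0 / det(F), with the deformation
    gradient F = dx/dX estimated by centred differences on the reference grid.
    the result is the field at the mapped (lagrangian) node positions."""
    x, y, z = mapping(X, Y, Z)
    dX = X[1, 0, 0] - X[0, 0, 0]
    dY = Y[0, 1, 0] - Y[0, 0, 0]
    dZ = Z[0, 0, 1] - Z[0, 0, 0]
    F = np.empty(X.shape + (3, 3))
    for i, comp in enumerate((x, y, z)):
        gX, gY, gZ = np.gradient(comp, dX, dY, dZ)
        F[..., i, 0], F[..., i, 1], F[..., i, 2] = gX, gY, gZ
    detF = np.linalg.det(F)
    return np.einsum('...ij,j->...i', F, B0) / detF[..., None]

def twist(X, Y, Z, phi=1.0, a=1.0):
    """toy rotational deformation about the z-axis, localized in radius and height."""
    r = np.hypot(X, Y)
    ang = phi * np.exp(-(r / a) ** 2) * np.exp(-(Z / a) ** 2)
    return X * np.cos(ang) - Y * np.sin(ang), X * np.sin(ang) + Y * np.cos(ang), Z

# toy usage: twisting a uniform vertical field generates in-plane components
n = 21
g = np.linspace(-2, 2, n)
X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
B = mapped_field(X, Y, Z, twist, np.array([0.0, 0.0, 1.0]))
print(B.shape, float(np.abs(B[..., 0]).max()))   # nonzero Bx created by the twist
```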
in the following sectionwe describe a test problem that illustrates one major difficulty in the computation of force - free fields , in the context of the lagrangian relaxation scheme outlined above . in section [ numsec ]we present two possible extensions of the numerical scheme . in section [ ressec ]we describe our results , and in section [ concsec ] we present our conclusions .in a numerical relaxation experiment using braided initial fields we came across an inconsistency of the resulting numerical force - free state , which is best explained with the help of the following example . consider a magnetic field obtained from the homogenous field by a simple twisting deformation as shown in fig .[ twoblobfig](a ) .obviously an ideal relaxation towards a force - free state must end again in a homogenous state ( ) . during this processthe lagrangian relaxation leads to a deformation of the initial computational mesh which exactly cancels the initial deformation applied to the homogenous field .this is a well defined setup in which we know exactly the initial and final states .we now employ the implicit ( adi ) relaxation scheme detailed by to relax our twisted field to a force - free equilibrium .the magnetic field is line - tied on all boundaries ( on and boundaries ) .the force as calculated by the numerical scheme decreases monotonically to an arbitrarily small value ( e.g. ) , giving the appearance that the scheme converges ( in an iterative sense ) to a force - free equilibrium ( to any desired accuracy , down to machine precision ) .however , when plotting , the force - free proportionallity factor , along a field line it shows variations which are by orders of magnitude higher than would be expected from .it is this inconsistency that we investigate in what follows .as we will discuss the convergence of the numerical scheme in what follows , it is worth emphasising here the distinction between iterative convergence ( at fixed resolution ) and real convergence , i.e. convergence towards a ` correct ' solution as the resolution becomes sufficiently large . in order to investigate the source of the inconsistency described in the previous section , we consider the test problem outlined there . specifically , we begin with an initially uniform magnetic field ( ) , and superimpose two regions of toroidal field , centred on the -axis at , with exactly the same functional form , but of opposite signs : with , , and , , , .we refer to this field in the following as .the field ( see fig .[ twoblobfig](a ) ) is constructed such that the two regions of twisted field , which are of opposite sign , should exactly cancel one another under an ideal relaxation , approaching the uniform field ( with ) as the equilibrium .note that is the maximum turning angle of field lines around the -axis .( a ) , given by eq . ( [ t2 ] ) .( b ) mesh in the plane for the test problem with artificially imposed deformation , with and resolution .,title="fig : " ] ( b ) , given by eq .( [ t2 ] ) .( b ) mesh in the plane for the test problem with artificially imposed deformation , with and resolution .,title="fig : " ] one of the great advantages of an ideal lagrangian relaxation is that it is possible to extract the paths of the magnetic field lines of the final state if one knows them in the initial state , simply by interpolating over the mesh displacement . 
calculating the field lines in this way , no error is accumulated by integrating along .given knowledge of the field line paths , one can test the quality of the force - free approximation by plotting along field lines . for a force - free field and be constant along field lines since taking the divergence of the above yields we begin by defining the variable , motivated by eq .( [ fff ] ) , as we find that for the magnetic field calculated by the relaxation scheme , the value of changes dramatically along field lines .of course the relaxation gives a magnetic field for which is not identically zero .so for a given value of , what is the maximum possible variation in along a field line ?consider say , where is representative of the error in calculating . using eq .( [ alphastar ] ) to replace gives or where is a parameter along a magnetic field line with units of length .now suppose that within our domain .this implies that , so that where is the length scale of variations perpendicular to the magnetic field . then from eq .( [ eq8 ] ) returning to our relaxation results , we have for example , with , .however , we find that .the discrepancy between this figure and the value of must come from the final term in eq .( [ fffquality ] ) .this has been checked by interpolating the data onto a rectangular mesh and approximating using standard finite differences .we find , and it therefore appears that the residual currents parallel to are not relaxed because . as demonstrated below , this error does however decrease as the resolution is increased ( see tables [ errortab][tab3 ] ) .it turns out that the appearance of the errors is related to the way in which is calculated within the scheme , via a combination of 1st and 2nd derivatives of the deformation matrix .these derivatives are calculated via finite differences in the numerical scheme , and it is here that these discretisation errors arise .this is demonstrated below . to ascertain the source of the errors , we take our initial state and _ instead of performing the relaxation procedure _ , we artificially apply a deformation to the mesh which we can write down as a closed form expression , and moreover for which we can obtain the derivatives of the mesh displacement , and thus the resultant and fields , as closed form expressions .motivated by the results of the relaxation , we impose a similar rotational distortion of the mesh which acts to ` untwist ' the field , via the transformation where constant .we now apply this transformation to an initially rectangular mesh on which is given by , and compare the numerical and exact values for each entry in the mesh deformation jacobian , and each component of and .results are shown for three different values of the parameter in table [ errortab ] ..errors in and for deformations with ( in eq .( [ analytangle ] ) ) using 2nd - order finite differences . is the mesh resolution . in each casethe upper number shows the maximum relative percentage error in the domain over all components of , i.e. , where is the exact value .the lower number is . [cols="^,^,^,^ " , ] we now leave the artificially imposed deformation , and relax using both the 2nd- and 4th - order schemes . 
the relaxation is allowed to run until as calculated by the relevant numerical scheme is reduced to .we compare this value with ( defined by eq .( [ epsilonstar ] ) ) , and also the maximum value of the lorentz force obtained by calculating via the stokes - based routine , denoted .the results for two levels of initial twist ( ) are displayed in tables [ tab2 ] and [ tab3 ] .c || c|c |@| c | c & & + & & & & + & & & & + 21 & 0.11 & 0.053 & 0.048 & 0.063 + 41 & 0.054 & 0.046 & 0.0067 & 0.018 + 61 & 0.032 & 0.035 & 0.0042 & 0.0050 + 81 & 0.021 & 0.027 & 0.0015 & 0.0019 c || c|c|@| c | c & & + & & & & + & & & & + 21 & 0.28 & 0.18 & 0.17 & 0.21 + 41 & 0.16 & 0.17 & 0.071 & 0.13 + 61 & 0.11 & 0.13 & 0.026 & 0.062 + 81 & 0.074 & 0.11 & 0.021 & 0.023 a number of points are immediately clear .first , in no simulation do we approach the apparent value of .however , the use of fourth- rather than second - order finite differences improves the quality of the relaxed field by an order of magnitude in when . increasing the deformation in the final state ( by increasing in the initial state ) has a strong adverse effect on the relaxation process .this is found to be because spurious ( unphysical ) current concentrations arise where none should reasonably be expected .examining the corresponding mesh , we find that these ` false ' current regions appear where the grid is most distorted see fig .[ jiso ] , and compare with fig .[ twoblobfig](b ) .\(a ) at 2/3 of maximum , for ( a ) an intermediate stage in the relaxation ( ) and ( b ) the final state ( ) .4th - order scheme , , .inset : view in -plane (i.e. from ).,title="fig : " ] ( b ) at 2/3 of maximum , for ( a ) an intermediate stage in the relaxation ( ) and ( b ) the final state ( ) .4th - order scheme , , .inset : view in -plane ( i.e. from ).,title="fig : " ] the ` current shards ' shown in fig .[ jiso](b ) actually intensify as the relaxation proceeds .we find that this is possible since ( approximated by interpolating onto a uniform mesh ) is not close to zero . as a resultthere is no ` return current ' associated with these localised current regions , which might be expected to generate a lorentz force that would act against the further intensification of the current shards ( if they have no physical basis ) . in previous studies using such codes ( e.g. * ? ? ?* ; * ? ? ?* ) , intensification of as decreased in time was associated with current singularities , so at first sight it appears that these current shards could naively be interpreted as ` current sheets ' , which would of course be unphysical .note , however , that the most important signature of current singularity in previous studies was a ( power - law ) proportionality of the peak current with mesh resolution ( for given ) .we have found that in fact the current shards become less intense as is increased , so there is a clear distinction between the two phenomena . 
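the diagnostic underlying this discussion - the constancy of the force - free parameter along the field - can be sketched on a uniform cartesian grid as follows . this is an eulerian finite - difference proxy ( alpha = j·b / |b|^2 together with the field - aligned gradient of alpha ) rather than the lagrangian field - line measure of eq . ( [ epsilonstar ] ) itself , and the test field is a simple linear force - free example for which the variation should vanish .

```python
import numpy as np

def curl(Bx, By, Bz, dx, dy, dz):
    """central-difference curl of B on a uniform cartesian grid (indexing 'ij')."""
    dBz_dy = np.gradient(Bz, dy, axis=1); dBy_dz = np.gradient(By, dz, axis=2)
    dBx_dz = np.gradient(Bx, dz, axis=2); dBz_dx = np.gradient(Bz, dx, axis=0)
    dBy_dx = np.gradient(By, dx, axis=0); dBx_dy = np.gradient(Bx, dy, axis=1)
    return dBz_dy - dBy_dz, dBx_dz - dBz_dx, dBy_dx - dBx_dy

def alpha_constancy(Bx, By, Bz, dx, dy, dz):
    """force-free proxy: alpha = J.B/|B|^2 and the magnitude of its gradient along
    the field direction, |b_hat . grad(alpha)|, which should vanish if J x B = 0."""
    Jx, Jy, Jz = curl(Bx, By, Bz, dx, dy, dz)
    B2 = Bx**2 + By**2 + Bz**2
    alpha = (Jx*Bx + Jy*By + Jz*Bz) / B2
    gx, gy, gz = np.gradient(alpha, dx, dy, dz)
    along = np.abs(Bx*gx + By*gy + Bz*gz) / np.sqrt(B2)
    return alpha, along

# toy check on a linear force-free field B = (cos kz, sin kz, 0), for which alpha = -k
n, L, k = 32, 2*np.pi, 1.0
g = np.linspace(0, L, n, endpoint=False)
X, Y, Z = np.meshgrid(g, g, g, indexing='ij')
Bx, By, Bz = np.cos(k*Z), np.sin(k*Z), np.zeros_like(Z)
h = g[1] - g[0]
alpha, along = alpha_constancy(Bx, By, Bz, h, h, h)
print(float(alpha.mean()), float(np.abs(along).max()))   # ~ -k, ~ 0
```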
finally , consider the values of we find for the relaxed fields ( tables [ tab2][tab3 ] ) .they are clearly of the same order as ( note that based on from the numerical scheme and based on are of the same order ) , and thus it seems that an implementation involving the stokes - based routine has the capacity to yield a magnetic field that is much closer to being force - free ( with lower ) .this is illustrated in fig .[ alpha_t ] .s through the relaxation with second order finite differences with and .the dashed line is the observed value of , while the solid line is the maximum allowable defined via eq .( [ alphamax ] ) with given by from the 2nd - order numerical scheme .the dot - dashed line is the maximum with given by .inset : close - up of behaviour at early time . ]we consider the observed value of based on eq .( [ alphastar ] ) ( dashed line ) , compared with a maximum allowable value for . since is anti - symmetric about , the maximum value allowed , based on eq .( [ fffquality ] ) , is and we take and as before , and , the length of the domain .then we obtain the maximum possible by taking to be the maximum value during the relaxation of ( solid line ) or ( dot - dashed ) .we see that very early in the relaxation the actual value of becomes greater than the maximum allowed by from the numerical scheme ( 2nd - order ) .moreover , the discrepancy grow steadily .however , always remains less than the maximum allowed by the stokes - based method , implying that this may be a more sound method to calculate the current and resulting lorentz force .force - free magnetic fields are important in many astrophysical applications . determining the properties of such force - free fields especially smoothness and stability properties is key to understanding energy release processes that heat the plasma and lead to dynamic events such as flares in the solar corona .we have investigated the properties of different relaxation procedures for determining force - free fields based on a lagrangian mesh approach .these techniques have previously been shown to have many powerful and advantageous properties .previous understanding was that such schemes would iteratively converge ( i.e. decreasing monotonically to a given level ) up to a certain degree of mesh deformation . beyond this level of mesh deformationthe scheme no longer converges ( oscillates or grows ) , and it is this phenomenon that was thought to limit the method .however , we have shown above that even when the numerical scheme iteratively converges , the accuracy of the force - free approximation can become seriously compromised for even ` moderate ' mesh deformations .this error is an accumulation of numerical discretisation errors resulting from the calculation of via combinations of 1st and 2nd derivatives of the mesh deformation jacobian which are calculated using finite differences .the result is that neither nor subsequently are well satisfied .it was demonstrated that a result of the breaking of the solenoidal condition for can be the development of spurious ( unphysical ) current structures .however , we note that these rogue currents do diminish with resolution ( ) , so when using these schemes this property should always be checked where possible .we expect that , as a result , if it were possible to systematically increase indefinitely the rogue currents would eventually vanish . in other words ,the real problem is that the iterative convergence ( i.e. 
monotonic decrease of with ) is not compromised by the rogue currents but the real convergence to a correct solution is severely impaired .a force - free field is defined by .one key result of this equation is that must be constant along magnetic field lines .we therefore argued that a correct diagnostic to measure the quality of a force - free apprioximation is the constancy of the parameter along field lines .an appropriate normalisation is given in eq .( [ epsilonstar ] ) .the results of our investigations suggest that for lagrangian schemes the measure does not provide a good indicator of true convergence-a better measure in the constancy of along .we note that other authors have proposed measures other than the maximum of for testing a force - free approximation for example introduced the mean current - weighted angle between and " .however , calculation of this measures still relies upon the value of in the numerical scheme , and in the present scenario we have shown that the errors arise not because and are not parallel , but because . since errors in the ( lagrangian ) numerical scheme investigated here arise as the mesh becomes increasingly distorted , a natural choice is to begin with a non - equilibrium field on a non - rectangular mesh , and relax towards a ( perhaps approximately ) rectangular one .however , this approach is not feasible if the field has complex topology . in the case of a braided field which is of particular interest to the theory of the solar corona we find that for our realisation of such a field there is no escaping having at least a moderately distorted mesh in the final state .we proposed two possible extensions to the numerical method .the first was to increase the order of the finite differences used .it was found that for certain levels of deformation this can give an order of magnitude improvement in the quality of the force - free approximation obtained .it is therefore certainly a good approach to use in some circumstances .as the mesh became more and more highly deformed , the advantage of the scheme with 4th - order finite differences was lost for our test case .furthermore , we found that for relaxation of the braided field described in no appreciable improvement arose from using the 4th - order scheme .the other extension that we proposed to the scheme seems very promising . in section [ stokessec ]we presented an algorithm for calculating the curl of a vector field on an arbitrary mesh , based on stokes theorem .for increasing levels of mesh deformation , this performed progressively better than the finite difference methods .what s more , in all of our tests the resultant lorentz force had lower errors than that calculated by the traditional finite difference .in order for a relaxation experiment to remain accurate as it proceeds , the maximum allowed value of based on ( see eq .( [ alphamax ] ) ) must always remain greater than the maximum observed value of .we found that this is the case for the stokes - based down to at least an order of magnitude lower in than for the finite difference methods ( see fig .[ alpha_t ] ) .all of the above leads us to believe that the stokes - based algorithm is a highly promising one for improving the accuracy of lagrangian relaxation schemes . at present it has not been implemented ( i.e. the code does not act to minimise ) because this requires a complete re - writing of the implicit ( adi ) time - stepping , and a simple explicit implementation turns out to be prohibitively computationally expensive . 
however , our intended next step in this investigation is to implement this scheme , either by introducing the stokes - based current calculation as a correction term in the existing scheme or by employing a more sophisticated explicit time - stepping to reduce the computational expense to acceptable levels . we note that while the algorithm at present only uses the two nearest neighbour points in each direction , it could be extended to include further line integrals as corrections to the present formula for the current , in much the same way as is done by increasing the order of finite difference derivatives .
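to make the idea behind the stokes - based current calculation concrete , the sketch below computes the out - of - plane component of the curl at the nodes of a distorted two - dimensional mesh by dividing the circulation around the loop through the four nearest neighbours by the enclosed ( shoelace ) area . this is only a schematic two - dimensional illustration of the principle , not the three - dimensional algorithm of section [ stokessec ] , and the node stencil and integration rule are assumptions of the sketch .

```python
import numpy as np

def curl_z_stokes(x, y, vx, vy):
    """z-component of curl(v) at interior nodes of a distorted 2d mesh:
    circulation of v around the quadrilateral through the four nearest
    neighbours of each node, divided by the enclosed (shoelace) area."""
    nx, ny = x.shape
    cz = np.full((nx, ny), np.nan)
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            idx = [(i+1, j), (i, j+1), (i-1, j), (i, j-1)]     # counter-clockwise loop
            px = np.array([x[p] for p in idx]); py = np.array([y[p] for p in idx])
            ux = np.array([vx[p] for p in idx]); uy = np.array([vy[p] for p in idx])
            dxs = np.roll(px, -1) - px; dys = np.roll(py, -1) - py
            # trapezoidal line integral of v . dl over each straight edge
            circ = np.sum(0.5*(ux + np.roll(ux, -1))*dxs + 0.5*(uy + np.roll(uy, -1))*dys)
            area = 0.5*np.sum(px*np.roll(py, -1) - np.roll(px, -1)*py)
            cz[i, j] = circ / area
    return cz

# toy check on a smoothly distorted mesh with v = (-y, x), whose curl is exactly 2
n = 41
s = np.linspace(-1, 1, n)
X, Y = np.meshgrid(s, s, indexing='ij')
xd = X + 0.05*np.sin(np.pi*Y)       # distorted node positions
yd = Y + 0.05*np.sin(np.pi*X)
cz = curl_z_stokes(xd, yd, -yd, xd)
print(float(np.nanmax(np.abs(cz - 2.0))))   # should be near machine precision
```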
force - free magnetic fields are important in many astrophysical settings . determining the properties of such force - free fields especially smoothness and stability properties is crucial to understanding many key phenomena in astrophysical plasmas , for example energy release processes that heat the plasma and lead to dynamic or explosive events . here we report on a serious limitation on the computation of force - free fields that has the potential to invalidate the results produced by numerical force - free field solvers even for cases in which they appear to converge ( at fixed grid resolution ) to an equilibrium magnetic field . in the present work we discuss this problem within the context of a lagrangian relaxation scheme that conserves magnetic flux and identically . error estimates are introduced to assess the quality of the calculated equilibrium . we go on to present an algorithm , based on re - writing the operation via stokes theorem , for calculating the current which holds great promise for improving dramatically the accuracy of the lagrangian relaxation procedure .
the change with time in the size of the interval between two consecutive heartbeats is called heart rate variability ( hrv ). there are many ( not necessarily independent ) sources of hrv in people , but the variations are largely controlled by the autonomic nervous system through the action of both the sympathetic and the parasympathetic branches , while the main mechanical influences are respiration and blood pressure . it has been found that long term hrv shows noise .this behaviour is found in many dynamical systems and draw attention to long term hrv . the detrended fluctuation analysis ( dfa )was introduced by peng et al . to study the long range correlations found in hrv . a cross over was identified around a scale of 10 beats in all healthy subjects , signalling a change of dynamics when going from short to long time scales .since then dfa has been used to characterise hrv in healthy conditions and in the presence of heart disease .another promising approach is the analysis of hrv using wavelets , which is a mathematical technique specifically suited to analyse non stationary series .the wavelet transform extracts the cumulative amplitudes of fluctuations of data at each point in time for a given scale .ivanov et al . presented the cumulative variation amplitude analysis ( cvaa ) , where the inter beat series were treated with consecutive wavelet and hilbert transforms and an instantaneous amplitude is assigned to each inter beat interval .it was found that the same gamma distribution describes the distributions of instantaneous amplitudes at all scales and for all the healthy subjects in the study .further studies using wavelets on long term recordings have explored the possibility to define methods which could be used as markers of heart disease .the dfa and cvaa results suggest that there are intrinsic unknown dynamics underlying the long term behaviour of the healthy human heart .it has been shown by hausdorff and peng that it is extremely unlikely that the emergence of these complex patterns is due to having a system of many different independent influences each with their own timescale .the question of the origin of the universal long term behaviour of hrv remains open . in this article we present dfa and cvaa studies of long term hrv in the eastern oyster in conditions resembling those of their natural habitat . in the first section ,the circulatory system of the oyster is briefly described .the basic components of the system designed for the monitoring and acquisition of the heartbeat data are also presented . in the second section ,the mathematical principles of dfa and cvaa are reviewed and then applied to the analysis of the oyster s heartbeat data . in the last section , the conclusions and some general remarks are given .the eastern oyster is a fairly well studied mollusk which lives in coastal waters and lagoons from canada to mexico .it has an open circulatory system , i.e. 
, the blood moves not only in the arteries and veins but also throughout the tissues .there are two accessory hearts which beat a couple of times per minute , independently from the principal heart .the main heart has three chambers .the two auricles receive the blood from the gills and send it , about half a second later , to the ventricle .the automatism of the heart is of a diffuse nature and contractions originate at any point of the ventricle .contractions are not induced by impulses from the central nervous system and it is not known if there are any localised pacemakers .two types of vesicles are found in the nerve endings of the myocardium , but it is not clear if they correspond to a rudimentary version of the sympathetic and parasympathetic systems .the main external influences on the heart rhythm are the temperature , the level of oxygen and the salinity of water , while the main mechanical influences are the movement of the shell valves and the gills .when compared with people there is a factor of hundred in the size of the hearts , the period of respiration in the eastern oyster is two to three times longer and the heart beats about two times slower .the heartbeat receives perturbations from the shell valves and from the accessory hearts , which are not present in the case of the human heart . in summaryall the suspected leading causes of the variability of the heart rate in people are not present or are quite different in the case of the eastern oyster . the experimental set up was composed of a low power laser diode , a fibre optic bundle , a photo diode and the data acquisition and monitoring systems .we have used a laser diode of 4 mw and 632 nm .the radius of the laser beam was 1 mm . the fibre optic bundle had a length of 1.3 m and a cross section of 38 mm with a transmittance of approximately 60 percent at this wavelength .the active size of the eg&g judson j165sp r03m sc germanium photo diode was 3x3 mm .the voltage signal was recorded with an at mio 16 data acquisition card connected through a bnc2080 multiple channel interface board .the system was controlled with a labview program written by us .a sketch of the technique is shown in fig . 1 .we have studied a set of six oysters with a uniform size of 7 cm and with a relatively thin section of the shell on top of the heart .some of the oysters were measured several times , having as a result a set of 15 time series .there were at least 12 hours between measurements on the same mollusk .the oysters were kept in a big water tank under conditions resembling those of their natural habitat .a short time before the measurements , they were transfered to a small recipient on the focus of the optical set up .a system of pumps kept the water circulating and passing through other containers where the water was filtered and where the salinity , and temperature were controlled .the analyses shown below were performed on each of the inter beat time series of the oyster s heartbeat .the laser light was pointed onto the beating heart .the intensity of the reflected light increases as the thickness of the walls increases in each systole of the ventricle .hence , the inter beat of a cardiac signal measured with our laser technique corresponds to the time between two consecutive peaks ( see fig .1b ) . 
the monitored signals of the oyster s heartbeat are : i ) highly non stationary , ii ) non periodic , and iii ) irregular .further physiological perturbations on the cardiac signals such as gills and valves movements are clearly identified in the long term .this set of special characteristics of our cardiac signals exclude the application of algorithms currently used to calculate the inter beats in ecgs of humans .the most popular , are based on the qrs complex identification , which does not apply to the cardiac signals from the oyster . to find the inter beat intervals produced by the systole we have used the following method ( see fig .2 ) : ( i ) a set of boxes of the same width is used to find the maximum in each interval .( ii ) a secondary box of width centred at the limit of two contiguous boxes is used as a range of confidence because the inter beat period is not constant .( iii ) the value of and are fixed visually such that the efficiency for finding the peaks is optimal .( iv ) furthermore , after the calculation of the inter beat intervals ,the outliers are filtered with a similar procedure as suggested in ref .we obtained as a result of applying the algorithm , 15 time series with lengths varying from 10403 to 25802 inter beats .the detrended fluctuation analysis is a technique that permits to identify long range correlations in non stationary time series . as commented in ref . , dfa has been applied to the study of a broad range of systems , such as the human gait , dna sequences , the heartbeat dynamics , the weather , and even in economics .specially , in the analysis of natural inter beat heartbeat fluctuation , dfa has helped to discriminate healthy from heart diseased humans .it has provided also , a quantitative difference between old and young people .several works show the robustness of dfa , although improvements are still being done .dfa is a simple yet powerful tool for studying physiological data .the dfa is as follows .let be a time series .then ( i ) integrate , ( ii ) divide the time series in equal amplitude intervals , ( iii ) in each box of width do a polynomial fit of order ( it defines dfa analysis ) : , ( iv ) eliminate the polynomial trend in each box , ( v ) calculate the root mean squared fluctuation as a function of the intervals , and ( vi ) do steps ( i)(v ) for several box widths to find the functional relation between and .the presence of scaling in the original signal produces a straight line in a double log plot of versus .if the slope , , is 0.5 , the data is uncorrelated and corresponds to a random walk . a slope between 0.5 and 1 signals the presence of a long range power law , where corresponds to noise .for a slope bigger than 1 , the correlations no longer correspond to a power law .a value of 1.5 indicates brownian noise .healthy people have a slope of 1.5 for values of less than 10 beats , and a slope of 1 for time scales between 100 and 10000 beats . using dfa1 we found ( fig .3 ) that all the oysters in our study present noise behaviour for scales above , corresponding to beats .the average slope in this region is where the error is the standard deviation of all the samples . 
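the dfa recipe just outlined translates almost line by line into numpy; the sketch below is only a minimal illustration on a surrogate inter-beat series (the box widths, the dfa1 order and the random data are arbitrary choices, not the settings used for the oyster records).

import numpy as np

def dfa(x, box_sizes, order=1):
    # step (i): integrate the mean-subtracted inter-beat series
    y = np.cumsum(x - np.mean(x))
    fluctuations = []
    for n in box_sizes:
        n_boxes = len(y) // n                 # step (ii): boxes of width n
        rms = []
        for k in range(n_boxes):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            # steps (iii)-(iv): fit and remove a polynomial trend of the given order
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.mean((seg - trend) ** 2))
        # step (v): root-mean-square fluctuation for this box width
        fluctuations.append(np.sqrt(np.mean(rms)))
    return np.array(fluctuations)

# step (vi): the exponent alpha is the slope of log F(n) versus log n
rng = np.random.default_rng(0)
ibi = rng.normal(1.0, 0.1, 20000)             # surrogate inter-beat series, not oyster data
sizes = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(ibi, sizes, order=1)                  # order=1 corresponds to dfa1
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
print(alpha)                                  # ~0.5 for the uncorrelated surrogate data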
for shorter time scales , between 10 and 100 beats ,a slope is found .the slope at even shorter scales ( ) is also close to 1 in all cases , but the dfa method has potentially large intrinsic systematic effects in this range , making it difficult to extract reliable information .we found the same variation in the results from measurement to measurement when comparing different sets from the same oyster and sets of data from different oysters ( see table 1 ) .it is interesting to note that the respiration and the accessory hearts have independent oscillations with periods around 20 to 40 beats . in contrast , the shell valves present intermittent activity but , when active , the period is also in the 2030 beats range .these complex perturbations on the oyster s heartbeat could be the source of the slope at short time scales . to discard the possibility that polynomial trends could be the source of the cross over obtained in this work , we performed analyses of oyster s heartbeat using dfa for , and found in each case a cross over approximately in the same region indicated by dfa1 , although decreased down to 20% while keeping its same value .we also analysed our measurements using cvaa .this technique that is based on consecutive wavelet and hilbert transforms was applied for the first time in the study of natural heartbeat fluctuation .it was found that a common gamma distribution characterises a group of healthy people . on the other hand , in the case of a group of people suffering sleep apnea, it was found that such data collapse does not happens .even more , in some cases it was not possible to get a gamma distribution . in general , a gamma distribution is characteristic of physical systems out of equilibrium .hence , the results previously commented suggest that the heartbeat in healthy people owns an intrinsic underlying dynamics .mathematically , cvaa consists of the next steps : ( i ) choose adequate scales to analyse the data , ( ii ) from the original series , a set of /eries each at a different scale is obtained using a continuous wavelet transform .there are many wavelet families to choose to perform this step and several have been tried .each family eliminates local polynomial trends from the signal in a different way .the coefficients of the wavelet transformation in each scale reflect the cumulative variation of the signal .( iii ) then , each of the new time series is processed with a hilbert transform to extract the instantaneous amplitudes of the variations at each point in the series .( iv ) construct the time series and calculate the amplitudes , ( v ) finally , the histogram of these amplitudes is normalised to 1 to form a probability distribution , , which is then re - scaled such that and .remarkably , we found that each distribution of instantaneous amplitudes is fitted by a gamma distribution ( fig .4a ) . furthermore , as in the case of healthy people , the distributions for all the oysters in the study are well described by the same gamma distribution ( fig .4b ) , i.e. , there is a common parameter which describes the normalised distribution of instantaneous amplitudes from any oyster .this behaviour is found at scales and for all the wavelets analysed : daubechies ( moments 310 ) , gaussian ( moments 310 ) , meyer , morlet , and b spline biorthogonals ( decomposition moments 1 and 3 ) .the results obtained for the parameters of the gamma distributions were all very similar . 
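the cvaa steps (ii)-(v) listed above can be sketched as a wavelet convolution followed by a hilbert transform; the snippet below uses a mexican-hat wavelet, a single scale of 8 beats and surrogate data purely for illustration, whereas the published analysis employs daubechies, gaussian, meyer, morlet and b-spline wavelets at several scales. the last lines fit the rescaled amplitude histogram to a gamma distribution, the check discussed in the surrounding paragraphs.

import numpy as np
from scipy.signal import hilbert
from scipy.stats import gamma

def mexican_hat(scale, n_points):
    # mexican-hat (second derivative of a gaussian) mother wavelet on a discrete grid
    t = np.arange(n_points) - (n_points - 1) / 2.0
    u = t / scale
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def cvaa_amplitudes(x, scale):
    # step (ii): wavelet transform of the series at one scale (a plain convolution here)
    w = mexican_hat(scale, 10 * scale + 1)
    coeffs = np.convolve(x - np.mean(x), w, mode='same')
    # step (iii): hilbert transform -> instantaneous amplitude assigned to every beat
    return np.abs(hilbert(coeffs))

rng = np.random.default_rng(1)
ibi = rng.normal(1.0, 0.1, 20000)          # surrogate inter-beat series, not oyster data
a = cvaa_amplitudes(ibi, scale=8)          # scale of 8 beats, an illustrative choice
a = a / a.mean()                           # steps (iv)-(v): rescale the amplitudes
shape, loc, width = gamma.fit(a, floc=0)   # fit the normalised distribution to a gamma law
print(shape)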
as an example of the robustness of our results , in table 2are shown the values of the parameters of the fits to the gamma distributions performed with orthogonal , biorthogonal , and non orthogonal wavelets .the numerical value for the parameter in the eastern oyster is which is slightly lower than the value of found for healthy people during sleep hours by ivanov et al . using gaussian wavelets .using the dfa and cvaa methods we find long range correlations and scaling in the long term hrv of the eastern oyster .dfa shows noise behaviour at large scales and a cross over to a smaller slope for scales of the order of 300 beats for all oysters in the study .the cross over happens at a scale well above the region where dfa presents some bias .the source of the cross over seems to be the result of the complex interactions between the components of the circulatory system of the eastern oyster .models of this phenomena such as polynomial trends added linearly to a correlated signal do not seem to apply to the case of the eastern oyster .with cvaa we find that all oyster records collapse to a gamma distribution with the same numerical value of the parameter , , for a wide variety of wavelets and scales .these results are remarkably similar to those previously reported in the study of healthy people , in spite of the fact that the circulatory system of the eastern oyster and the influences it is exposed to are dramatically different from those in the case of people , pointing thus to an intrinsic origin of these complex patterns .characterisations of long term hrv are promising candidates for clinical prognostic tools making it vital to understand its origin in order to exploit fully this type of techniques .our results pose stringent constrains and offer new hints and challenges to models attempting to describe the long term dynamics of the heart .we thank d. vera and j. bante for their technical assistance and g. oskam for fruitful discussions .this work was partially supported by conacyt grant 28387e .( * a * ) a low power laser is pointed to the beating heart .the walls reflect more light when contracted than during the diastole .( * b * ) the periodic variation of light intensity is measured with a photo diode whose voltage output varies with time capturing the beating of the oyster s heart . herethe stars show the peaks found by our algorithm .this is a schematic representation of the way our algorithm for finding heartbeat peaks works .the cardiac signal is divided with a set of boxes of equal length . due to the fact the heartbeat period is not constant , a secondary set of boxes of width localised at the limit of two contiguous boxes . in the plot and common values which optimises the search of peaks .see text for more details .( * a * ) result of the dfa performed on an inter beat series from an oyster showing the cross over behaviour .the error bars represents the fluctuations in in a region around each .the slope for longer time scales , corresponds to the noise behaviour which is also found in healthy human hearts . 
for shorter scales a behaviour close to white noise is found indicating that the signal is almost completely random in these time scales .( * b * ) the value of and for all the samples .the solid lines correspond to and .the error bars reflect the variation of the slope when changing the start and end points of the fitting range .the bigger error bars in reflect the statistical fluctuation at large .note that the cross over does not happen at a point but that there is a transition region ( table 1 ) .( * a * ) result of the cumulative variation amplitude analysis performed on a heart series from an oyster at a scale corresponding to beats using the fourth wavelet of the daubechies family .the points are the data , while the solid line is the result of a fit to a gamma distribution .the parameters obtained from the fit are and with a of 0.3 .( * b * ) the data points , corresponding to the same scale and wavelet , for all oyster records .all of them collapse to a single gamma distribution .the solid line corresponds to the same value of the parameters and shown in * a * ( table 2 ) .table 1 : cross over region and slopes before ( ) and after ( ) the cross over for each sample .the error bars reflect the variation of the slope when changing the start and end points of the fitting range .the bigger error bars in reflect the statistical fluctuation at large .
characterisations of the long term behaviour of heart rate variability in humans have emerged in the last few years as promising candidates to become clinically significant tools . we present two different statistical analyses of long time recordings of the heart rate variation in the eastern oyster . the circulatory system of this marine mollusk has important anatomical and physiological dissimilarities in comparison to that of humans and it is exposed to dramatically different environmental influences . our results resemble those previously obtained in humans . this suggests that in spite of the discrepancies , the mechanisms of long term cardiac control in both systems share a common underlying dynamic . dfa , wavelets , eastern oyster , heartbeat , laser 87.19.hh , 87.80.tq , 89.20.ff , 89.75.da
this paper summarizes a talk presented at the albert einstein century international conference , held in paris , france , in july 2005 to mark the centennial of einstein s `` miracle '' year 1905 . strictly speaking, black holes are a consequence of another einstein miracle that did not occur in 1905 and still had to wait ten years , namely general relativity . with his theory of special relativity , however , einstein had laid the ground work already in 1905 for his theory of general relativity . andeven more strictly speaking , speculations on black holes predate even the special theory of relativity by over a century . in the late 1700 s johnmitchell in england and jean simon laplace in france independently realized that celestial bodies that are both small and massive may become invisible .the basis for this speculation is the observation that the escape speed where and are the stellar mass and radius , is independent of the mass of the test particle . within newton s particle theory of lightit seems quite reasonable that this should also apply to light , in which case light can no longer escape the star if the escape speed exceeds the speed of light , evidently this happens when meaning that stars with a large enough mass and a small enough radius become `` dark '' .laplace went on to speculate that such objects may not only exist , but even in as great a number as the visible stars . with the demise of the particle theory of light , however , these speculations also lost popularity , and dark stars remained obscure until well after the development of general relativity . in 1915albert einstein published his field equations of general relativity , one could argue that superficially this equation may not be all that different from the newtonian field equation the left - hand - side of the newtonian field equation ( [ field_newt ] ) features a second derivative of the newtonian potential , and the right - hand - side contains matter densities .quite similarly the einstein tensor on the left - hand - side of einstein s field equations ( [ field_gr ] ) contains second derivatives of the fundamental object of general relativity , the spacetime metric . to complete the analogy, the stress - energy tensor on the right - hand - side contains matter sources . for the vacuum solutions in which we are interested in this article , the stress - energy tensor vanishes , .unfortunately the einstein tensor contains many lower - order , non - linear terms , making einstein s equations a complicated set of ten coupled , quasi - linear equations for the ten independent components of the spacetime metric . clearly , it is very difficult to find meaningful exact solutions .karl schwarzschild , returning fatally wounded from the battle fields of world war i , was nevertheless able to derive a fully non - linear solution in spherical symmetry within a year of einstein s original publication . written as a line element , the spacetime metric describing this solution is this solution is the direct analog of the newtonian point - mass solution and describes the strength of the gravitational fields , expressed by the spacetime metric , created by a point mass at a distance . the relativistic schwarzschild solution ( [ schwarzschild ] )is significantly more mysterious than its newtonian analog , though . 
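the mitchell-laplace "dark star" condition described above, namely that the newtonian escape speed reaches the speed of light when r = 2gm/c^2, is easy to evaluate; the sketch below (python) uses one solar mass and, as an assumed illustrative value, a few-million-solar-mass object of the kind discussed later for the galactic centre.

G = 6.674e-11        # gravitational constant  [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light          [m/s]
M_sun = 1.989e30     # solar mass              [kg]

def dark_star_radius(mass):
    # radius at which the newtonian escape speed sqrt(2GM/R) equals c,
    # i.e. the mitchell / laplace condition R = 2GM/c^2
    return 2.0 * G * mass / c ** 2

print(dark_star_radius(M_sun))          # ~3 km for one solar mass
print(dark_star_radius(4.0e6 * M_sun))  # ~1.2e10 m for an assumed 4e6 solar-mass object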
particularly puzzling is the `` schwarzschild radius '' at which the metric ( [ schwarzschild ] ) becomes singular .the existence of this singularity obscured the physical interpretation of this solution , and in fact schwarzschild himself died believing that his metric was physically irrelevant .other aspects also contributed to the fact that the astrophysical significance of the schwarzschild solution ( [ schwarzschild ] ) remained unappreciated for decades .for one thing it was not clear how such an object could possibly form , even though in 1939 oppenheimer and snyder published a remarkable analytic calculation describing the `` continued gravitational contraction '' of a dust ball that leaves behind the schwarzschild metric ( [ schwarzschild ] ) .this calculation , serving as a crude model of stellar collapse , made several simplifying assumptions : that the matter has zero pressure ( so that it can be described as dust ) , that the angular momentum is zero , and that the spacetime is spherically symmetric .critics maintained that in any realistic astrophysical situation none of these assumptions would hold , and that any deviation from these idealizations could easily halt the collapse , preventing continued gravitational contraction .the absence of angular momentum seems particularly troubling .since the schwarzschild solution does not carry any angular momentum it was completely unclear how any astrophysical object , which would necessarily carry _ some _ angular momentum , could collapse and form such a solution .finally , no astronomical observations had revealed phenomena requiring a gravitationally collapsed object as an explanation .given that at least some of the astronomical community had been quite reluctant to accept the much less exotic white dwarfs as an astrophysical reality , it is not surprising that a solution to einstein s field equations that became singular at finite radius was not immediately embraced as a celestial object .all three of these factors a lack of understanding of the singularity at the schwarzschild radius , general skepticism concerning gravitational collapse , and the absence of astronomical observations of gravitationally collapsed objects resulted in the fact that the great astrophysical significance of schwarzschild s solution ( [ schwarzschild ] ) remained unappreciated for almost 50 years .all of this changed in the 1960 s , the `` golden age of black hole physics '' .the golden age of black hole physics was ushered in when advances in our theoretical understanding of the schwarzschild geometry and gravitational collapse in general coincided with new astronomical observations of highly energetic objects that clearly pointed to a gravitationally collapsed object as their central engine . to begin with, it became clear that the apparent singularity at the schwarzschild radius ( [ r_ss ] ) is simply a harmless coordinate singularity , not completely unlike the poles of a sphere when described in terms of longitude and latitude .this was demonstrated by , who introduced a new coordinate system that remains perfectly regular at . instead of being a mysterious singularitythe schwarzschild radius now emerged as a point of no return : a one - way membrane called `` event horizon '' through which nothing , not even light can leave the collapsed region inside .inside this event horizon lurks a true singularity at which the curvature of spacetime becomes infinite . 
.the light cones of an observer falling into the black hole become increasingly tilted , until they completely tip over at the event horizon.,width=384 ] consider the unfortunate observer in the spacetime cartoon of figure [ fig1 ] .as long as he is far away from the event horizon an `` outgoing '' light ray emitted away from the horizon can propagate toward larger distance almost as easily as an `` ingoing '' ray emitted toward the horizon can propagate toward smaller distance . at this point the observer s local light cone , which is the section of spacetime on whichlight propagates , is almost upright , and he can easily send signals to a buddy even further away . as our first observer approaches the event horizon , however , his light cones become increasingly tilted , and it becomes increasingly difficult for outgoing light rays to actually move outward . at the event horizon the light cone tips over .an `` outgoing '' light ray emitted exactly on the event horizon will hover at the schwarzschild radius forever , and inside this point all light rays , even `` outgoing '' ones , immediately move inward .nothing can emerge from the event horizon and everything inside will soon reach the singularity that lurks at the black hole s center .our observer will cross the event horizon and reach the singularity in finite proper time .as seen by his buddy far away , his signals slowly fade away as he approaches the event horizon . note also that the in - falling observer would not necessarily observe anything special occurring at the event horizon .ultimately the observer will be torn apart by the increasingly strong tidal forces , but that may occur inside or outside the event horizon , depending on the black hole s mass .it is only through careful experiments with flashlights , studying the properties of outgoing light - rays , that the observer would be able to tell that he has crossed an event horizon .it is interesting to note that , by sheer coincidence , the location of the event horizon ( [ r_ss ] ) as expressed in schwarzschild coordinates coincides exactly with the radius of a `` dark star '' ( [ laplace ] ) as determined by mitchell and laplace . almost simultaneously with the improved understanding of the schwarzschild geometry , it became clear that gravitational collapse is much more generic than was believed earlier . in 1963 roy kerr discovered a generalization of the schwarzschild solution ( [ schwarzschild ] ) that carries angular momentum . with this discoverythe argument that objects carrying angular momentum can not collapse gravitationally immediately collapsed itself .the first numerical relativity simulations also demonstrated that pressure may not be able to prevent gravitational collapse .further progress in our understanding of gravitational collapse arrived with a number of theorems .perhaps most importantly roger penrose showed that the formation of a spacetime singularity is generic after a so - called trapped surface has formed .another set of theorems are collectively called the `` no - hair '' theorems ; in essence these theorems state that black holes have no distinguishing features .that is to say that kerr s solution describing a rotating black hole is the _ only _ solution describing a rotating black hole all stationary black holes are kerr black holes , parametrized only by their mass and the angular momentum , but from an astrophysical perspective this is irrelevant since any charge would very be neutralized very quickly . 
black holes can be perturbed , but any perturbation is quickly radiated away , leaving behind a kerr black hole . this is a truly remarkable statement : it means that the structure of black holes is _ uniquely _ determined by their mass and angular momentum alone , completely independently of what formed the black hole in the first place . this observation led chandrasekhar to eloquently conclude in his nobel lecture : `` this is the only instance we have of an exact description of a macroscopic object ... they are , thus , almost by definition , the most perfect macroscopic objects there are in the universe . '' the technical advances during world war ii led to the development of radio astronomy in the post - war years . within a few years several discrete sources had been detected , but with few exceptions the origin of these sources remained a mystery . it was generally believed that the sources were otherwise dark `` radio stars '' in our galaxy , since at extra - galactic distances they would have to be enormously energetic . this opinion started to change when the positioning of these `` quasars '' improved , and some were identified with galaxies . the true breakthrough arrived when lunar occultations of the radio source 3c273 led to its identification with another object whose optical spectrum showed a redshift of clearly establishing it as an extra - galactic object . this realization is documented in the remarkable march 16 issue of nature . the same volume contains a theoretical paper by , in which the authors conclude : `` our present opinion is that only through the contraction of a mass of to the relativity limit can the energies of the strongest sources be obtained . '' these events ushered in `` relativistic astrophysics '' as a new field .
as a sign of the time the first _ texas symposium on relativistic astrophysics _was convened in december of 1963 .the new field also attracted many new people into the field , including john wheeler .in fact , it was john wheeler who in 1967 coined the term _ black hole _ , marking the transition from speculative ideas on dark stars to the astrophysical reality of black holes .clearly , we do not have any absolutely water - tight proof that black holes exist in our universe .however , we do have some extremely convincing evidence that makes black holes by far the most conservative explanation of the observed phenomena .observations clearly point to two different populations of black holes .one of these populations are `` stellar - mass black holes '' , which have masses in the order of 10 ; another group are `` supermassive black holes '' , which have masses in the order of .i will discuss these two groups in more detail below .there is also some observational evidence for `` intermediate mass black holes '' of about , and finally there are speculations on `` primordial black holes '' that would be left over from the big bang .even though these different kinds of black holes have vastly different masses , they are , from a mathematical perspective , the exact same kind of animal : a kerr solution to einstein s field equations with certain values for their mass and angular momentum . from an astrophysical perspectivethey differ not only in their mass and angular momentum , but also in how we can observe them and in how they form , i.e. in their evolutionary history .the prime example of a stellar - mass black hole is cygnus x-1 , which was first discovered in the data of the x - ray satellite uhuru , launched on dec 12 1970 .the case for cygnus x-1 as a black hole can be summarized as follows : to begin with , cygnus x-1 shows very short time variations in the x - ray signal ( in the order of ms and less ) .this implies that cyg x-1 is a very small object , in the order of m. it is also the unseen binary companion to a 9th magnitude supergiant star called hde226868 .the doppler curve of this star shows that the binary has an orbital period of about 5.6 days ; from the amplitude of the dopplershift we can determine the mass function to be about .combining this with the mass of supergiant stars we can derive a lower limit on the mass of cyg x-1 , an independent argument involving some constraints on the binary s orbit arrives at a similar limit .given its size , cyg x-1 has to be a `` compact object '' : a white dwarf , a neutron star , or a black hole .even under very conservative assumptions both white dwarfs and neutron stars have maximum masses safely below the lower limit of cyg x-1 s mass .this leaves a black hole as the most likely explanation .cyg x-1 is an example of a _ stellar - mass _ black hole , which form the end - point of the evolutionary cycle for massive stars . when massive stars run out of nuclear fuel they can no longer support themselves against gravitational contraction . for stars with masses larger than about 20 can halt the subsequent gravitational collapse , which therefore leads to prompt black hole formation . for stars with masses between about 8 and 20 collapse can be halted when the density of the compressed stellar material reaches nuclear densities .the resulting shock - wave launches a `` core - collapse '' supernova and leaves behind a newly formed neutron star .this neutron star may either remain stable , or it may form a black hole at a later time . 
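the two quantitative ingredients of the cyg x-1 argument, the light-travel-time bound on the source size and the binary mass function derived from the doppler curve, can be sketched as follows; the millisecond time scale, the radial-velocity amplitude and the orbit details are assumed illustrative numbers, not the measured values quoted in the original papers.

import numpy as np

G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

# light-travel-time argument: variability on a timescale dt bounds the emitting
# region to roughly R < c * dt
dt = 1.0e-3                        # assumed millisecond-scale x-ray variability
print(c * dt)                      # ~3e5 m, i.e. a few hundred kilometres

# binary mass function f = P K^3 / (2 pi G) = (M_x sin i)^3 / (M_x + M_c)^2
P = 5.6 * 86400.0                  # orbital period from the doppler curve [s]
K = 75.0e3                         # radial-velocity semi-amplitude [m/s] (assumed)
f = P * K ** 3 / (2.0 * np.pi * G) / M_sun
print(f)                           # mass function in solar masses (~0.25 for these inputs)

# given a companion mass M_c and an inclination i, the compact-object mass M_x
# follows by solving (M_x sin i)^3 = f * (M_x + M_c)^2, which yields the lower limit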
a `` delayed collapse '' to a black holecan be triggered by a variety of mechanisms , including fall - back of matter and phase transitions in the neutron star interior .finally , two neutron stars may collide .if the remnant exceeds the maximum allowed mass for rotating neutron stars this coalescence will also lead to the formation of a stellar - mass black hole .we currently know of about 20 confirmed black hole binaries , but presumably that is only a tiny fraction of the total number of stellar - mass black holes in our own galaxy .( figure from .).,width=384 ] perhaps the most convincing evidence for a black hole comes from the center of our own galaxy , sagittarius a .observations of our galactic center in the near infrared reveal several stars that orbit a central object in bound orbits .particularly compelling is the orbit of the star s2 , about two - thirds of which has now been mapped with increasingly accurate positioning ( see figure [ fig2 ] ) .the emerging orbit is a kepler orbit with a period of about years and a semi - major axis of about 4.62 mpc . from kepler s third lawwe can conclude that the enclosed mass is s2 s orbit also has a significant eccentricity of . at pericenterits distance to the central object is only 124 au .this implies that the central object harbors an enormous mass in a very small volume .the most conservative explanation for such an object which also must have remained stable over the lifetime of the galaxy is again a black hole .sagittarius a is an example of a _ supermassive _ black hole .there is some very convincing evidence for supermassive black holes in other galaxies as well for example the maser observations from ngc 4258 ( also known as m106 ) .in fact , there is evidence that supermassive black holes lurk at the cores of most galaxies .it is less clear exactly how supermassive black holes form .many different routes may lead to the formation of massive black holes in active galactic nuclei , but which of these routes nature tends to take is still under debate .a constraint comes from the recent observation of quasars at redshift in the sloan digital sky survey . if these quasars are indeed powered by supermassive black holes, this implies that the latter must have formed very quickly in the early universe .one model that may account for that is accretion onto seed black holes that form in the collapse of first - generation ( pop .iii ) stars . as a function of separation of a test mass in orbit about a newtonian point mass ( left panel ) and a schwarzschild black hole of mass ( right panel ) .the solid lines denote contours of constant orbital angular momentum .extrema of these contours identify circular orbits , marked by the dashed lines .the circular orbits are stable if the extremum is a minimum , otherwise they are unstable.,title="fig:",width=288 ] as a function of separation of a test mass in orbit about a newtonian point mass ( left panel ) and a schwarzschild black hole of mass ( right panel ) .the solid lines denote contours of constant orbital angular momentum .extrema of these contours identify circular orbits , marked by the dashed lines .the circular orbits are stable if the extremum is a minimum , otherwise they are unstable.,title="fig:",width=288 ] most observational evidence for black holes to date is based on the argument `` a lot of mass in a tiny volume '' , which leaves black holes as the most conservative explanation . 
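the enclosed-mass estimate from s2's orbit is a direct application of kepler's third law; in the sketch below the semi-major axis of 4.62 mpc and the pericentre distance of 124 au come from the text, while the orbital period (about 15 years) and the eccentricity (about 0.87) are assumed values inserted only to make the numbers come out.

import numpy as np

G = 6.674e-11
M_sun = 1.989e30
mpc = 1.0e-3 * 3.086e16          # one milliparsec in metres
yr = 3.156e7                     # one year in seconds
au = 1.496e11

# kepler's third law: M_enclosed = 4 pi^2 a^3 / (G P^2)
a = 4.62 * mpc                   # semi-major axis of s2's orbit (quoted in the text)
P = 15.2 * yr                    # orbital period (assumed illustrative value)
M = 4.0 * np.pi ** 2 * a ** 3 / (G * P ** 2)
print(M / M_sun)                 # roughly 4e6 solar masses

e = 0.87                         # eccentricity (assumed, consistent with the quoted pericentre)
print(a * (1.0 - e) / au)        # ~124 au at pericentre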
also , most observations to date only provide information about the black hole mass , and not about the angular momentum . clearly it would be desirable to go beyond that .a number of efforts are under way to find observational evidence for certain black hole characteristics .one such characteristic is the absence of a stellar surface .x - rays emitted by black hole candidates indeed shows some differences from that emitted by neutron stars .these differences can be explained in terms of an accretion flow hitting the stellar surface in the case of neutron stars , but freely falling through the horizon in case of the black holes .another effort aims at resolving the event horizon of sgr a , which would show up as a `` black hole shadow '' since it absorbs all radiation emitted behind it .an interesting idea for the measurement of the black hole angular momentum is based on the concept of the _ innermost stable circular orbit _ , or isco for short .consider a test mass in orbit about a point mass .a circular orbit can be found by identifying an extremum of the test mass s binding energy at constant orbital angular momentum ; a minimum corresponds to a stable circular orbit , while a maximum corresponds to an unstable orbit . in newtonian physics the binding energy as a function of separation given by where .to find an extremum of the binding energy we differentiate with respect to at constant and set the result to zero , which we recognize as kepler s third law ( since ) .evidently we can find a circular orbit for arbitrary , and the equilibrium energy of these circular orbits results from inserting ( [ l ] ) back into ( [ newt_energy ] ) , we can also verify that all these extrema are minima , meaning that the orbits are stable .alternatively , we can identify the orbits graphically as in the left panel in fig .[ fig3 ] . for a test mass in orbit about a schwarzschild black hole the binding energy is given by where is the schwarzschild radius .taking derivatives we would find that circular orbits exist only for radii .moreover , these orbits are stable only outside which therefore marks the isco for a test particle orbiting a schwarzschild black hole ( see also the right panel in fig .[ fig3 ] ) .the presence of an isco is of great relevance for black hole observations because most of the emitted radiation is believed to originate from an accretion disk . in this accretion disk particlesfollow almost circular orbits as they spiral toward the black hole .since circular orbits are unstable inside the isco , the accretion disk can only exist outside .accordingly , the doppler shift of spectral lines is limited by the speed of matter at the isco .the key point is that the location of the isco depends on the black hole s angular momentum ; it is at only for a schwarzschild black hole with , and can get much closer to the event horizon for a spinning kerr black hole . fora spinning black hole the accretion disk may therefore extend closer to event horizon , and the correspondingly higher speeds result in a greater broadening of the emitted spectral lines .some results based on this idea have been reported in .even if successful , however , these techniques can only determine the global parameters and .clearly it would be desirable to map out the local properties of the spacetime geometry around a black hole .our best chance of doing that comes with gravitational wave observations .maxwell s equations predict that accelerating charges emit electromagnetic radiation . 
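the isco argument sketched above for a schwarzschild black hole can be reproduced numerically by scanning the specific energy of circular geodesics, e(r) = (1 - 2m/r)/sqrt(1 - 3m/r) in units g = c = m = 1, and locating its minimum; this textbook expression is inserted here as an assumption, since the paper leaves the formula implicit.

import numpy as np

def circular_orbit_energy(r):
    # specific energy of a circular geodesic around a schwarzschild black hole
    # (units G = c = 1, black hole mass M = 1; circular orbits exist only for r > 3)
    return (1.0 - 2.0 / r) / np.sqrt(1.0 - 3.0 / r)

r = np.linspace(3.2, 50.0, 200000)
E = circular_orbit_energy(r)
r_isco = r[np.argmin(E)]        # the innermost *stable* circular orbit sits at the minimum
print(r_isco)                   # ~6, i.e. r_isco = 6 GM/c^2 = 3 schwarzschild radii
print(1.0 - E.min())            # ~0.057: binding energy released by the inspiral to the isco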
for an electric dipole ,the emitted power is given by the lamor formula where is the electric dipole moment , its second time derivative , and where we sum over repeated indices .einstein s equations similarly predict that accelerating masses emit gravitational radiation .one might expect that the power emitted from a gravitational wave source is given by an expression similar to lamor s formula . however , the analog of the electric dipole moment is the mass dipole moment , and its first time derivative the total momentum . for an isolated systemthe total momentum is constant , so that the second derivative of the dipole moment vanishes there is no gravitational dipole radiation .the first and often dominant contribution to gravitational radiation comes from the quadrupole term .the equivalent of the lamor formula in general relativity is therefore here is the reduced quadrupole moment the bracket denotes averaging over several characteristic periods of the system , and the triple dot denotes the third time derivative . for a strong signal of gravitational radiationwe evidently need large and rapidly changing quadrupole moments , which brings to mind binary systems . to estimate for a binary system we evaluate for a newtonian binary and find here is the binary s total mass , the reduced mass and the semi - major axis , andwe have also used kepler s law to eliminate the orbital frequency . inserting numberswe find that the only binary systems that emit appreciable amounts of gravitational radiation have huge masses and very small binary separations .the most promising candidates are therefore _ compact binaries _ consisting of black holes or neutron stars .the loss of energy due to the emission of gravitational radiation leads a shrinking of the binary s orbit . to see this we compute where is the generalization of ( [ newt_bind ] ) . computing the loss of angular momentum we would also find that that the emission of gravitational radiation leads to a reduction of the binary s eccentricity , i.e. to a circularization of its orbit .a binary inspiral then proceeds as illustrated in fig .[ fig4 ] . presumably the binary starts out at a large binary separation .the emission of gravitational radiation leads to a continuous decrease in the binary separation , and also to a decrease in the binary s eccentricity . at sufficiently late timeswe may therefore approximate the binary orbit as circular , except for the slow inspiral . as the binary separation shrinks , both the amplitude and the frequency of the emitted gravitational wave signal increases .this leads to the typical `` chirp '' signal sketched in fig .[ fig4 ] . in analogy to our discussion of point masses orbiting a single black hole , the binary will at some point reach an isco .inside this separation it is energetically favorable for the binary companions to abandon circular orbits , and instead plunge toward each other and merge . in the final `` ring - down '' phasethe remnant will settle down quickly into an axisymmetric equilibrium object .even for the most promising sources of gravitational radiation any astrophysical signal that we might hope for is going to be extremely weak . to see this we estimate the effect of a gravitational wave on the spacetime metric . if the spacetime is almost flat , we may write , where is the flat minkowski metric and where is the small perturbation that we observe as a gravitational wave . 
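for a circular newtonian binary the quadrupole formula quoted above leads to standard closed-form expressions for the radiated power and the remaining inspiral time; the sketch below evaluates them for two 1.4 solar-mass stars at illustrative separations (the specific numbers are examples, not values taken from the paper).

import numpy as np

G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30

def gw_luminosity(m1, m2, a):
    # quadrupole-formula luminosity of a circular newtonian binary:
    # L = (32/5) G^4 m1^2 m2^2 (m1+m2) / (c^5 a^5)
    return 32.0 / 5.0 * G ** 4 * m1 ** 2 * m2 ** 2 * (m1 + m2) / (c ** 5 * a ** 5)

def inspiral_time(m1, m2, a):
    # time to coalescence of a circular binary driven by quadrupole radiation:
    # t = (5/256) c^5 a^4 / (G^3 m1 m2 (m1+m2))
    return 5.0 / 256.0 * c ** 5 * a ** 4 / (G ** 3 * m1 * m2 * (m1 + m2))

m = 1.4 * M_sun                         # two neutron-star-like masses (illustrative)
print(gw_luminosity(m, m, 1.0e5))       # ~2e45 W at a separation of 100 km
print(inspiral_time(m, m, 1.0e5))       # ~0.4 s left before merger at 100 km
print(inspiral_time(m, m, 1.0e9))       # ~1e8 yr from a separation of a million km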
for a perturbation caused by a distant gravitational wave source we have here the symbol denotes the `` transverse traceless '' part of the corresponding tensors , is the distance from the observer to the gravitational wave source , and the quadrupole moment is evaluated at the retarded time .we again evaluate this term for a newtonian binary , and estimate the gravitational wave amplitude to be in the order of where we have used kepler s third law to eliminate and where is the schwarzschild radius ( [ r_ss ] ) of the mass .evidently we expect the strongest gravitational wave signal when the binary s semi - major axis is small , i.e. close to the isco where ( if we neglect factors of a few ). consider , for example , a stellar - mass compact binary that is coalescing somewhere in the virgo cluster .for such a binary , is in the order of a few km , and the distance is about 15 mpc or a few km .we then have .since the corrections are added to the background metric , which is of order unity , the relative change in the spacetime metric is exceedingly small , and the proposal to measure these perturbations remarkably ambitious . the weakness of any gravitational wave signal that we may realistically expect has two important consequences for the prospect of observing them : we need extremely sensitive gravitational wave detectors , and we need accurate models of any potential sources to aid in the identification of any signal in the noisy output of the detector .we will discuss both of these aspects in the following two sections .a new generation of gravitational wave observatories is now operational . basically , these instruments are gigantic michelson - morley interferometers .the two ligo observatories in the us , for example , have an arm - length of 4 km .somewhat smaller instruments exist in italy ( virgo ) , germany ( geo ) , and japan ( tama ) , and another observatory is currently under construction in australia ( aciga ) .the idea is that a gravitational wave passing through these interferometers will slightly distort the relative length of the two perpendicular arms .tracking these distortions with the help of laser interferences as a function of time should reveal the passing gravitational wave signal .the ground - based detectors mentioned above all have arm - lengths in the order of a kilometer , which makes them sensitive to the gravitational radiation emitted from stellar - mass black holes .a space - based gravitational wave antenna lisa with an arm - length of several million kilometer is being planned ; this instrument would be sensitive to gravitational radiation emitted from supermassive black holes .as we have seen in the previous section an astrophysical gravitational wave signal that we might realistically expect will lead to a tiny perturbation of the spacetime metric and hence to only a minuscule distortion in the arm - lengths of the interferometers in fact , only a tiny fraction of the size of the nucleus of a hydrogen atom .it is therefore extremely difficult to reduce the dominant sources of noise seismic , thermal and photon shot - noise and increase the signal - to - noise ratio so that an astrophysical source can be identified unambiguously .it is therefore truly remarkable that the ligo collaboration has recently achieved its design goal of a strain sensitivity exceeding for a large part of its frequency range ( see fig .[ fig5 ] ) . 
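the order-of-magnitude strain estimate discussed above, h of order r_s1 r_s2 / (r a), maximised when the separation approaches the isco, can be evaluated directly; the neutron-star masses, the isco-like separation and the 4 km arm length below are assumed illustrative inputs.

G = 6.674e-11
c = 2.998e8
M_sun = 1.989e30
Mpc = 3.086e22

def schwarzschild_radius(m):
    return 2.0 * G * m / c ** 2

m1 = m2 = 1.4 * M_sun                    # illustrative neutron-star masses
r = 15.0 * Mpc                           # distance to the virgo cluster
a = 6.0 * schwarzschild_radius(m1 + m2)  # separation near the isco (illustrative)

# order-of-magnitude strain h ~ R_s1 * R_s2 / (r * a)
h = schwarzschild_radius(m1) * schwarzschild_radius(m2) / (r * a)
print(h)                                 # ~1e-21 within a factor of a few

L = 4.0e3                                # interferometer arm length [m]
print(h * L)                             # ~3e-18 m, far smaller than an atomic nucleus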
given this sensitivity , and assuming a signal - to - noise ratio of eight, ligo is now able to detect binary neutron stars to a distance somewhat greater than 10 mpc . for binary black holes with a slightly larger massthis range is also slightly larger , and already includes the virgo cluster. whether or not the current ligo observatory , ligo i , will be able to detect a binary black hole system or any other source of gravitational radiation depends on how often such binaries coalesce in our or our neighboring galaxies .estimates for binary neutron star systems can be based on the statistics of known binaries , while estimates for binary black hole systems , which have never been observed , are typically based on population synthesis calculations . according to these estimates it is not completely impossible that ligoi will observe a compact binary , and it seems almost certain that an advanced ligo ii observatory , as it is currently being planned , will see many such sources .we finally point out that a gravitational wave detector tracks the amplitude of a gravitational wave at one particular point in space , but , unlike a telescope , does not produce an image .a single detector therefore can not position any source .two detectors can position a source to a ring in the sky , and with the world - wide network of detectors there is hope that any potential source can be positioned to within a reasonable accuracy .different approximations can be used to model the inspiral of a compact binary in its different phases . for the initial inspiral , while the binary separation is sufficiently large and the effects of relativistic and tidal interactions sufficiently small , post - newtonian point - mass calculations provide excellent approximations .the very late stage can be modeled very accurately with the help of perturbation theory .neither one of these approximations can be used in the intermediate regime around the isco during which the binary emits the strongest gravitational wave signal .the most promising tool for the modeling of this dynamical phase of the binary coalescence and merger is numerical relativity .numerical relativity calculations typically adopt a 3 + 1 decomposition . in such a decomposition the four - dimensional space is carved up into a foliation of three - dimensional spatial slices , each one of which corresponds to an instant of constant coordinate time .einstein equations for the four - dimensional spacetime metric can then be rewritten as a set of three - dimensional equations for the three - dimensional , spatial metric on the spatial slices , as well as its time derivative .the general coordinate freedom of general relativity has an important consequence for the structure of these three - dimensional equations . 
since there are three space coordinates and one time coordinate , we can choose four of the ten independent components of the symmetric spacetime metric freely .but that means that the ten equations in einstein s equations ( [ field_gr ] ) can not be independent otherwise the metric would be over - determined .four of the ten equations in einstein s equations must be redundant .in the framework of a 3 + 1 decomposition these four equations are _ constraint equations _ that constrain the gravitational fields within each spatial slice , while the remaining six _ evolution equations _ govern the time - evolution of from one slice to the next .the equations are compatible , meaning that fields that satisfy the constraint equations at one instant of time continue to satisfy the constraints at all later times if the fields are evolved with the evolution equations .this structure of the equations is very similar to that of maxwell s equations , where the `` div '' equations constrain the electric and magnetic field at any instant of time , while the `` curl '' equations govern the dynamical evolution of the fields .finding a numerical solution to einstein s equations usually proceeds in two steps .in the first step the constraint equations are solved to construct _ initial data _ , describing the gravitational fields together with any matter or other sources at some initial time , and in the second step these data are _ evolved _ forward in time by solving the evolution equations .one of the challenges in constructing initial data results from the fact that the constraint equations determine only some of the gravitational fields .the remaining fields are related to the degrees of freedom associated with gravitational waves , which depend on the past history and can not be determined from einstein s equations at only one instant of time . instead, these `` background fields '' are freely specifiable and have to be chosen before the constraint equations can be solved .the challenge , then , lies in making appropriate choices that reflect the astrophysical scenario one wishes to model . for the modeling of binary black holes we would like to construct initial data that model a binary at a reasonably close binary separation outside the isco as it emerges from the inspiral from a much larger separation .we expect such a binary to be in `` quasi - equilibrium '' , meaning that the orbit is circular ( except for the slow inspiral ) and that the individual black holes are in equilibrium . a number of different approaches have been pursued , but currently the best approximations to quasi - equilibrium black holes are the models of .probably there are ways to improve these data , in particular as far as the background fields and the resulting `` gravitational wave content '' are concerned , but even without these improvements the current models are probably excellent approximations . ) from a binary black hole coalescence as recently computed by pretorius ( private communication ) .the right panel shows the trajectory of the black hole horizons in the calculation of .,title="fig:",width=297 ] ) from a binary black hole coalescence as recently computed by pretorius ( private communication ) .the right panel shows the trajectory of the black hole horizons in the calculation of .,title="fig:",width=336 ] while initial data describing binary black holes have been reasonably well understood for a number of years , progress on dynamical simulations of black holes has been much slower . 
until recently , these simulations had been plagued by numerical instabilities that made the codes crash long before the black holes had done anything interesting .the past year , however , has seen a dramatic break - through in this field , and by now several groups can perform reliable simulations of the binary black hole coalescence , merger and ring - down .the first announcement of such a calculation came from .his approach differs from the more traditional numerical relativity calculations in that he integrates the four - dimensional einstein equations ( [ field_gr ] ) directly , instead of casting them into a 3 + 1 form .the left panel of fig .[ fig6 ] shows a gravitational wave form from one of his recent simulations , starting with the initial data of . at early timessome noise is seen , which is either related to the imperfect initial data themselves or the imperfect matching of the numerical methods used in the initial data and the evolution .the noise disappears quickly , and leaves behind a very clean waveform , tracking the inspiral through several orbits , through the plunge and merger , until the newly formed remnant settles down into a single kerr black hole . shortly after pretorius announcement a number of other groups announced similarly successful calculations .all of these later simulations do adopt a 3 + 1 decomposition of einstein s equations , and cast these equations in the bssn form .the right panel of fig .[ fig6 ] shows the trajectory of the black holes in the calculation of . and adopt a particularly simple method that treats the black hole singularities as `` punctures '' and avoids having to excise the black hole interior from the numerical grid . in the meantimeseveral follow - ups on these calculations have already been published , including simulations of binaries with mass - ratios different from unity and spinning black holes . especially comparing with the situation just a year ago , it is truly remarkable and reassuring that different groups using independent techniques and implementations can now carry out reliable simulations of binary black hole coalescence and merger. it may still be a while until results from these simulations can be used to assemble a catalog of realistic wave templates to be used in the analysis of data from gravitational wave detectors , but the past year has certainly seen a huge step forward in that direction .black holes may well be the most fascinating consequence of einstein s theory of general relativity .similarly fascinating is the development of our understanding of black holes .speculations on so - called `` dark stars '' actually predate both special and general relativity by over a century , but it nevertheless took almost half a century after the publication of general relativity and schwarzschild s derivation of his famous solution until black holes became generally accepted as an astrophysical reality .by now we have very convincing observational evidence for both stellar - mass and supermassive black holes .so far these observations only constrain the black hole mass , and clearly it would be desirable to probe the local black hole geometry in addition to the global parameters . there is hope that we will be able to do that in the near future with the new generation of gravitational wave detectors .the ligo observatories have recently achieved their design sensitivities , enabling us to detect inspiraling binary black holes to a distance of approximately the virgo cluster . 
the next generation advanced ligo will improve the sensitivity by over a factor of ten , which increases the event rate by over a factor of thousand . with the recent advances in numerical relativity we are also much closer to producing theoretical gravitational wave templates , which will aid both in the identification of gravitational wave signals and in their interpretation .there is hope , then , that we can discuss detailed gravitational wave observations of binary black holes by the time we celebrate the centennial of einstein s general relativity .r. schdel , t. ott , r. genzel , r. hofmann , m. lehnert , a. eckart , n. mouawad , t. alexander , m. j. reid , r. lenzen , m. hartung , f. lacombe , d. rouan , e. gendron , g. rousset , a .-lagrange , w. brandner , n. ageorges , c. lidman , a. f. m. moorwood , j. spyromilio , n. hubin , and k. m. menten , _ nature _ * 419 * , 694 ( 2002 ) .a. m. ghez , g. duchne , k. matthews , s. d. hornstein , a. tanner , j. larkin , m. morris , e. e. becklin , s. salim , t. kremenek , d. thompson , b. t. soifer , g. neugebauer , and i. mclean , _ astrophys .j. lett . _* 586 * , l127 ( 2003 ) .d. richstone , e. a. ajhar , r. bender , g. bower , a. dressler , s. m. faber , a. v. filippenko , k. gebhardt , r. green , l. c. ho , j. kormendy , t. r. lauer , j. magorrian , and s. tremaine , _ nature _ * 395 * , a14 ( 1998 ) .x. fan , m. a. strauss , d. p. schneider , r. h. becker , r. l. white , z. haiman , m. gregg , l. pentericci , e. k. grebel , v. k. narayanan , y .- s .loh , g. t. richards , j. e. gunn , r. h. lupton , g. r. knapp , .ivezi , w. n. brandt , m. collinge , l. hao , d. harbeck , f. prada , j. schaye , i. strateva , n. zakamska , s. anderson , j. brinkmann , n. a. bahcall , d. q. lamb , s. okamura , a. szalay , and d. g. york , _ astron .j. _ * 125 * , 1649 ( 2003 ) . j. m. miller , a. c. fabian , c. s. reynolds , m. a. nowak , j. homan , m. j. freyberg , m. ehle , t. belloni , r. wijnands , m. van der klis , p. a. charles , and w. h. g. lewin , _ astrophys . j. lett . _* 606 * , l131 ( 2004 ) . c. kim , v. kalogera , d. r. lorimer , m. ihm , and k. belczynski , `` the galactic double - neutron - star merger rate : most current estimates , '' in _ asp conf .328 : binary radio pulsars _ , edited by f. a. rasio , and i. h. stairs , 2005 , p. 83 .v. kalogera , c. kim , d. r. lorimer , m. burgay , n. damico , a. possenti , r. n. manchester , a. g. lyne , b. c. joshi , m. a. mclaughlin , m. kramer , j. m. sarkissian , and f. camilo , _ astrophys . j. lett . _* 601 * , l179 ( 2004 ) .
this paper provides a brief review of the history of our understanding and knowledge of black holes . starting with early speculations on `` dark stars '' i discuss the schwarzschild `` black hole '' solution to einstein's field equations and the development of its interpretation from `` physically meaningless '' to describing the perhaps most exotic and yet `` most perfect '' macroscopic object in the universe . i describe different astrophysical black hole populations and discuss some of their observational evidence . finally i close by speculating about future observations of black holes with the new generation of gravitational wave detectors . affiliations : department of physics and astronomy , bowdoin college , brunswick , me 04011 ; department of physics , university of illinois at urbana - champaign , urbana , il 61801
.3 cm for many years , graphs have interested physicists as well as mathematicians . for instance, equilibrium statistical physics widely uses model systems defined on lattices , the most popular being certainly the ising model . on another hand , in solid - state physics , tight - binding models ( see , for instance , )involve discretized versions of schrdinger operators on graphs .for all those models , the thermodynamic properties of the system heavily depend on geometrical characteristics of the lattice such as the connectivity and the dimensionality of the embedding space .however , in general , they do nt depend explicitly on the lengths of the edges .random walks on graphs , where a particle hops from one vertex to one of its nearest - neighbours , have also been studied by considering discrete laplacian operators on graphs ..1 cm such laplacian operators can also be useful if they are defined on each link of the graph .for example , in the context of organic molecules , they can describe free electrons on networks made of one - dimensional wires .many other applications can be found in the physical literature .let us simply cite the study of vibrational properties of fractal structures such as the sierpinski gasket or the investigation of quantum transport in mesoscopic physics .weakly disordered systems can also be studied in that context .it appears that weak localization corrections in the presence of an eventual magnetic field are related to a spectral determinant on the graph .this last quantity is actually of central importance and interesting by itself . in particular, it allows to recover a trace formula that was first derived by roth .moreover , the spectral determinant , when computed with generalized boundary conditions at the vertices , is useful to enumerate constrained random walks on a general graph , a problem that has been addressed many times in the mathematical literature ..1 cm brownian motion on graphs is also worthwhile to be investigated from , both , the physical and mathematical viewpoints .for instance , the probability distribution of the time spent on a link ( the so - called occupation time ) was first studied by p levy who considered the time spent on an infinite half - line by a one - dimensional brownian motion stopped at some time .this work allowed levy to discover in 1939 one of his famous arc - sine laws .since that time , this result has been generalized to a star - graph and also to a quite general graph .local time distributions have also been obtained in ..1 cm it has been pointed out since a long time that first - passage times and , more generally , occupation times are of special interest in the context of reaction - diffusion processes .computations of such quantities in the presence of a constant external field have already been performed for one - dimensional systems with absorbing points ( see , for example , ) .this was done with the help of a linear fokker - planck equation ..1 cm the purpose of the present work is to extend those results on a general graph with some absorbing vertices .we will consider a brownian particle diffusing with a spatially - dependent diffusion constant and subjected to a drift that is defined in every point of each link .the paper is organized as follows . 
in section 2, we present the notations that will be used throughout the paper .we discuss the boundary conditions to be used at each vertex in section 3 and , also , in the appendices .more precisely , we analyse in details specific graphs in the appendices a and b. the obtained results allow to deal with a general graph in appendix c. section 4 is devoted to the computation of the average time spent , before absorption , by a brownian particle on a part of the graph . in this section , we also calculate the laplace transform of the joint law of the occupation times on each link . in the following section , we present additional results , especially concerning conditional and splitting probabilities .various examples are discussed all along the different sections .finally , a brief summary is given in section 6 ..3 cm let us consider a general graph made of vertices linked by bonds of finite lengths . on each bond [ , of length , we define the coordinate that runs from ( vertex ) to ( vertex ) .( we have , of course , ) ..1 cm moreover , we suppose that , among all the vertices , of them are absorbing .( a particle gets trapped if it reaches such a vertex ) ..3 cm we will study the motion on of a brownian particle that starts at from some non - absorbing point .the particle with a spatially - dependent diffusion constant is subjected to a drift defined on the bonds of .more precisely , and are differentiable functions of on each link . in particular , on each link [ , the following limits , , , ... ,are well defined .such notations will be used extensively throughout the paper ..3 cm the continuity properties of and at each vertex will be discussed in the following section ..1 cm we also specify the motion of the particle when it reaches some vertex .let us call ( ) the nearest neighbours of .we assume that the particle will come out towards with some arbitrary probability ( see for a rigourous mathematical definition ) . of course , if is an absorbing vertex or if ] ( and also the time ) with steps of length ( resp . ) .it is easy to realize that : p(y , ( n+1)t | , 0 ) & = & _ i=1^m _ p__i p ( y , n t | x_i , 0 ) [ back1 ] + p ( y , ( n+1 ) t | x_i , 0 ) & = & p(y , n t | , 0 ) + p ( y , n t | x_i , 0 ) [ back2 ] .3 cm taking the limit , , , , we obtain , with ( [ back2 ] ) : [ back3 ] p_(_i)= p(y , t | , 0 ) + p_(_i ) thus [ back30 ] p_(_i)= p(y , t | , 0 ) i .3 cm this shows that is continuous in ..3 cm moreover , expanding ( [ back1 ] ) at order , we get : [ back4 ] p(y , t | , 0 ) = _ i=1^m _ p__i p_(_i ) + x ( _ i=1^m _ p__i p_(_i ) ) + o((x)^2 ) with ( [ back30 ] ) and , we show that : [ back5 ] _ i=1^m _p__i p_(_i ) = 0 .6 cm on the other hand , for , the equation ( [ fp ] ) on the link ] p(y_i , ( n+1)t | x , 0 ) & = & p__i p ( , n t | x , 0 ) + p(y_i ,n t | x , 0 ) [ forw1 ] + p ( , ( n+1 ) t | x , 0 ) & = & _ i=1^m _ p ( y_i , n t | x , 0 ) [ forw2 ] .3 cm with the limit , , , , ( [ forw1 ] ) leads to : [ forw3 ] p_(_i)= p__i p ( , t | x , 0 ) + p_(_i ) .3 cm thus [ st1 ] = = ... 
= = 2 p ( , t | x , 0 ) .3 cm we see that , in general , is not continuous in ..3 cm moreover , expanding ( [ forw2 ] ) at order , we get : [ forw4 ] p ( , t | x , 0 ) = _ i=1^m _ ( p_(_i ) + y p_(_i ) ) + o((y)^2 ) with ( [ st1 ] ) , we can write : [ forw5 ] _ i=1^m _p_(_i ) = 0 .1 cmso , the current conservation does nt involve the s ..3 cm now , for , the equation ( [ fpforw ] ) on the link ] , for the link ] .( notice that if is absorbing ) ..1 cm it is now easy to realize that we have the relationship : [ s1 ] s(t|x0 ) = _ g y q(yt|x0 ) .1 cm in the following , we will be especially interested in the laplace transform of : [ lt ] ( |x0 ) ( x ) = _0^t e^-t s(t|x0 ) .1 cm on the bond [ , satisfies the following equation ( ) : [ lteq ] ( l^+(x _ ) - _ ) = - 1 .1 cm setting and performing the transformation [ tr ] ( x _ ) = ( x _ ) ( is defined in ( [ 2 ] ) ) , we are left with the following equation for : [ eqchi ] - d + = 0 .1 cm let us call and two solutions of ( [ eqchi ] ) such that , .so , writes : = & + & a _ ( _ 0^x _ ) _ ( x _ ) + & + & a _ ( _ l_^x _ ) _ ( x _ ) [ sol2 ] the constants are determined by imposing the boundary conditions. continuity of at each vertex implies : [ c10 ] _x__i 0 ( x__i ) _= + a__i moreover , if is absorbing then ] if has no absorbing vertex ( in that case , ) ..1 cm setting and } t'_{ij } \equiv \tau ] , the backward equation : [ spl ] l^+ ( x _ ) _ ( x _ ) = 0 the probability , , for a particle starting from the vertex to be absorbed by , is defined as ..1 cm with ( [ pimu ] ) and ( [ pmulam ] ) , it is easy to realize that [ pimulam ] : _ , = _ .2 cm following the same lines as previously ( see section [ mrt ] ) , we find that ( non absorbing ) is again written as the ratio of two determinants : [ splittingres1 ] _ , = where except for the column : [ m2 ] ( m_2^ ( , ) ) _ i = ( is defined in ( [ m],[mm ] ) and in ( [ jab ] ) ) ..2 cm with simple manipulations on determinants , we check the normalization condition ..5 cm let us , for one moment , comment the case when there is no drift ( constant but variable , eventually discontinuous at some vertices ) . in that case and , also , .we conclude that the splitting probabilities do nt depend on the varying diffusion constant when there is no drift .this fact can be understood in the following way .let us consider a discretization of each link and a continuous time . modifying the diffusion constant amounts to change the waiting time at each site of the discretized graph .but , this would not change the trajectories if there is no drift . only the time spentis changed .finally , the splitting probabilities remain unaffected without drift , we could expect , with the same argument , that the average time spent on a part of would not depend on the diffusion constant on the rest , , of the graph .this is exactly what can be checked with the formulae ( [ res1]-[m1 ] ) of section [ mrt ] and , also , with the formulae ( [ examp32],[examp33 ] ) of the example 3 of section [ exam1 ] .] by a change of ..1 cm .3 cm let us consider a star - graph without drift .the root has neighbours , all absorbing .the links have lengths , . 
with ( [ splittingres1 ] ) ,we obtain [ starsplit ] _ i,0 = ( ) / ( _ m=1^m_0 ) .1 cm .3 cmwe now turn to the study of the conditional mean first passage time , which is defined as the mean exit time , given that exit is through the absorbing vertex ( rather than any other absorbing vertex ) .we set ..2 cm actually , it is simpler to first compute the quantity ..2 cm indeed , we have : _ ( x ) & = & _ 0^t t _ ( t| x 0 ) + l^+ _ ( x ) & = & _ 0^t t _ ( t| x 0 ) = - _ 0^t _ ( t| x 0 ) + l^+ _ ( x ) & = & - _ ( x ) .1 cm moreover , for any absorbing vertex , we get : because ..2 cm thus , comparing this equation with eq.([tau ] ) , we find for a particle starting from the vertex [ tcond ] _ , t_,= where except for the column : [ m3 ] ( m_3^ ( , ) ) _i = _ m p_im with ^()_(ij ) & = & _ 0^l_ij u_ij i ( u_ij ) ( _ 0^u_ij z_ij ) in this last equation , has to be computed by equation _ ( z_ij ) = _ , i + _ 0^z_ij u_ij i(u_ij ) with given by eq.([splittingres1 ] ) ..1 cm .3 cm with the same star - graph as in section [ exam3 ] ( no drift ) and a diffusion constant equal to on each link ] ( length ) and , on this link , a potential that interpolates linearly between and ; is constant on ] of length ( see figure [ fig6 ] b ) ) in such a way that nothing is changed for the rest ( for example , the potential on the link ] of the modified graph , ... ) . on ] . ].5 cm as we already did for the case ( a ) , we modify the graph and obtain figure [ fig7 ] b ) . between vertices and ,we choose : and .thus , in the modified graph , and are continuous everywhere on the graph ..5 cm on the links ] , the solution of equation ( [ a02 ] ) is still given by ( [ a03 ] ) and ( [ a05 ] ) ( with the conditions ( [ a08 ] ) and ( [ a09 ] ) ) .but , on the link ] and ] and ] and ] is identical to the original link $ ] ( same length , same potential and diffusion constant ) .moreover , in the added subgraph ( figure [ bca ] b ) , heavy lines of lengths ) the potential and the diffusion constant are assumed to be constant ( respectively equal to some value and to ) .so , in the vicinity of , the discontinuities of will occur at the s ( will be continuous in the same domain ) .of course , for the transition probabilities from , we choose . by taking the limit , we will recover the original graph .now , for the small subgraph where and are constant , we can take advantage of the result , equation ( [ back5 ] ) , to write : [ back9 ] _ i=1^m _p__i p_(_i ) = 0 moreover , for the vertex , where , we can use ( [ back8 ] ) and also ( [ a14 ] ) ( directly established in appendix a ) to get : [ back10 ] e^-u / d ( ) p_(_i ) + e^-u_(_i_i ) /d ( ) p_(_i_i ) = 0 now , taking the limit , we have from ( [ b101 ] ) ( appendix b ) : [ back12 ] _ l0 - 1 in this limit , the vertex moves to and we recover the original graph . finally , with ( [ back9],[back11],[back12 ] ) ,we obtain , for the case ( a ) , the boundary condition ) is unchanged when we add a constant to .now , if we want to consider the case when and are both discontinuous at some vertex , we must add , on each link , vertices and where either or are discontinuous .the resulting boundary condition will depend on the repartition of those additional vertices .moreover , inconsistencies will appear when we add a constant to .this is why we say that , in our opinion , this problem is ill - defined . ] : [ back13c ] _p__i e^-u_(_i ) /d( ) p_(_i ) = 0 moreover , for the modified graph , is continuous in and in . 
from appendix b , we know that ( [ b100 ] ) : [ back14 ] _l0 1 . that is enough to conclude that , for the original graph , is continuous in .
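the star-graph example above is easy to check numerically. the following sketch is not part of the paper; it assumes a driftless particle, a uniform diffusion constant and equal exit probabilities at the root, for which the splitting probability towards tip i is (1/l_i) / sum_m (1/l_m). it simulates a discretized random walk started at the root and compares the empirical absorption frequencies with that formula.

```python
import random

def star_absorption_frequencies(lengths, dx=0.1, n_walks=5000, seed=0):
    """Unbiased random walk on a discretized star graph: the root is one vertex,
    branch i is a chain of round(l_i / dx) sites, and every tip is absorbing.
    Returns the fraction of walks absorbed at each tip."""
    rng = random.Random(seed)
    n_sites = [max(1, round(l / dx)) for l in lengths]
    counts = [0] * len(lengths)
    for _ in range(n_walks):
        branch, pos = None, 0                    # start at the root
        while True:
            if branch is None:                   # at the root: enter a branch uniformly
                branch = rng.randrange(len(lengths))
                pos = 1
            else:                                # inside a branch: step one site left or right
                pos += 1 if rng.random() < 0.5 else -1
                if pos == 0:
                    branch = None                # back at the root
            if branch is not None and pos == n_sites[branch]:
                counts[branch] += 1              # absorbed at this tip
                break
    return [c / n_walks for c in counts]

lengths = [1.0, 2.0, 4.0]
theory = [(1.0 / l) / sum(1.0 / m for m in lengths) for l in lengths]
print("simulation :", star_absorption_frequencies(lengths))
print("prediction :", [round(p, 3) for p in theory])
```

with the lengths chosen here the predicted frequencies are roughly 0.571, 0.286 and 0.143, and the simulated values should agree within the monte-carlo error.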
we consider a particle diffusing along the links of a general graph possessing some absorbing vertices . the particle , with a spatially - dependent diffusion constant , is subjected to a drift that is defined in every point of each link . we establish the boundary conditions to be used at the vertices and we derive general expressions for the average time spent on a part of the graph before absorption and , also , for the laplace transform of the joint law of the occupation times . exit time distributions and splitting probabilities are also studied and several examples are discussed . laboratoire de physique théorique de la matière condensée , université pierre et marie curie , 4 , place jussieu , 75005 paris , france . laboratoire de physique théorique et modèles statistiques , université paris - sud , bât . 100 , f-91405 orsay cedex , france .
the problem of learning is one of the most interesting aspects of feed - forward neural networks .recent activities in the theory of learning have gradually shifted toward the issue of on - line learning . in the on - line learning scenario ,the student is trained only by the most recent example which is never referred to again .in contrast , in the off - line ( or batch ) learning scheme , the student is given a set of examples repeatedly and memorizes these examples so as to minimize the global cost function .therefore , the on - line learning has several advantages over the off - line method .for example , it is not necessary for the student to memorize the whole set of examples , which saves a lot of memory space .in addition , theoretical analysis of on - line learning is usually much less complicated than that of off - line learning which often makes use of the replica method . in many of the studies of learning, authors assume that the teacher and student networks have the same structures .the problem is called learnable in these cases . however , in the real world we find innumerable unlearnable problems where the student is not able to perfectly reproduce the output of teacher in principle .it is therefore both important and interesting to devote our efforts to the study of learning unlearnable rules .if the teacher and student have the same structure , a natural strategy of learning is to modify the weight vector of student so that this approaches teacher s weight as quickly as possible .however , if the teacher and student have different structures , the student trained to satisfy sometimes can not generalize the unlearnable rule better than the student with .several years ago , watkin and rau investigated the off - line learning of unlearnable rule where the teacher is a perceptron with a non - monotonic transfer function while the student is a simple perceptron .they discussed the case where the number of examples is of order unity and therefore did not derive the asymptotic form of the generalization error in the limit of large number of training examples .furthermore , as they used the replica method under the replica symmetric ansatz , the result may be unstable against replica symmetry breaking .for such a type of non - monotonic transfer function , a lot of interesting phenomena have been reported .for example , the critical loading rate of the model of hopfield type or the optimal storage capacity of perceptron is known to increase dramatically by non - monotonicity .it is also worth noting that perceptrons with the non - monotonic transfer function can be regarded as a toy model of a multilayer perceptron , a parity machine . in this context , inoue , nishimori andkabashima recently investigated the problem of on - line learning of unlearnable rules where the teacher is a non - monotonic perceptron : the output of the teacher is $ ] , where is the input potential of the teacher , with being a training example , and the student is a simple perceptron . for this system , difficulties of learning for the student can be controlled by the width of the reversed wedge . if or , the student can learn the rule perfectly and the generalization error decays to zero as for the conventional perceptron learning algorithm and for the hebbian learning algorithm , where is the number of presented examples , , divided by the number of input nodes , . 
for finite , the student can not generalize perfectly and the generalization error converges exponentially to a non - vanishing -dependent value . in this paperwe investigate the generalization ability of student trained by the on - line adatron learning algorithm with examples generated by the above - mentioned non - monotonic rule .the adatron learning is a powerful method for learnable rules both in on - line and off - line modes in the sense that this algorithm gives a fast decay , proportional to , of the generalization error , in contrast to the and decays of the perceptron and hebbian algorithms .we investigate the performance of the adatron learning algorithm in the unlearnable situation and discuss the asymptotic behavior of the generalization error .this paper is organized as follows . in the next section ,we explain the generic properties of the generalization error for our system and formulate the on - line adatron learning .some of the results of our previous paper are collected here concerning the perceptron and hebbian learning algorithms which are to be compared with the adatron learning .section iii deals with the conventional adatron learning both for learnable and unlearnable rules . in sec .iv we investigate the effect of optimization of the learning rate . in sec .v the issue of optimization is treated from a different point of view where we do not use the parameter , which is unknown to the student , in the learning rate . in last sectionwe summarize our results and discuss several future problems .let us first fix the notation .the input signal comes from input nodes and is represented by an -dimensional vector .the components of are randomly drawn from a uniform distribution and then is normalized to unity .synaptic connections from input nodes to the student perceptron are also expressed by an -dimensional vector which is not normalized .the teacher receives the same input signal through the normalized synaptic weight vector .the generalization error is , where is the student output with the internal potential and stands for the average over the distribution function . \label{dist1}\ ] ] here stands for the overlap between the teacher and student weight vectors , .this distribution has been derived from randomness of and is valid in the limit . the generalization error is easily calculated as a function of as follows where with .it is important that this expression is independent of specific learning algorithm .minimization of with respect to gives the theoretical lower bound , or the best possible value , of the generalization error for given . in fig .1 we show for several values of .this figure indicates that the generalization error goes to zero if the student is trained so that the overlap becomes 1 for and for .if the parameter is larger than some critical value , decreases monotonically from to as increases from to .when is smaller than , a local minimum appears at , but the global minimum is still at as long as is larger than .if is less than , the global minimum is found at , not at .this situation is depicted in figs . 
2 and 3 where we show the optimal overlap giving the smallest value of and the corresponding best possible value of the generalization error as functions of .from these two figures , we see that the optimal overlap which gives the theoretical lower bound shows a first - order phase transition at .therefore , our efforts should be directed to finding the best strategy which gives the best possible value of the generalization error for a wide range of the parameter .it may be useful to review some of the results of , inoue , nishimori and kabashima who studied the present problem under the perceptron and hebbian algorithms .for the conventional perceptron learning , the generalization error decays to zero as if the rule is learnable ( ) , whereas it converges to a non - vanishing value , where , exponentially for the unlearnable case .this value of is larger than the best possible value as seen in fig .3 . introduction of optimization processes of the learning rate improves the performance significantly in the sense that the generalization error then converges to the best possible value when .for the conventional hebbian learning , the generalization error decays to the theoretical lower bound as not only in the learnable limit but for a finite range of , .however , for , the generalization error does not converge to the optimal value .the on - line training dynamics of the adatron algorithm is where stands for the number of presented patterns and is the leaning rate . it is straightforward to obtain the recursion equations for the overlap and the length of the student weight vector . in the limit ,these two dynamical quantities become self - averaging with respect to the random training data . for continuous time in the limit , with kept finite , the evolutions of and are given by the following differential equations : where with and \hspace{1.0in}\nonumber \\ \mbox{}\hspace{.4in}+\sqrt{\frac{2}{\pi}}\ , ra ( \sqrt{1-r^{2}}){\delta } \left [ 1 - 2h\left ( \frac{ra}{\sqrt{1-r^{2}}}\right ) \right ] + re_{\rm ad}. \label{gg}\end{aligned}\ ] ] equations ( [ dlda ] ) and ( [ drda ] ) determine the learning process . in the rest of the present sectionwe restrict ourselves to the case of corresponding to the conventional adatron learning .we first consider the case of and , the learnable rule .we investigate the asymptotic behavior of the generalization error when approaches 1 , , and , a constant . from eqs .( [ ead ] ) and ( [ gg ] ) , we find and with . then eq .( [ drda ] ) is solved as with using this equation and eq .( [ ge ] ) , we obtain the asymptotic form of the generalization error as the above expression of the generalization error depends on , the asymptotic value of , through .apparently is a function of the initial value of as shown in fig .a special case is in which case does not change as learning proceeds as is apparent from eq .( [ dlda ] ) as well as from fig .such a constant- problem was studied by biehl and riegler who concluded for the adatron algorithm . our formula ( [ eg2 ] )reproduces this result when . 
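the best possible generalization error discussed above can be evaluated directly by monte carlo. the sketch below is not taken from the paper; it assumes the reversed-wedge teacher output sgn( v (a - v) (a + v) ) and a simple-perceptron student output sgn(u), with the two fields jointly gaussian with unit variance and correlation r, and it scans r to locate the optimal overlap for a few values of a.

```python
import numpy as np

def gen_error(R, a, n=100_000, seed=1):
    """Monte Carlo estimate of the generalization error at overlap R, assuming the
    reversed-wedge teacher sgn(v (a - v) (a + v)) and student sgn(u), with (u, v)
    jointly Gaussian, unit variance and correlation R."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    v = R * u + np.sqrt(1.0 - R * R) * rng.standard_normal(n)
    return float(np.mean(np.sign(u) != np.sign(v * (a - v) * (a + v))))

for a in (0.5, 1.0, 2.0):
    Rs = np.linspace(-0.999, 0.999, 201)
    errs = [gen_error(R, a) for R in Rs]
    i = int(np.argmin(errs))
    print(f"a = {a}: best overlap R ~ {Rs[i]:+.2f}, best possible error ~ {errs[i]:.3f}")
```

for large a the minimum should sit near r = 1 , while for small a it moves to r = -1 , in line with the first-order transition of the optimal overlap described above.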
if one takes as an adjustable parameter , it is possible to minimize by maximizing in the denominator of eq .( [ eg2 ] ) .the smallest value of is achieved when , yielding which is smaller than eq .( [ ge3 ] ) for a fixed .we therefore have found that the asymptotic behavior of the generalization error depends upon whether or not the student weight vector is normalized and that a better result is obtained for the un - normalized case .we plot the generalization error for the present learnable case with the initial value of in fig .we see that the hebbian learning has the highest generalization ability and the adatron learning shows the slowest decay among the three algorithms in the initial stage of learning .however , as the number of presented patterns increases , the adatron algorithm eventually achieves the smallest value of the generalization error . in this sensethe adatron learning algorithm is the most efficient learning strategy among the three in the case of the learnable rule . for unlearnable case , there can exist only one fixed point reason is , for finite , appearing in eq .( [ dlda ] ) does not vanish in the limit of large and has a finite value for . for this finite ,the above differential equation has only one fixed point .in contrast , for the learnable case , behaves as in the limit of and thus becomes zero irrespective of asymptotically .we plot trajectories in the - plane for in fig . 6 and the corresponding generalization error is plotted in fig . 7 as an example . from fig .6 , we see that the destination of is for all initial conditions .figure 7 tells us that for the unlearnable case , the adatron learning has the lowest generalization ability among the three .we should notice that the generalization error decays to its asymptotic value , the residual error , as for the hebbian learning and decays exponentially for perceptron learning .the residual error of the hebbian learning is also the best possible value of the generalization error for as seen in fig .3 . in fig . 8 we also plot the generalization error of the adatron algorithm for several values of . for the adatron learning of the unlearnable case , the generalization error converges to a non - optimal value exponentially . for all unlearnable cases ,the - flow is attracted into the fixed point , where is obtained from the solution of the above equation is not the optimal value because the optimal value of the present learning system is for and for . from figs .3 and 7 , we see that the residual error of the adatron learning is larger than that of the conventional perceptron learning .therefore , we conclude that if the student learns from the unlearnable rules , the on - line adatron algorithm becomes the worst strategy among three learning algorithms as we discussed above although for the learnable case , the on - line adatron learning is a sophisticated algorithm and the generalization error decays to zero as quickly as the off - line learning .in the previous section , we saw that the on - line adatron learning fails to get the best possible value of the generalization error for the unlearnable case and its residual error is larger than that of the conventional perceptron learning or hebbian learning .we show that it is possible to overcome this difficulty .we now consider an optimization the learning rate .this optimization procedure is different from the technique of kinouchi and caticha . 
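to make the dynamical behaviour concrete, here is a small simulation sketch of on-line adatron learning from the reversed-wedge teacher. it is not the paper's code; the i.i.d. standard-normal inputs, the small random initial student vector, and the error-driven update with a step proportional to the student field are assumptions chosen to mirror the standard on-line adatron rule.

```python
import numpy as np

def online_adatron(a, g=1.0, N=1000, alpha_max=50, seed=0):
    """On-line Adatron learning from a reversed-wedge teacher (sketch).
    Returns the overlap R = J.B / |J| after every unit of alpha = m / N."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal(N)
    B /= np.linalg.norm(B)                         # unit-norm teacher vector
    J = 0.1 * rng.standard_normal(N)               # small random initial student vector
    overlaps = []
    for m in range(1, alpha_max * N + 1):
        xi = rng.standard_normal(N)
        v = B @ xi                                 # teacher field
        sigma_t = 1.0 if v * (a - v) * (a + v) > 0 else -1.0
        h = J @ xi                                 # student field
        if h * sigma_t <= 0:                       # error: Adatron-type correction
            J += (g / N) * abs(h) * sigma_t * xi
        if m % N == 0:
            overlaps.append(float(B @ J / np.linalg.norm(J)))
    return overlaps

for a in (5.0, 1.0):                               # nearly monotonic vs. strongly non-monotonic
    R = online_adatron(a)
    print(f"a = {a}: R(alpha=10) = {R[9]:+.3f}, R(alpha=50) = {R[-1]:+.3f}")
```

tracking the overlap for a nearly monotonic teacher and for a strongly non-monotonic one should illustrate the two regimes discussed above: growth towards r = 1 in the learnable limit and saturation at a non-optimal overlap for finite a.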
as the optimal value of which gives the best possible value of the generalization error is for , we determine so that is accelerated to become . in order to determine using the above strategy, we maximize the right hand side of eq .( [ drda ] ) with respect to and obtain .using this optimal learning rate , eqs .( [ dlda ] ) and ( [ drda ] ) are rewritten as follows for the learnable case , we obtain the asymptotic form of the generalization error from eqs .( [ dlda2 ] ) and ( [ drda2 ] ) by the same relation , as we used for the case of as this is the same asymptotic behavior as that obtained by optimizing the initial value of as we saw in the previous section .next we investigate the unlearnable case .the asymptotic forms of and in the limit of are obtained as and then we get the asymptotic solution of eq .( [ drda2 ] ) with respect to , , as as the asymptotic behavior of is obtained as , we find the generalization error in the limit of as follows where is the best possible value of the generalization error for .therefore , our strategy to optimize the learning rate succeeds in training the student to obtain the optimal overlap for . for the perceptron learning, this type of optimization failed to reach the theoretical lower bound of the generalization error for exactly at in which case the generalization error is , equivalent to a random guess because for optimal learning rate vanishes .in contrast , for the adatron learning , the optimal learning rate has a non - zero value even at . in this sense ,the on - line adatron learning with optimal learning rate is superior to the perceptron learning .in the previous section , we were able to get the theoretical lower bound of the generalization error for by introducing the optimal learning rate .however , as the optimal learning rate contains a parameter unknown to the student , the above result can be regarded only as a lower bound of the generalization error .the reason is that the student can get information only about teacher s output and no knowledge of or . in realistic situations ,the student does not know or and therefore has a larger value of the generalization error . in this section, we construct a learning algorithm without the unknown parameter using the asymptotic form of the optimal learning rate. for the learnable case , the optimal learning rate is estimated in the limit of as this asymptotic form of the optimal learning rate depends on only through the length of student s weight vector .we therefore adopt proportional to , , also in the case of the parameter - free optimization and adjust the parameter so that the student obtains the best generalization ability . substituting this expression into the differential equation ( [ drda ] ) for and using with , we get where we have set this leads to . then , the generalization error is obtained from as in order to minimize , we maximize with respect to .the optimal choice of in this sense is and we find in such a case this is the same asymptotic form as the previous -dependent result ( [ ge5 ] ) .next we consider the unlearnable case .the asymptotic form of the learning rate we derived in the previous section for the unlearnable case is where we used eq .( [ asep ] ) to obtain the right - most equality and we set the -dependent prefactor of as . 
using this learning rate ( [ asg2 ] ) and the asymptotic forms of and as and in the limit of , we obtain the differential equation with respect to from eq .( [ drda ] ) as follows \frac{{\eta}^{2}}{{\alpha}^{2 } } -{\eta } \frac{4a}{\sqrt{2\pi } } { \delta}\frac{\varepsilon}{\alpha}. \label{difeq}\ ] ] this differential equation can be solved analytically as where is a constant determined by the initial condition .therefore , if we choose to satisfy , the generalization error converges to the optimal value as in order to obtain the best generalization ability , we minimize the prefactor of in the second term of eq .( [ type1 ] ) and obtain for this , the condition is satisfied . in general ,if we take independent of , the condition is not always satisfied .the quantity takes the maximum value at .therefore , whatever value of we choose , we can not obtain the convergence if the product of this maximum value and is not larger than unity .this means that should satisfy for the first term of eq .( [ sol ] ) dominate asymptotically , yielding eq .( [ type1 ] ) , for a non - vanishing range of .in contrast , if we choose to satisfy , the generalization error is dominated by the second term of eq .( [ sol ] ) and behaves as in this case , the generalization error converges less quickly than ( [ type1 ] ) .for example , if we choose , we find that the condition can not be satisfied by any and the generalization error converges as in eq .( [ type2 ] ) .if we set ( ) as another example , the asymptotic form of the generalization error is either eq . ( [ type1 ] ) or eq . ( [ type2 ] ) depending on the value of .we have investigated the generalization abilities of a simple perceptron trained by the teacher who is also a simple perceptron but has a non - monotonic transfer function using the on - line adatron algorithm . for the learnable case ( ) ,if we fix the length of the student weight vector as , the generalization error converges to zero as as biehl and riegler reported .however , if we allow the time development of the length of student weight vector , the asymptotic behavior of the generalization error shows dependence on the initial value of . when the student starts the training process from the optimal length of weight vector , we can obtain the generalization error which is a little faster than .as the student is able to know the length of its own weight vector in principle , we can get the better generalization ability by a heuristic search of the optimal initial value of . on the other hand ,if the width of the reversed wedge has a finite value , the generalization error converges exponentially to a non - optimal -dependent value .in addition , these residual errors are larger than those of the conventional perceptron learning for the whole range of .therefore we conclude that , although the adatron learning is powerful for the learnable case including the situation in which the input vector is structured , it is not necessarily suitable for learning of the non - monotonic input - output relations .next we introduced the learning rate and optimized it .for the learnable case , the generalization error converges to zero as which is as fast as the result obtained by selecting the optimal initial condition for the case of non - optimization , . 
for this learnable case, the asymptotic form of the optimal leaning rate is .therefore , for the on - line adatron learning , it seems that the length of the student weight vector plays an important role to obtain a better generalization ability .if the task is unlearnable , the generalization error under optimized learning rate converges to the theoretical lower bound as for . using this strategy, we can get the optimal residual error for even exactly at for which the optimized perceptron learning failed to obtain the optimal residual error .we also investigated the generalization ability using a parameter - free learning rate .when the task is learnable , we assumed and optimized the prefactor . as a result , we obtained which is the same asymptotic form as the parameter - dependent case .therefore , we can obtain this generalization ability by a heuristic choice of ; we may choose the best by trial and error .on the other hand , for the unlearnable case , we used the asymptotic form of the -dependent learning rate in the limit of , , and optimized the coefficient .the generalization error then converges to as for . if , the generalization error decays to as , where the exponent is smaller than because .similar slowing down of the convergence rate of the generalization error by tuning a control parameter was also reported by kabashima and shinomoto in the problem of learning of two - dimensional blurred dichotomy . in conclusion, we could overcome the difficulty of the adatron learning of unlearnable problems by optimizing the learning rate and the generalization error was shown to converge to the best possible value as long as the width of reversed wedge satisfies .for the parameter region , this approach does not work well because the optimal value of is instead of ; our optimization is designed to accelerate the increase to toward . in this paper, we could construct a learning strategy suitable to achieve the -dependent optimal value for .however , for , it is a very difficult but challenging future problem to get the optimal value by improving the conventional adatron learning .the authors would like to thank dr .yoshiyuki kabashima for helpful suggestions and comments .one of the authors ( j. i. ) thanks dr .siegfried b for several useful comments .j. a. hertz , a. krogh and r. g. palmer , _ introduction to the theory of neural computation _( addison - wesley , redwood city , 1991 ) .t. h. watkin , a. rau and m. biehl , rev .65*,499 ( 1993 ) .m. opper and w. kinzel , in _ physics of neural networks iii _ , eds .e. domany , j. l. van hemmen and k. schulten ( springer , berlin , 1995 ) .t. h. watkin and a. rau , phys .a * 45 * , 4111 ( 1992 ). m. morita , s. yoshizawa and k. nakano , trans .ieice * j73-d - ii * , 242 ( 1990 ) ( in japanese ) .h. nishimori and i. opris , neural networks * 6 * , 1061 ( 1993 ) .j. inoue , j. phys .a : math . gen .* 29 * , 4815 ( 1996 ) .g. boffetta , r. monasson and r. zecchina , j. phys .a : math . gen . * 26 * , l507 ( 1993 ) .r. monasson and d. okane , europhys . lett . * 27 * , 85 ( 1994 ) .j. inoue , h. nishimori and y. kabashima , ( unpublished ) .m. biehl and p. riegler , europhys .* 28 * , 525 ( 1994 ) .j. k. anlauf and m. biehl , europhys . lett . * 10 * , 687 ( 1989 ) .p. riegler , m. biehl , s. a. solla and c. marangi , in _ proc .of italian workshop on neural nets vii _ ( 1995 ) .m. opper , w. kinzel , j. kleinz and r. nehl , j. phys .a : math . gen .* 23 * , l581(1990 ) .o. kinouchi and n. caticha , j. phys .a : math . 
gen .* 26 * , 6161 ( 1993 ) .y. kabashima and s. shinomoto , neural comp .* 7 * , 158 ( 1995 ) .
we study the on - line adatron learning of linearly non - separable rules by a simple perceptron . training examples are provided by a perceptron with a non - monotonic transfer function which reduces to the usual monotonic relation in a certain limit . we find that , although the on - line adatron learning is a powerful algorithm for the learnable rule , it does not give the best possible generalization error for unlearnable problems . optimization of the learning rate is shown to greatly improve the performance of the adatron algorithm , leading to the best possible generalization error for a wide range of the parameter which controls the shape of the transfer function . pacs numbers : 87.10.+e
from motion ( sfm ) is the process to find three - dimensional ( 3d ) structure and camera motion parameters from a set of 2d image sequences .the classical method for 3d reconstruction is stereo vision using two or three images . stereo vision , due to limited information , is sensitive to image noise .given a sequence of images , structure from motion is a powerful method to build a consistent 3d map with the knowledge of multiple - view geometry . over the past two to three decades , tremendous progress has been made in sfm , the results of this research have a wide range of potential applications , including robot navigation and obstacle avoidance , autonomous driving , video surveillance , and environment modeling . structure and motion factorization algorithm , pioneered by tomasi and kanade , is an effective approach for sfm .based on a bilinear formulation using singular value decomposition ( svd ) with low - rank approximation , the method decomposes image measurement directly into the 3d structure and the camera motion components , given a set of tracked features across the sequence . by dealing uniformly with the data from all images ,the algorithm achieves a more robust and more accurate solution than stereo vision - based methods .a linear affine camera has been adopted by most research in sfm due to its simplicity .it was extended to a more accurate nonlinear perspective camera model in by incrementally performing the affine factorization of a scaled tracking matrix .triggs proposed a full projective factorization algorithm with projective depths recovered from epipolar geometry .the method was further studied and different iterative schemes were proposed to recover the projective depths by minimizing image reprojection errors .oliensis and hartley provided a complete theoretical convergence analysis for the iterative extensions . because full perspective model is computational intensive , a quasi - perspective model was proposed in as a trade - off of efficiency and accuracy . by assuming deformation constraints that the nonrigid 3d structure lies in a span of rigid bases , the factorization algorithm was extended to handle nonrigid deformation , where the shape bases , combination coefficients , and camera motions are solved simultaneously in a svd framwork .this idea has been extensively investigated and developed in .a manifold - learning framework was proposed in to relax the deformation assumption . proposed a sequential approach for nonrigid sfm .yan and pollefeys developed a similar factorization framework to recover the structure and motion of articulated objects . in a dual trajectory space , proposed a duality solution to this problem based on basis trajectories .most factorization algorithms are based on the svd decomposition of the tracking matrix composed by all features tracked across the sequence . in the presence of missing data , however , svd factorization could not be performed directly .different alternative factorization approaches have been proposed to handle incomplete data , such as power factorization , alternative factorization , and factor analysis . 
in practical application ,the tracked features are usually corrupted by outliers or larger errors , in this case , most algorithms will degrade or even fail .the most popular strategies to handle outliers are random sample consensus ( ransac ) and its variations , least median of squares ( lmeds ) , and other similar framework based on hypothesis - and - test .most of these methods are computational intensive and only work well for two or three views . in recent years , some robust structure and motion factorization algorithmshave been proposed .a scalar - weighted svd algorithm was proposed by aguitar and moura through minimizing weighted square errors .gruber and weiss enhanced the robustness to missing data and uncertainties using a factor analysis in an expectation maximization ( em ) framework .zelnik - manor _ defined a new type of motion consistency based on temporal consistency , and applied it to multi - body factorization with directional uncertainty .a gaussian mixture model with the em algorithm was introduced by zaharescu and horaud . proposed to correct the outliers with pseudo observations through iterations .ke and kanade dealt the outliers by minimizing a l1 norm of the reprojection errors .eriksson and hengel utilized the l1 norm to the wiberg algorithm to handle missing and outlying features . alternatively , okatani _ solved the problem by introducing a damping factor into the wiberg algorithm .more recently , yu _ presented a quadratic program formulation for robust multi - model fitting of geometric structures . proposed an adaptive kernel - scale weighted hypotheses to segment multiple - structure data with a large number of outliers .an alternating bilinear approach was proposed by paladini __ to solve nonrigid structure from motion by introducing a globally optimal projection step of the motion matrices onto the manifold of metric constraints ._ designed a spatial - and - temporal - weighted factorization algorithm to deal with significant noise in the measurement . developed an optimal approach based on branch - and - bound . in this paper , by exploring the fact that the reprojection residuals are in general proportional to measurement errors of the tracking data , we propose to handle the outlying data through a new viewpoint via the distribution of image reprojection residuals .the proposed approach is based on a new augmented factorization formulation , which circumvents the problem of image registration in the presence of missing data and outliers .an alternative weighted factorization algorithm is developed to handle the missing data and measurement uncertainties .finally , a robust factorization scheme is proposed to handle outlying and missing data in both rigid and nonrigid structure and motion recovery .in addition , the proposed scheme can be directly applied to handle nonrigid structure and motion factorization .the remainder of this paper is organized as follows .some background on affine factorization is offered in section [ sec : background ] .the augmented factorization algorithm is elaborated in section [ sec : rank4 ] .section [ sec : pf ] presents the alternative factorization algorithm for incomplete data . 
an outlier detection scheme and the robust factorization algorithmare discussed in section [ sec : outliers ] .section [ sec : nonrigid ] discusses the extension to nonrigid factorization .experimental evaluations and comparisons on synthetic and real images are described in sections [ sec : experiment1 ] and [ sec : experiment2 ] , respectively .finally , a short conclusion is drawn in section [ sec : conclusions ] .under perspective projection , a 3d point ^{t} ] in frame according to the imaging equation \mathbf{\tilde{x}}_j\ ] ] where is a non - zero scale factor ; and are the homogeneous form of and , respectively ; is a projection matrix of the -th frame ; is the camera calibration matrix ; and are the corresponding rotation matrix and translation vector of the camera with respect to the world system .when the object is far away from the camera with relatively small depth variation , we may assume a simplified affine camera model as below to approximate the perspective projection . where is a affine projection matrix of the -th camera ; is a two - dimensional translation term of the frame . under the affine assumption ,the mapping from the space to the image becomes linear as the unknown depth scalar in ( [ eq : perspective_projection ] ) is eliminated in ( [ eq : affine_projection1 ] ) .consequently , the projection of all image points in the -th frame can be denoted as = \mathbf{a}_{i}[\mathbf{{x}}_1 , \mathbf{{x}}_2 , \cdots , \mathbf{{x}}_n ] + \mathbf{c}_i\ ] ] where ] .then , the projection for the entire sequence is formulated as follows .}_{\mathbf{w}_{2m\times n } } = \underbrace{\left[{\begin{smallmatrix } \mathbf{a}_{1 } & | & \mathbf{c}_{1 } \\\vdots & | & \vdots\\ \mathbf{a}_{m } & | & \mathbf{c}_{m } \\\end{smallmatrix } } \right]}_{\mathbf{m}_{2m\times 4 } } \underbrace{\left[{\begin{smallmatrix } \mathbf { { \tilde{x}}}_1 , & \cdots , & \mathbf { { \tilde{x}}}_n \end{smallmatrix } } \right]}_{\mathbf{{s}}_{4\times n}}\ ] ] which can be written concisely as compared to the rank-3 factorization , the motion matrix in ( [ eq : affine_rigid_fact4 ] ) is augmented by an extra column , while the shape matrix is augmented by an extra row . as a result ,the rank of the tracking matrix becomes four in this case .thus , given the tracking matrix , the factorization can be simply obtained via svd decomposition by imposing the rank-4 constraint .we call the equation ( [ eq : affine_rigid_fact4 ] ) augmented factorization .it is obvious that the expression ( [ eq : affine_rigid_fact4 ] ) is derived directly from the affine projection model ( [ eq : affine_projection1 ] ) , which does not require image registration with respect to the centroid .therefore , it is applicable to imperfect data with significant noise , missing entries , and outlying points .both factorization algorithms ( [ eq : sim_affine ] ) and ( [ eq : affine_rigid_fact4_concise ] ) can be equivalently denoted as the following minimization scheme . by enforcing different rank constraints , the residual errors corresponding to the algorithms ( [ eq : sim_affine ] ) and ( [ eq : affine_rigid_fact4_concise ] ) would be respectively , where are singular values of the tracking matrix in descending order , and .it is obvious that the error difference by the two algorithm is . for noise free data ,if all image points are perfectly registered to the corresponding centroids of every views and the origin of the world system is set at the gravity center of the space points , i.e. , . 
then , the last column of the motion matrix in ( [ eq : affine_rigid_fact4 ] ) vanishes since , and the expression ( [ eq : affine_rigid_fact4_concise ] ) is equivalent to the rank-3 factorization ( [ eq : sim_affine ] ) . thus ,( [ eq : sim_affine ] ) is a special case of the augmented factorization after registration to the centroid .nonetheless , since the image centroid can not be accurately recovered due to the presence of outlying and missing data , the rank- algorithm will yield a big error as does not approach zero in this situation .suppose the rank-4 decomposition of ( [ eq : affine_rigid_fact4 ] ) yields a set of solutions .the decomposition is not unique since it is defined up to a nonsingular linear transformation as and .the recovery of the metric upgrading matrix , as discussed below , is different with that in the rank-3 factorization .the upgrading matrix is a nonsingular matrix which can be denoted as the following form .\ ] ] where denotes the first three columns of , and is the fourth column , i.e. , the last column of .suppose is the -th two - row submatrix of , then the upgraded motion matrix can be written as = \left [ \mathbf{a}_i | \mathbf{{c}}_i \right]\ ] ] the left submatrix of in ( [ eq : mi ] ) can be written as \ ] ] where is the focal length of cameras , and are the first two rows of the camera rotation matrix .let us denote , then , is constrained from ( [ eq : mhl ] ) as \ ] ] the above equation provides two independent constraints to , which is a positive semidefinite symmetric matrix with nine degree - of - freedom since it is defined up to a scale .thus , the matrix can be linearly solved via least squares given five or more images .furthermore , the submatrix can be decomposed from via extended cholesky decomposition as proved in . after recovering , the last column of the upgrading matrixis then determined straightforwardly . from the expression ( [ eq : mi ] ) , the projection equation ( [ eq : affine_projection4 ] ) can be written as it can be easily proved from ( [ eq : affine_projection4 m ] ) that the last column corresponds to the translation from the world coordinate system to the image system . under a given coordinate system , different values of will only result in a pure translation of the world system , however , it has no influence to the euclidean structure of the reconstructed object . thus , can be set freely as any 4-vector that is independent of the columns of so as to guarantee the nonsingularity of the resulted upgrading matrix .practically , may be constructed as follows .suppose the svd decomposition of is \left [ \begin { smallmatrix } \sigma_1 & 0 & 0 \\ 0 & \sigma_2 & 0 \\ 0 & 0 & \sigma_3\\ 0 & 0 & 0 \end{smallmatrix}\right ] [ \mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3]^t \nonumber\end{aligned}\ ] ] where and are two orthogonal matrices , and is a diagonal matrix of the three singular values .then , can be simply set as where is the second singular value , and is the last column of .the construction guarantees a good numerical stability in computing the inverse of , since the constructed matrix has the same condition number as .the above proposed augmented rank-4 affine factorization algorithm is summarized in algorithm [ alg : augmented ] . 1 .perform svd decomposition on the tracking matrix ; + 2 . obtain a set of rank-4 solutions of and ; + 3 . recover the metric upgrading matrix matrix ; + 4 . 
compute the euclidean solution from and .svd decomposition is a convenient technique for structure and motion factorization , however , svd only works when the tracking matrix is complete , i.e. , all features are tracked across the sequence . in practice ,missing data are inevitable since some features may get lost during the process of tracking due to occlusion or other factors .researchers proposed different alternative factorization approaches to handle missing data . in this section , a two - step alternative and weighted factorization algorithmsare introduced to handle missing data and image uncertainties .the essence of structure and motion factorization ( [ eq : affine_rigid_fact4 ] ) is equivalent to finding a set of rank-4 solutions and by minimizing the following frobenious norm . the basic idea of the two step factorization is to minimize the cost function ( [ eq : fact_fnormwhat ] ) over and alternatively until convergence , while leaving the other one fixed , i.e. , rcl f ( ) & = & _ -_f^2 [ eq : fact_pfw1 ] + f ( ) & = & _ -_f^2 [ eq : fact_pfw2 ] each cost function of the algorithm is indeed a convex function , and thus , a global minimum can be found .the algorithm converges fast if the tracking matrix is close to rank-4 , even with a random initialization .the idea has been adopted by several researches .different with the svd decomposition , the minimization process is carried out by least squares .let us rewrite the cost function ( [ eq : fact_pfw1 ] ) with respect to each feature as follows . where is the -th column of , and is the -th column of .thus , the least squares solution of is given as where is the moore - penrose pseudoinverse of matrix .the solution ( [ eq : lq_s2 ] ) can easily handle the missing data in the tracking matrix .for example , if some entries in is unavailable , one can simply delete those elements in and the corresponding columns in , or simply set those entries in to zeros , then , can still be solved from ( [ eq : lq_s2 ] ) via least squares . similarly , the second cost function ( [ eq : fact_pfw2 ] ) can be rewritten with respect to each frame as which yields the following least - square solution of the motion matrix . where the pseudoinverse , and denote the -th row of the matrices and , respectively . in case of missing elements ,one can simply set those entries in to zeros .the alternative algorithm is summarized in algorithm [ alg : alternative ] .measurement errors are inevitable in the process of feature detection and tracking . if prior knowledge about distribution of the errors is available , all elements of the approximation error can be weighted by taking account of the error distribution to increase the robustness and accuracy of the algorithm .the basic idea is to give each image measurement a weight according to its uncertainty .reliable features are assigned higher weights while unreliable features receive lower weights .the weighted factorization is formulated as follows . where denotes the hadamard product , which is an element - by - element multiplication of two matrices ; is an uncertainty matrix whose entries are weights derived from the confidence of the image measurements . the general weighted factorization could not be solved analytically using the singular value decomposition .many researchers have proposed different schemes to solve the problem . 
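both the plain alternation over m and s and the weighted problem just stated admit a compact implementation. the sketch below is an illustration, not the authors' code: missing entries are simply given zero weight, and each column of s and each row of m is updated by a small weighted least-squares solve.

```python
import numpy as np

def weighted_als(W, Wt, rank=4, iters=50, seed=0):
    """Alternating weighted least squares for W ~ M S.  Wt holds per-element
    confidence weights; a missing entry is handled by giving it weight zero."""
    rng = np.random.default_rng(seed)
    two_m, n = W.shape
    M = rng.standard_normal((two_m, rank))
    S = np.zeros((rank, n))
    for _ in range(iters):
        for j in range(n):                       # update each shape column
            w = Wt[:, j]
            S[:, j], *_ = np.linalg.lstsq(M * w[:, None], W[:, j] * w, rcond=None)
        for i in range(two_m):                   # update each motion row
            w = Wt[i, :]
            M[i, :], *_ = np.linalg.lstsq(S.T * w[:, None], W[i, :] * w, rcond=None)
    return M, S

# synthetic rank-4 tracking matrix with roughly 30 percent missing entries
rng = np.random.default_rng(1)
W = rng.standard_normal((24, 4)) @ rng.standard_normal((4, 80))
W += 0.01 * rng.standard_normal(W.shape)
Wt = (rng.random(W.shape) > 0.3).astype(float)   # 1 = observed, 0 = missing

M, S = weighted_als(W, Wt)
rms = np.sqrt(np.mean(((W - M @ S) ** 2)[Wt > 0]))
print("rms error on the observed entries:", round(float(rms), 4))
```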
in this section, the solution of ( [ eq : fact_weightedall ] ) will be obtained using the alternative factorization algorithm by solving and alternatively as follows .rcl f ( ) & = & _ _ j_j(_j-_j)_f^2 [ eq : fact_pfwighted1 ] + f ( ) & = & _ _ i^t_i^t(_i^t-_i^t)_f^2 [ eq : fact_pfweighted2 ] where denotes the -th column of and the -th row of .the close - form solutions of the shape and motion matrices can be obtained by least squares .rcl _ j & = & ( ( _ j))^((_j)_j ) , j=1 , , n [ eq : fact_lswighted1 ] + _ i^t & = & ( _ i^t ( _ i^t ) ) ( ( _ i^t))^ , i=1 , , m [ eq : fact_lsweighted2 ] where denotes the pseudoinverse of a matrix , and stands for the diagonal matrix generated by a vector . equations ( [ eq : fact_lswighted1 ] ) and ( [ eq : fact_lsweighted2 ] ) yield the least - square solutions of and .same as the alternative factorization , when there are any missing data in the tracking matrix , one can simply set those entries in to zeros .the alternative weighted factorization algorithm is summarized in algorithm [ alg : weighted ] , where the motion matrix can be initialized randomly or using provious estimation , while the initial value of the weight matrix , as will be discussed in next section , is estimated from the reprojection residuals .based on the rank-4 factorization algorithm proposed in the foregoing sections . a fast and practical scheme for outlier detectionis proposed in this section .outlying data are inevitable in the process of feature tracking . some most popular strategies in the computer vision field are based on the hypothesis - and - test scheme , which are computationally intensive .we will investigate the problem from a new viewpoint via the distribution of image reprojection residuals . both the svd - based and the alternative factorization - based algorithms yield a set of least - square solutions .the best fit model is obtained by minimizing the sum of the squared residuals between the observed data and the fitted values provided by the model .extensive experiments show that the least - square algorithms usually yield reasonable solutions even in the presence of certain amount of outliers and the reprojection residuals of these outlying data are usually outstandingly larger than those associated with inliers .suppose and are a solution set of the rank-4 factorization of a tracking matrix , the reprojection residuals can be computed by reprojecting the set of solutions back onto all images .let us define a residual matrix as follows .{2m\times n}\ ] ] where \ ] ] is the residual of point in both directions .the reprojection error of the point is defined by the euclidean distance of the image point and its reprojection , and the reprojection error of the entire sequence is defined by an matrix {m\times n}\ ] ] fig.[fig : s1 ] shows an example of the distribution of the error matrix ( [ eq : error_matrix ] ) , where 40 images are generated from 100 random 3d space points via affine projection .the image resolution is , and the images are corrupted by gaussian noise and 10% outliers .the added noise level is 3 pixels , and the outliers are simulated by random noise whose level ( standard deviation of the noise ) is set at 15 pixels .the real added noise and outliers are illustrated by an image as shown in fig.[fig : s1 ] ( a ) , where the grayscale of each pixel corresponds to the inverse magnitude of the error on that point , the darker the pixel , the larger the error magnitude on that point .the distribution of the real added outliers is depicted as a 
binary image in fig.[fig : s1 ] ( c ) , whose marked points correspond to the darker points in fig.[fig : s1 ] ( a ) .

[ fig : s1 : ( a ) the real added noise and outliers ; ( b ) the distribution of the reprojection error ( [ eq : error_matrix ] ) ; ( c ) the distribution of the added outlying data ; ( d ) the outliers segmented from the reprojection error by a single threshold ; ( e ) the distribution of the false positive error given by the thresholding ; ( f ) the false negative error given by the thresholding . ]

using the corrupted data , a set of motion and shape matrices was estimated by employing the rank-4 factorization algorithm , and the error matrix was then computed . the distribution of the reprojection error ( [ eq : error_matrix ] ) is illustrated in fig.[fig : s1 ] ( b ) , where each pixel corresponds to the reprojection error of that point . it is evident that the reprojection error and the real added noise have similar distributions : the points with large reprojection errors correspond to those with large noise levels . fig.[fig : s1 ] ( d ) shows the binary image of fig.[fig : s1 ] ( b ) obtained by simply applying a global threshold to the reprojection residuals . it is obvious from the test that almost all outliers are successfully segmented by a single threshold . the distributions of the false positive error ( inlier points classified as outliers by the given threshold ) and the false negative error ( outliers not detected by the thresholding ) are given in fig.[fig : s1 ] ( e ) and ( f ) , respectively . the false positive error is mainly caused by inliers with large noise ( which arguably should be treated as outliers ) , while the false negative error is caused by outliers with small deviations ( which can in fact be treated as inliers ) . both types of errors depend on the chosen threshold ; however , they do not have a significant influence on the final solutions . inspired by this observation , an intuitive outlier detection and robust factorization scheme is proposed . the flowchart of the strategy is shown in fig.[fig : s2 ] , and the computation details are given in algorithm [ alg : robust ] .

1 . normalize the tracking matrix via point - wise and image - wise rescalings , as in , to improve numerical stability ;
2 . perform rank-4 affine factorization on the tracking matrix to obtain a set of solutions of the motion and shape matrices ;
3 . estimate the reprojection residuals and determine a global threshold to segment the outliers from the distribution ( a small numerical sketch of this step is given after the list ) ;
4 . eliminate the outliers and recalculate the motion and shape matrices using the inlying data via algorithm [ alg : alternative ] ;
5 . estimate the uncertainty of each inlying feature from the distribution of the reprojection residuals ;
6 . refine the solutions by the weighted factorization algorithm [ alg : weighted ] ;
7 . recover the metric upgrading matrix and upgrade the motion and shape matrices to the euclidean space ;
8 . perform a global optimization via bundle adjustment .
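as a companion to steps 2 and 3 of algorithm [ alg : robust ] , the sketch below computes the residual matrix ( [ eq : residual_matrix ] ) , the per - point reprojection errors ( [ eq : error_matrix ] ) and a naive single - threshold outlier mask of the kind used for fig.[fig : s1 ] ( d ) . the row layout ( two consecutive rows per frame ) and the fixed threshold value are assumptions for illustration ; the data - driven threshold is described in the next section .

```python
import numpy as np

def reprojection_outliers(W, M, S, threshold):
    """Residuals, per-point reprojection errors and a global-threshold mask.

    W is the 2m x n tracking matrix; rows are assumed to alternate as
    (u_1, v_1, u_2, v_2, ...) over the m frames.  M and S are the rank-4
    motion and shape matrices recovered by the factorization.
    """
    R = W - M @ S                     # residual matrix, eq. (residual_matrix)
    Ru, Rv = R[0::2, :], R[1::2, :]   # u- and v-direction residuals
    E = np.sqrt(Ru**2 + Rv**2)        # reprojection errors, eq. (error_matrix)
    outlier_mask = E > threshold      # single global threshold, cf. fig. s1(d)
    return R, E, outlier_mask
```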
in algorithm [ alg : robust ] , steps 3 and 4 can be repeated for one more time to ensure a more refined inlying data and solutions . in practice , however , the repetition does not make much difference to the final results . during computation , the algorithms [ alg : alternative ] and [ alg : weighted ] are employed to handle the missing data and measurement uncertainties . concerning the initialization of the alternative algorithm , since an initial set of solutions have been obtained in the previous steps , these solutions can be used as initial values in the iteration so as to speed up the convergence of the algorithm .suppose the image noise is modeled by gaussian distribution , it can be verified that the reprojection residuals ( [ eq : residual_matrix ] ) also follow the same distribution as the image noise , while the reprojection errors ( [ eq : error_matrix ] ) follow distribution with codimention 2 .it is nature to assume that the noise at both coordinate directions in the image is independent and identically distributed ( i.i.d . ) .let be the -vector formed by the residual matrix , then , should also be gaussian .fig.[fig : s3 ] shows an example , we add gaussian noise and outliers to a tracking matrix , as shown in fig.[fig : s3 ] ( a ) , then , we compute a set of solutions using the proposed augmented factorization and calculate the reprojection residuals , whose distribution is also gaussian , as shown in fig.[fig : s3 ] ( b ) . + ( a ) + + ( b ) + let and be the mean and standard deviation of , respectively , and register the residual vector with respect to its mean , then , the outlier threshold can be chosen as where is a parameter , which can be set from 3.0 to 5.0 ( the result is not sensitive to this value , and we choose 4.0 in our experiment ) .the points whose absolute values of the registered residuals in either direction , or the registered reprojection errors , are greater than will be classified as outliers , i.e. , rcl = & & \{_i , j ||u_ij-| > |v_ij-| > + & & ( ( u_ij-)^2+(v_ij-)^2)^ > } [ eq : outlier_se ] since the residual vector contains outliers , which have signifycant influence to the estimation of the mean and standard deviation ( std ) due to the large deviations of the outliers . in practice, the mean is estimated from the data that are less than the median value of . while the standard deviation is estimated from the median absolute deviation ( mad ) as since the mad is resistant to outliers .the above computation usually guarantees a robust estimation of the mean and the standard deviation . as for the weights in weighted factorization, most researchers estimate it from the uncertainty of each feature based on the information such as sharpness and intensity contrast around its neighborhood .some researchers modeled the errors isotropically with different variances ; others adopted directional distribution to describe the uncertainties . the uncertainty is usually estimated during the process of feature detection and tracking or given as prior informationnonetheless , this information is unavailable in many practical applications . in our early study , it was shown that the uncertainty of each feature is generally proportional to the reprojection residual of that point .for example , from the structure and motion matrices computed at step 5 , the residuals of inlying data can be estimated . 
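before moving on to the weight estimation , the following sketch summarizes the robust threshold selection just described : the mean is taken from the residuals below their median , the standard deviation is obtained from the median absolute deviation , and points violating ( [ eq : outlier_se ] ) are flagged . the factor 1.4826 that converts the mad into a gaussian - consistent standard deviation is a standard constant assumed here , since the exact scaling is not spelled out above .

```python
import numpy as np

def robust_outlier_mask(R, rho=4.0):
    """MAD-based outlier rule of eq. (outlier_se).

    R   : 2m x n residual matrix (u rows interleaved with v rows)
    rho : multiplier on the robust standard deviation (3.0--5.0, here 4.0)
    """
    r = R.ravel()
    mu = np.mean(r[r < np.median(r)])                      # robust mean
    sigma = 1.4826 * np.median(np.abs(r - np.median(r)))   # MAD-based std
    tau = rho * sigma                                      # outlier threshold
    Ru, Rv = R[0::2, :] - mu, R[1::2, :] - mu              # registered residuals
    return (np.abs(Ru) > tau) | (np.abs(Rv) > tau) | \
           (np.sqrt(Ru**2 + Rv**2) > tau)
```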
as depicted in fig.[fig : s4 ] , it is evident that the reprojection residuals have very similar distribution as the added noise .based on this observation , the weight of each point can be estimated from the residual value obtained from the data after eliminating the outliers .the points with higher residual values have larger uncertainties , and thus , lower weights are assigned .the weight of each point at each coordinate direction is determined as follows in a shape like normal distribution . where the standard deviation is estimated from equation ( [ eq : sigma ] ) , is the -th element of the residual matrix ( [ eq : residual_matrix ] ) , and is a normalization scale . clearly , one point may have different weights at the two image directions based on the residual value at that direction .for the missing data and outliers , the corresponding weights are set as .direction.,title="fig : " ] +the preceding discussions are all based on the assumption that the scene is globally rigid or static . in case of nonrigid scenarios ,we can follow bregler s assumption that represent the nonrigid structure as a linear combination of a set of rigid shape bases , i.e. , where is the combination weight ; and is the number of bases . under this assumption , the imaging process of one image can be modeled as = \mathbf{a}_{i}\mathbf{s}_i+[\mathbf{c}_{i } , \cdots , \mathbf{c}_{i}]\nonumber\\ & = & \left [ { \omega_{i1 } \mathbf{a}_{i } } , \cdots , { \omega_{ik } \mathbf{a}_{i } } \right ] \left[\begin{smallmatrix } \mathbf{b}_1 \\ \vdots \\ \mathbf{b}_k \\ \end{smallmatrix}\right]+{ [ \mathbf{c}_{i } , \cdots , \mathbf{c}_{i}]}\nonumber\end{aligned}\ ] ] similar to rigid factorization , if all image points in each frame are registered to the centroid and relative image coordinates are employed , the translation term vanishes , i.e. , .consequently , the nonrigid factorization under affine camera model is expressed as }_{\mathbf{w}_{2m\times n } } = \underbrace{\left [ { { \begin{smallmatrix } { \omega_{11 } \mathbf{a}_1 } & \cdots & { \omega_{1k } \mathbf{a}_1 } \\\vdots & \ddots & \vdots \\ { \omega_{m1 } \mathbf{a}_m } & \cdots & { \omega_{mk } \mathbf{a}_m } \\ \end{smallmatrix } } } \right]}_{\mathbf{m}_{2m\times 3k } } \underbrace{\left[\begin{smallmatrix } \mathbf{b}_1 \\ \vdots \\ \mathbf{b}_k \\ \end{smallmatrix}\right]}_{\mathbf{b}_{3k\times n}}\ ] ] it is obvious from ( [ eq : nonrigid_fact ] ) that the rank of the tracking matrix is at most .previous studies on nonrigid sfm are based on the rank- constraint due to its simplicity .however , the expression ( [ eq : nonrigid_fact ] ) is based on the same assumption as ( [ eq : sim_affine ] ) that all image measurements are registered to the corresponding centroid of each frame . obviously , the assumption is not valid in the presence of outlying and missing data .similar to the rigid case , we can employ a similar augmented formulation like ( [ eq : affine_rigid_fact4 ] ) to circumvent the registration step . 
by adopting homogeneous expression ( [ eq : affine_projection4 ] ) ,the above nonrigid factorization can be expressed in the following augmented form .}_{\mathbf{w}_{2m\times n } } = \underbrace{\left[{\begin{smallmatrix } { \omega_{11 } \mathbf{a}_{1 } } & \cdots & { \omega_{1k } \mathbf{a}_{1 } } & \mathbf{c}_{1 } \\\vdots & \ddots & \vdots & \vdots\\ { \omega_{m1 } \mathbf{a}_{m } } & \cdots & { \omega_{mk } \mathbf{a}_{m } } & \mathbf{c}_{m } \\\end{smallmatrix } } \right]}_{\mathbf{m}_{2m\times ( 3k+1 ) } } \underbrace{\left[\begin{smallmatrix } \mathbf{b}_1 \\ \vdots \\ \mathbf{b}_k \\ \mathbf{t}_i^t \end{smallmatrix}\right]}_{\mathbf{{b}}_{(3k+1)\times n}}\ ] ] it is obvious that the rank of the tracking matrix becomes in this case .given the tracking matrix , the shape and motion matrices can be easily obtained via svd decomposition by imposing rank-( ) constraint . the expression ( [ eq : affine_nonrigid_fact4 ] )does not require any image registration , and thus , can directly work with outlying and missing data .based on the new formulation , the preceding proposed augmented factorization , alternative factorization , and weighted factorization algorithms can be directly extended to the nonrigid scenario .the only difference here with respect to the rigid case lies in the rank constraint applied to the tracking matrix .thus , a set of motion and structure matrices can be easily decomposed as shown in ( [ eq : affine_nonrigid_fact4 ] ) .obviously , the decomposition is not unique and we need to find a metric upgrading matrix to upgrade the solution to the euclidean space .then , the nonrigid structure and camera motion parameters can be factorized from and , respectively .a detailed discussion about this approach can be found in our early study .the proposed technique was validated and evaluated extensively on synthetic data and compared with other similiar algorithms in the literature . during the simulation, we produced 100 random space points within a cube of , and generated a sequence of 50 images from these points by affine projection .the following settings were used in the test : image resolution : pixel ; focal lengths : randomly varying from 500 to 550 ; rotation angles : randomly varying between and ; camera positions : randomly setting inside a sphere with a diameter of 40 ; average distance from the cameras to the object : 600 .these imaging conditions are very close to the assumption of affine projection .we compared the proposed rank-4 factorization algorithm with its rank-3 counterpart with respect to different image centroid displacements . during the test, different levels of gaussian white noise was added to the generated images and the centroid of each image was deviated with a displacement ; then , all imaged points were registered to the deviated centroid .this is a simulation of the situation when the centroid could not be reliably recovered due to missing and outlying data . using the misaligned data, we recover the motion and shape matrices using the svd factorization with the rank-4 and the rank-3 constraints , respectively ; then , reproject the solution back onto the images and calculate the reprojection residuals . in order to evaluate the performance of different algorithms ,we calculate the difference between the ground truth and the corresponding back - projected features , we call the average of all these errors as the mean reprojection variance , which is defined as follows . 
where is the noise - free tracking matrix , and and are the estimated motion and shape matrices , respectively . in order to obtain a statistically meaningful comparison , 100 independent tests were performed at each noise level . the mean reprojection variance with respect to different centroid displacements at different noise levels is shown in fig . [ fig : s4 ] , where the noise level is defined as the standard deviation of the gaussian noise .

as shown in fig . [ fig : s4 ] , it is evident that the miscalculated centroid has no influence on the proposed rank-4 factorization algorithm , whereas the centroid error has a huge impact on the performance of the rank-3 based algorithm . the test proves that the influence caused by the centroid errors is far more significant than that caused by the image noise . thus , the rank-4 affine factorization approach is a better choice in practice , especially in the presence of missing data , outliers , or large measurement errors .

in this test , we evaluated and compared the performance of the proposed approach with respect to other robust factorization algorithms in terms of accuracy and computational complexity . we added gaussian noise to the above generated image sequence and varied the noise level from 1 to 5 pixels in steps of 1 pixel . in the meantime , 5% and 20% outliers were added to the tracking matrix , respectively . using the contaminated data , we recovered the motion and shape matrices using the proposed technique . the mean reprojection variance at different noise levels and outlier ratios is shown in fig . [ fig : s5 ] . as a comparison , two closely related algorithms in the literature were implemented as well : one is an outlier correction scheme proposed by huynh et al. , and the other is proposed by ke and kanade based on l1 - norm minimization .

the results in fig . [ fig : s5 ] were evaluated by 100 independent tests , where direct stands for the regular rank-4 factorization algorithm without outlier rejection . here the reprojection variance was estimated only from the original inlying data by eliminating the added outliers so as to provide a fair comparison of the different approaches . it is obvious that the proposed scheme outperforms the other algorithms in terms of accuracy . the direct factorization algorithm yields significantly larger errors due to the influence of outliers , and the error increases with the ratio of outliers . the experiment also shows that all three robust algorithms are resilient to outliers ; as can be seen in fig . [ fig : s5 ] , the ratio of outliers has little influence on the reprojection variance of the three robust algorithms .

the complexity of the different approaches was compared in terms of real computation time in the above test . all algorithms were implemented in matlab 2009 on a lenovo t500 laptop with a 2.26 ghz intel cpu . the frame number was varied from 50 to 300 in steps of 50 so as to generate different sizes of the tracking matrix , and 10% outliers were added to the data . table [ tab : time ] shows the real computation time of the different algorithms . obviously , the complexity of the proposed scheme lies between those of and . this is because huynh s method does not include the weighted factorization , while the l1 - norm minimization in is computationally more intensive than the alternative factorization algorithm .

we then evaluated the performance of the augmented nonrigid factorization algorithm ( a minimal sketch of the rank decomposition assumed in this test is given below ) .
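before turning to the nonrigid experiment , the sketch below illustrates the rank-(3k+1 ) truncated - svd decomposition assumed in these tests ( k = 1 reduces to the rigid rank-4 case ) , together with one plausible reading of the mean reprojection variance ; the exact normalization of that metric is not spelled out above , so the implementation is an assumption .

```python
import numpy as np

def truncated_svd_factorization(W, K=1):
    """Rank-(3K+1) SVD factorization of the augmented tracking matrix.

    K = 1 gives the rigid rank-4 case; K > 1 is the nonrigid case with
    K shape bases.  Returns M (2m x r) and B (r x n) such that W ~ M @ B.
    """
    r = 3 * K + 1
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :r] * np.sqrt(s[:r])            # split singular values evenly
    B = np.sqrt(s[:r])[:, None] * Vt[:r, :]
    return M, B

def mean_reprojection_variance(W_true, M, B):
    """Average Euclidean distance between the noise-free points and their
    back-projections M @ B (one plausible reading of the metric)."""
    R = W_true - M @ B
    return np.mean(np.sqrt(R[0::2, :]**2 + R[1::2, :]**2))
```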
during the experiment , we generated a deformable space cube , which was composed of 21 evenly distributed rigid points on each side and three sets of dynamic points ( points ) on the adjacent surfaces of the cube that were moving outward . there are 252 space points in total , as shown in fig.[fig : s6 ] . using the synthetic cube , we generated 100 images by affine projection with randomly selected camera parameters . each image corresponds to a different 3d structure , and the image resolution is set at .

table [ tab : time ] : real computation time of different algorithms with respect to the frame number
  frame no .   50     100    150    200    250    300
  huynh        1.19   2.35   3.68   6.41   10.45  12.69
  ke           1.81   6.27   14.32  26.94  44.87  67.53
  proposed     1.27   3.93   8.12   14.28  22.40  32.13

for the above simulated image sequence , we added 3 pixels of gaussian noise to the images and also added 10% outliers to the tracking matrix . fig.[fig : s6 ] ( b)-(d ) show three noise- and outlier - corrupted images . using the contaminated data , all outliers were successfully removed and the motion and shape matrices were recovered by employing the proposed robust algorithm . the corresponding 3d dynamic structures recovered by the proposed approach are shown in fig.[fig : s6 ] ( f)-(h ) , respectively . it is evident that the deformable cubic structures are correctly retrieved by the proposed robust strategy .

the method was extensively tested on a number of real image sequences . we report the experimental results and evaluations on four real sequences in the paper . the first test is on a fountain base sequence captured in downtown san francisco . the sequence consists of 10 images , and on average 5648 features were tracked across the sequence using the feature tracking system . since the images contain a large portion of homogeneous and repetitive textures , it is hard to track this type of scene accurately , though visually only a few features were obviously mismatched . in order to test the algorithm , an additional 5% outliers were added to the tracking data . all these features , with disparities to the first image , are shown in fig.[fig : r1 ] . using the proposed scheme , the outliers were successfully rejected . after rejecting the outliers , the weighted alternative algorithm was employed to recover the motion and structure matrices . finally , the solution was upgraded to the euclidean space . as shown in fig.[fig : r1 ] , the structure looks realistic and most details are correctly recovered . the histogram of the reprojection residual matrix ( [ eq : residual_matrix ] ) with outliers is shown in fig.[fig : r2 ] ( a ) . the residuals largely conform to the assumption of normal distribution . it can be seen from the distribution that the outliers can be clearly distinguished from the inliers ; the computed threshold is shown in the figure . the histogram of the residuals of the detected inlying data is shown in fig.[fig : r2 ] ( b ) . obviously , the residual error is reduced significantly after rejecting the outliers .
as a quantitative evaluation , the final reprojection errors of the different approaches are tabulated in table [ tab : reprojection_error ] , from which we can see that the proposed scheme yields the lowest reprojection error .

the second sequence is a corner of hearst gym at uc berkeley . there are 12 images in the sequence , and on average 1890 features were tracked in total . the correctly detected inlying features , together with about 5% added outliers , are shown in fig.[fig : r3 ] . using the proposed robust algorithm , we successfully recovered the euclidean structure of the scene ; as shown in fig.[fig : r3 ] , all outliers are correctly detected and removed . as a comparison , the reprojection errors obtained using different algorithms are listed in table [ tab : reprojection_error ] , which shows that the proposed approach outperforms the other robust algorithms .

the third test is on a deformable dinosaur sequence from the literature . the sequence consists of 231 images with different movements and deformations of a dinosaur model . the image resolution is pixel and 49 features were tracked across the sequence . in order to test the robustness of the algorithm , an additional 8% outliers were added to the tracking data , as shown in fig.[fig : nr1 ] . using the proposed approach , all outliers were successfully rejected ; however , a few tracked features were also eliminated due to large tracking errors . the nonrigid extended approach was employed to recover the motion and structure matrices , and the solution was then upgraded to the euclidean space . fig.[fig : nr1 ] shows the reconstructed structure and wireframes . the vrml model is visually realistic and the deformation at different instants is correctly recovered , although the initial tracking data are not very reliable .

table [ tab : reprojection_error ] : final reprojection errors of different approaches on the four real sequences
  dataset    fountain   hearst   dinosaur   face
  huynh      0.736      0.742    0.926      0.697
  ke         0.579      0.635    0.733      0.581
  proposed   0.426      0.508    0.597      0.453

the histogram of the reprojection residual matrix ( [ eq : residual_matrix ] ) with outliers is shown in fig.[fig : nr2 ] ( a ) . the residuals largely conform to the assumption of normal distribution . as can be seen from the histogram , the outliers are clearly distinguished from the inliers . a threshold is computed from the mean and std of the distribution , and the histogram of the residuals produced by the final solution after rejecting the outliers is shown in fig.[fig : nr2 ] ( b ) , which shows a significant reduction of the residual errors . for comparison , we also extended the algorithms of huynh and ke to the nonrigid scenario , and the reprojection errors of the different algorithms are shown in table [ tab : reprojection_error ] . the proposed scheme yields the lowest reprojection error in this test .

the last experiment is on a face sequence with different facial expressions . the dataset was downloaded from fgnet at http://www-prima.inrialpes.fr/fgnet/html/home.html . we selected 200 images from the sequence . the image resolution is with 68 automatically tracked feature points obtained using the active appearance model ( aam ) . for test purposes , 8% outliers were added to the tracking data , as shown in fig.[fig : nr3 ] . the proposed robust algorithm was used to recover the euclidean structure of the face . fig.[fig : nr3 ] shows the reconstructed vrml model with texture and the corresponding wireframes from different viewpoints .
as demonstrated in the results ,different facial expressions have been correctly recovered by the proposed approach .the reprojection errors obtained from different algorithms are tabulated in table [ tab : reprojection_error ] . as in other experiments, the proposed robust approach also yields the best performance .in this paper , we first proposed a new augmented factorization framework which has been proved to be more accurate than classical affine factorization , especially in the situation when the centroid of the imaged features could not be reliably recovered due to the presence of missing and outlying data .then , we presented an alternatively weighted factorization algorithm to handle incomplete tracking data and alleviate the influence of large image noise .finally , a robust factorization scheme was designed to deal with contaminated data with outliers and missing points .the proposed technique requires no prior information of the error distribution in the tracking data , and it can be directly extended to nonrigid factorization , which was rarely discussed in the literature .extensive evaluations on both synthetic and real datasets demonstrated the advantages of the proposed scheme over previous methods .a. agudo , l. agapito , b. calvo , and j. m. montiel , `` good vibrations : a modal analysis approach for sequential non - rigid structure from motion , '' in _ proceedings of the ieee conference on computer vision and pattern recognition _ , 2014 , pp. 15581565 .a. agudo , b. calvo , and j. montiel , `` finite element based sequential bayesian non - rigid structure from motion , '' in _ computer vision and pattern recognition ( cvpr ) , 2012 ieee conference on_.1em plus 0.5em minus 0.4emieee , 2012 , pp .. p. m. aguiar and j. m. moura , `` rank 1 weighted factorization for 3d structure recovery : algorithms and performance analysis , '' _ ieee transactions on pattern analysis and machine intelligence _ , vol . 25 , no . 9 , pp . 11341149 , 2003 .i. akhter , y. sheikh , s. khan , and t. kanade , `` trajectory space : a dual representation for nonrigid structure from motion , '' _ ieee transactions on pattern analysis and machine intelligence _33 , no . 7 , pp .14421456 , 2011 .bazin , y. seo , r. hartley , and m. pollefeys , `` globally optimal inlier set maximization with unknown rotation and focal length , '' in _european conference on computer vision_.1em plus 0.5em minus 0.4em springer , 2014 , pp .803817 . c. bregler , a. hertzmann , and h. biermann ,`` recovering non - rigid 3d shape from image streams , '' in _ computer vision and pattern recognition , 2000 . proceedings .ieee conference on _ , vol .2.1em plus 0.5em minus 0.4emieee , 2000 , pp . 690696 . s. christy and r. horaud , `` euclidean shape and motion from multiple perspective views by affine iterations , ''_ ieee transactions on pattern analysis and machine intelligence _ , vol . 18 , no . 11 , pp .10981104 , 1996 . m. a. fischler and r. c. bolles , `` random sample consensus : a paradigm for model fitting with applications to image analysis and automated cartography , '' _ communications of the acm _ , vol .24 , no . 6 , pp .381395 , 1981 .a. gruber and y. weiss , `` multibody factorization with uncertainty and missing data using the em algorithm , '' in _ computer vision and pattern recognition , 2004 .cvpr 2004 .proceedings of the 2004 ieee computer society conference on _ , vol .1.1em plus 0.5em minus 0.4emieee , 2004 , pp .i707 .d. q. huynh , r. hartley , and a. 
heyden , `` outlier correction in image sequences for the affine camera , '' in _ computer vision , 2003. proceedings .ninth ieee international conference on_.1em plus 0.5em minus 0.4em ieee , 2003 , pp . 585590 .q. ke and t. kanade , `` robust l 1 norm factorization in the presence of outliers and missing data by alternative convex programming , '' in _ 2005 ieee computer society conference on computer vision and pattern recognition ( cvpr05 ) _ , vol .1.1em plus 0.5em minus 0.4emieee , 2005 , pp . 739746 .z. liu , p. monasse , and r. marlet , `` match selection and refinement for highly accurate two - view structure from motion , '' in _european conference on computer vision_.1em plus 0.5em minus 0.4emspringer , 2014 , pp . 818833 .d. d. morris and t. kanade , `` a unified factorization algorithm for points , line segments and planes with uncertainty models , '' in _ computer vision , 1998 .sixth international conference on_.1em plus 0.5em minus 0.4emieee , 1998 , pp . 696702 .i. nurutdinova and a. fitzgibbon , `` towards pointless structure from motion : 3d reconstruction and camera parameters from general 3d curves , '' in _ proceedings of the ieee international conference on computer vision _, 2015 , pp . 23632371 .t. okatani , t. yoshida , and k. deguchi , `` efficient algorithm for low - rank matrix factorization with missing components and performance comparison of latest algorithms , '' in _ 2011 international conference on computer vision_.1em plus 0.5em minus 0.4emieee , 2011 , pp .842849 .j. oliensis and r. hartley , `` iterative extensions of the sturm / triggs algorithm : convergence and nonconvergence , '' _ ieee transactions on pattern analysis and machine intelligence _ , vol .29 , no .12 , pp . 22172233 , 2007 .m. paladini , a. del bue , j. xavier , l. agapito , m. stoi , and m. dodig , `` optimal metric projections for deformable and articulated structure - from - motion , '' _ international journal of computer vision _ , vol . 96 , no . 2 , pp .252276 , 2012 .g. qian , r. chellappa , and q. zheng , `` bayesian algorithms for simultaneous structure from motion estimation of multiple independently moving objects , '' _ ieee transactions on image processing _ , vol .14 , no . 1 ,pp . 94109 , 2005 .b. resch , h. lensch , o. wang , m. pollefeys , and a. sorkine - hornung , `` scalable structure from motion for densely sampled videos , '' in _ proceedings of the ieee conference on computer vision and pattern recognition _ , 2015 , pp .39363944 .d. scaramuzza , `` 1-point - ransac structure from motion for vehicle - mounted cameras by exploiting non - holonomic constraints , '' _ international journal of computer vision _ ,95 , no . 1, pp . 7485 , 2011 .p. sturm and b. triggs , `` a factorization based algorithm for multi - image projective structure and motion , '' in _european conference on computer vision_.1em plus 0.5em minus 0.4emspringer , 1996 , pp .709720 .j. taylor , a. d. jepson , and k. n. kutulakos , `` non - rigid structure from locally - rigid motion , '' in _ computer vision and pattern recognition ( cvpr ) , 2010 ieee conference on_.1em plus 0.5em minus 0.4em ieee , 2010 , pp .27612768 .l. torresani , a. hertzmann , and c. bregler , `` nonrigid structure - from - motion : estimating shape and motion with hierarchical priors , '' _ ieee transactions on pattern analysis and machine intelligence _ , vol .30 , no . 5 , pp . 878892 , 2008 .b. 
triggs , `` factorization methods for projective structure and motion , '' in _ computer vision and pattern recognition , 1996 .proceedings cvpr96 , 1996 ieee computer society conference on_.1em plus 0.5em minus 0.4emieee , 1996 , pp .845851 .g. wang and q. j. wu , `` perspective 3-d euclidean reconstruction with varying camera parameters , '' _ ieee transactions on circuits and systems for video technology _ , vol .19 , no . 12 , pp . 17931803 , 2009 .g. wang , and q. j. wu , `` quasi - perspective projection model : theory and application to structure and motion factorization from uncalibrated image sequences , '' _ international journal of computer vision _ , vol .87 , no . 3 , pp .213234 , 2010 .g. wang , j. s. zelek , and q. j. wu , `` structure and motion recovery based on spatial - and - temporal - weighted factorization , '' _ ieee transactions on circuits and systems for video technology _ , vol .22 , no .11 , pp . 15901603 , 2012 .g. wang , j. s. zelek , and q. j. wu , `` robust structure from motion of nonrigid objects in the presence of outlying and missing data , '' in _ computer and robot vision ( crv ) , 2013 international conference on_.1em plus 0.5em minus 0.4emieee , 2013 , pp .159166 .g. wang , j. s. zelek , q. j. wu , and r. bajcsy , `` robust rank-4 affine factorization for structure from motion , '' in _ applications of computer vision ( wacv ) , 2013 ieee workshop on_.1em plus 0.5em minus 0.4em ieee , 2013 , pp .180185 .h. wang , t .- j . chin , and d. suter , `` simultaneously fitting and segmenting multiple - structure data with outliers , '' _ ieee transactions on pattern analysis and machine intelligence _ , vol .34 , no . 6 , pp .11771192 , 2012 .j. yu , t .- j . chin , and d. suter , `` a global optimization approach to robust multi - model fitting , '' in _ computer vision and pattern recognition ( cvpr ) , 2011 ieee conference on_.1em plus 0.5em minus 0.4em ieee , 2011 , pp. 20412048 .
the paper proposes a new scheme to improve the robustness of 3d structure and motion factorization from uncalibrated image sequences . first , an augmented affine factorization algorithm is proposed to circumvent the difficulty of image registration with imperfect data . then , an alternatively weighted factorization algorithm is designed to handle missing data and measurement uncertainties in the tracking matrix . finally , a robust structure and motion factorization scheme is proposed to deal with outlying and missing data . the novelty and main contributions of the paper are as follows : ( i ) the augmented factorization algorithm is a new addition to the previous affine factorization family for both rigid and nonrigid objects ; ( ii ) it is demonstrated that image reprojection residuals are in general proportional to the error magnitude in the tracking data , and thus the outliers can be detected directly from the distribution of image reprojection residuals , which are then used to estimate the uncertainty of the inlying measurements ; ( iii ) the robust factorization scheme is shown empirically to be more efficient and more accurate than other robust algorithms ; and ( iv ) the proposed approach can be directly applied to nonrigid scenarios . extensive experiments on synthetic data and real images demonstrate the advantages of the proposed approach . computer vision , structure from motion , robust factorization , weighted factorization , reprojection residual , outlier detection .
of electromagnetic waves in transmission through various structures has been of fundamental importance in a great number of applications . through interaction of waves with matterit is possible to control the wave intensity , polarization and propagation direction .the simplest devices for wave control , such as optical lenses and mirrors , have evolved into numerous appliances operating with radiation from radiowaves to ultraviolet : fresnel and dielectric lenses , antenna arrays , etc .almost all known structures for wave manipulations perform a particular functionality in the frequency range they operate , while being not transparent and casting a `` shadow '' ( or creating some disturbance ) at other frequencies .figure [ fig : fig1a ] illustrates such functionality by the example of newton s prism .the prism designed for light of one color inevitably disturbs the paths of light of other colors . on the other hand , designing structures that manipulate waves only of specific frequencies ( see fig . [fig : fig1b ] ) , not interacting with radiation of other frequencies , would enable new exciting opportunities .in particular , such devices performing different functionalities at different frequencies could be cascaded and even combined in one _ single _ structure ( if its constitutional elements are of several different types ) [ see fig .[ fig : fig1c ] ] .+ even arrays of very small and non - resonant elements inevitably reflect and absorb electromagnetic waves , and here we focus on new design solutions which allow minimization of such parasitic interactions everywhere except the desired operational band . to the best of our knowledge ,such frequency - selective control of electromagnetic radiation has been explored only in artificial composites manipulating reflection and absorption of incident waves . to complete the full set of functionalities for wave control ,it is necessary to design a frequency - selective transmitter , i.e. a structure that tailors transmitted fields of incident waves of desired frequencies and passes through the others .such `` transmitters '' can be integrated into a cascade of different devices independently performing multifunctional multifrequency operations .obviously , conventional optical and microwave lenses can not take on the role of such a structure .another candidate could be transmitarray antennas ( also called array lenses ) invented several decades ago .they significantly extended our opportunities for wave control , enabling wavefront shaping and beam scanning .conventional transmitarray antennas incorporate a ground plane with the receiving and transmitting antenna arrays on its sides connected by matched cables . therefore , transmitarray antennas can not pass through the incident radiation at the frequencies beyond their bands , casting a shadow .recently , there have been considerable interest and progress in manipulation of electromagnetic waves using metamaterials . 
in this scenario ,the structure represents a composite comprising a two - dimensional array of sub - wavelength elements ( so - called _ metasurface _ ) .the elements are electrically and magnetically polarizable so that the dipole moments induced in each element form huygens pairs .therefore , each element does not scatter in the backward direction ( producing zero reflection ) , while it radiates waves with the prescribed phase and amplitude in the forward direction .the forward scattered waves from the elements together with the incident wave form the transmitted wave .it should be noted that in all known transmitarray metasurfaces the structural elements constitute reflectionless huygens sources only inside a narrow frequency band . beyond thisband reflections appear because of prevailing excitation of either electric or magnetic dipole .this is due to the fact that these elements have different frequency dispersions of the electric and magnetic dipole modes ( see ) . in order to design a structure that transforms waves but is invisible for the incident radiation outside of the operational band ( see fig . [fig : fig1b ] ) , its elements should be designed in such a way that the electric and magnetic dipole moments , induced in them , are balanced ( have equal amplitudes ) at all practically relevant frequencies .this implies that both dipole responses should be created by excitation of the same resonant mode ( the dipole moments are formed by the same current distribution in the element ) .such regime is possible only if each element consists of a _ single _ conductive wire or strip .these wire elements inevitably possess bianisotropic properties .interestingly , being narrowband , the single - wire huygens elements do not produce reflections in a very broad range of frequencies . in the earlier work , such scenariowas realized only for absorbers , but not for transmitarrays . in this paper, we synthesize a uniaxial ( isotropic in its plane ) low - loss reciprocal metasurface that transforms the wavefront of incident waves in a desired manner ( in transmission ) at a desired frequency , remaining transparent in a wide frequency band .we analyse all possible scenarios of realization of such a metasurface and determine the unique requirement for the electromagnetic response of its elements .we design and test two synthesized metasurfaces that demonstrate their abilities for wavefront shaping and anomalous refraction . moreover , we propose a promising approach to design multifunctional cascaded metasurfaces that provide different operations at different frequencies ( similarly to the conceptual example in fig .[ fig : fig1c ] ) .our approach , generally , can be extended to volumetric metamaterials .we find design solutions for integrated metasurfaces that provide three basic functions such as full control over reflection , absorption and transmission properties .based on these metasurfaces , one can synthesize arbitrary cascaded composites for general multifunctional wave manipulation .manipulation of waves transmitted through a thin metasurface can be accomplished due to specifically designed phase gradient over the metasurface plane ( e.g. , ) .the phase gradient can be achieved by precise adjustment of the phases of transmitted waves from each metasurface inclusion . to adjust the phases for each inclusion , we utilize so - called locally uniform homogenization approach , i.e. 
we tune an individual inclusion , assuming that it is located in an array with a uniform phase distribution .an array of such individually adjusted inclusions possesses nearly the required non - uniform phase distribution .therefore , it is important to design individual inclusions so that uniform arrays formed by them transmit incident waves , conserving its amplitude but changing its phase by a specific value ( different for each inclusion ) that belongs to the interval from 0 to .next , we examine all possible scenarios of designing metasurface elements that satisfy these conditions .let us consider a reciprocal metasurface ( the same transmission properties from both sides ) as a two - dimensional periodic array of sub - wavelength bianisotropic inclusions polarizable electrically and magnetically .the ability of the inclusions to get polarized in the external electric and magnetic fields is described , respectively , by the effective polarizability dyadics and , where denotes the transpose operation .bianisotropy implies that the electric ( magnetic ) field of the incident wave can also produce magnetic ( electric ) polarization in the inclusions .this effect is often called magnetoelectric coupling and can be characterized by the magnetoelectric polarizability dyadic , which for reciprocal structures is connected with the electromagnetic polarizability as . considering the uniaxial symmetry of the metasurface ,it is convenient to represent the polarizability dyadics in the following form : where and are the transverse unit and vector - product dyadics , respectively , and the indices and refer to the symmetric and antisymmetric parts of the corresponding dyadics .taking into account the reciprocity of the metasurface , equations ( [ eq : uniaxial1 ] ) can be rewritten as assuming that the incident wave impinges on the uniaxial metasurface normally along the -axis , the electric fields of the reflected and transmitted plane waves from the metasurface are given by \cdot\mathbf{e}_{\rm inc } , \vspace*{.2cm}\\ \displaystyle \label{eq : q21}\ ] ] \right)\overline{\overline{i}}_{\rm t } + \frac{j\omega}{s } \widehat{\alpha}^{\rm co}_{\rm em}\overline{\overline{j}}_{\rm t}\right]\cdot\mathbf{e}_{\rm inc } , \label{eq : q22}\ ] ] where is the angular frequency , is the area of the array unit cell , and is the free - space wave impedance . as discussed in the introduction , to realize broadband reflectionless regime , the metasurface elements must be bianisotropic single - wire inclusions ( see examples in fig . [fig : fig2 ] ) . in the literature ,bianisotropy is usually classified to two classes : chiral class with symmetric electromagnetic dyadic ( ) and omega class , when the dyadic is antisymmetric ( ) . based on this classification , for the sake of clarity, we consider this two cases separately .for a uniform array of single - wire omega inclusions ( see fig . 
[fig : fig2a ] ) the following relation between the effective polarizabilities of each inclusion holds : substituting from ( [ eq : q3 ] ) in ( [ eq : q21 ] ) , we find the fields of reflected waves from the omega metasurface : \cdot\mathbf{e}_{\rm inc}.\vspace*{.2cm}\\ \displaystyle \label{eq : q4}\ ] ] thus , the condition of zero reflection essential for our transmitarray ( ) implies a limitation on the effective polarizabilitities : this limitation on the effective polarizabilities ( which takes into account interactions between the inclusions ) leads to a corresponding limitation for the individual polarizabilities ( modelling the properties of an individual particle in free space ) : .this condition , obviously , can not be satisfied with passive inclusions . indeed , the opposite signs of the electric and magnetic polarizabilities imply that their imaginary parts have the opposite signs .this scenario corresponds to the case of a passive - active pair of dipole moments .furthermore , one can see from ( [ eq : q22 ] ) [ assuming ] and ( [ eq : q5 ] ) that in this case the phase of the transmitted wave through the metasurface is always equal to that of the incident wave ( ) .thus , it is impossible to synthesize a transmitarray with the desired properties using single - wire omega elements . likewise , effective polarizabilities of chiral single - wire inclusions ( see fig .[ fig : fig2b ] ) in a uniform array are related to one another as follows : one can see from ( [ eq : q21 ] ) that in the case of a chiral metasurface ( ) , the condition of zero reflection ( ) simply requires the balanced electric and magnetic dipoles of each metasurface inclusion . taking this result into account and combining with relation ( [ eq : q6 ] ) , the transmitted fields through the chiral metasurface ( [ eq : q22 ] )can be written as \cdot\mathbf { e}_{\rm inc } , \label{eq : q10}\ ] ] where the upper and lower signs correspond to chiral inclusions with the right and left handedness , respectively . from ( [ eq : q10 ] ) it is seen that , generally , the polarization of the wave transmitted through a chiral transmitarray is different from that of the incident wave . in designs of conventional transmitarraysalmost always it is assumed that the polarization of the wave passing through a transmitarray does not change .however , in many applications polarization - plane rotation of transmitted waves ( in focusing arrays designed for circularly polarized waves , for example ) , is acceptable .thus , it is important to consider also the case when the transmitarray transforms the incident wave polarization , since if there is no requirement for keeping the polarization constant , there is more design freedom in transmitarrays realizations .therefore , we look for a solution for the transmitted field in the most general form of elliptical polarization : where is the phase difference between the two orthogonal components of the elliptically polarized transmitted field , and are the semi - major and semi - minor axes of the polarization ellipse ( real values ) , and is the phase shift between the incident wave ( assumed to be linearly polarized ) and the elliptically polarized transmitted wave . comparing ( [ eq : q10 ] ) and ( [ eq : q11 ] ) , we find where symbol denotes the phase angle of the exponential representation of a complex number . 
from the energy conservation in lossless metasurfacesit follows that , which connects the real and imaginary parts of the electric polarizability of each unit cell : using ( [ eq : q14 ] ) , we can rewrite ( [ eq : q12 ] ) and ( [ eq : q13 ] ) as and . it should be noted that in order to achieve the maximum efficiency , all the elements of the transmitarray must radiate waves of the same polarization , ensuring constructive interferencethis implies that the polarization parameters and should be equal for all the elements .therefore , from ( [ eq : q15 ] ) one can see that the imaginary part of the polarizability must be the same for all the elements .evidently , in this case , from ( [ eq : q16 ] ) we see that the phases of the transmitted waves from each element are equal and can not be adjusted arbitrarily .this fact forbids designing efficient transmitarrays for wavefront control with single - wire chiral inclusions . in the previous sections it was shownthat design of a transmitarray which is `` invisible '' beyond its operational band requires the use of bianisotropic single - wire inclusions . on the other hand , it was demonstrated that bianisotropic arrays of single - wire inclusions do not provide full phase control from 0 to .the only solution to overcome these two contradictory statements is designing a transmitarray whose each unit cell consists of bianisotropic inclusions , being in overall not bianisotropic .this situation is possible if the bianisotropic effects of the inclusions in a single unit cell are mutually compensated . to realize it, one can compose a unit cell of inclusions with the opposite ( by sign ) bianisotropy parameters . therefore , there can be two different but equivalent scenarios : a unit cell consists of chiral inclusions with left and right handedness and a unit cell consists of oppositely oriented omega inclusions . in both these casesthe bianisotropic effects are completely compensated and the unit cell behaves as a pair of orthogonal electric and magnetic dipoles . however , in contrast to the well known unit cells consisting of a split ring resonator and a continuous wire , this anisotropic unit cell made of bianisotropic elements is reflectionless and `` invisible '' over a very broad frequency range .the field expressions ( [ eq : q21 ] ) and ( [ eq : q22 ] ) for the array of single - wire inclusions with compensated bianisotropy we rewrite as \cdot\mathbf{e}_{\rm inc } , \vspace*{.2cm}\\ \displaystyle \label{eq : q31}\ ] ] \cdot\mathbf{e}_{\rm inc } , \label{eq : q32}\ ] ] where reflection from the metasurface is suppressed , only if the dipole moments of the unit cells are balanced .assuming that the effective polarizabilities of lossless inclusions in a periodic array can be written as one can find from ( [ eq : q32 ] ) the fields transmitted through the metasurface : \cdot\mathbf{e}_{\rm inc}=e^{\displaystyle -j \phi_{\rm t } } \cdot\mathbf{e}_{\rm inc } , \label{eq : phase2}\ ] ] where figure [ fig : fig3a ] shows the amplitude and phase of the transmitted wave , dictated by ( [ eq : phase2 ] ) and ( [ eq : phase3 ] ) , through a uniform anisotropic array of single - wire inclusions . 
herewe have assumed that the real part of the individual polarizability of the unit cell has lorentzian dispersion , where and have been chosen to correlate with the numerical results described in the next section .it is seen that the amplitude of the transmitted wave is identically equal to unity at all frequencies , while its phase spans a full range ( the arctangent function in ( [ eq : phase3 ] ) varies over , therefore , varies over ) .similar frequency dispersions were explored in . since in our transmitarrayall the unit cells should operate at the same frequency , the required phase variations can be achieved by adjusting the polarizability according to ( [ eq : phase3 ] ) . the simplest way to control the polarizability strength of the unit cell is to proportionally scale all the sizes of its inclusions . as seen from fig .[ fig : fig3a ] , at the resonance ( ghz ) , the phase of transmission is .if we fix this frequency as the operational one , downscaling all the dimensions of the unit - cell inclusions will result in a phase increase ( from towards 0 ) of the transmitted wave at the operational frequency .upscaling the inclusions , vice versa , will lead to a phase decrease ( from towards ) .it is simple to prove that a metasurface possessing only electric dipole response ( ) can not provide full phase variation of transmission .indeed , in this case reflections from the metasurface inevitably appear and the phase of the transmitted wave spans only the range .therefore , metasurfaces possessing solely electric dipole response ( commonly called in the literature as single - layer frequency selective surfaces ) can not have 100% efficiency . in summary , our analysis shows that broadband reflectionless uniaxial transmitarrays can be realized only with bianisotropic single - wire inclusions whose magnetoelectric coupling is compensated on the level of the unit cell . in this case , the polarization of the transmitted wave is the same as that of the incident one .importantly , polarization plane rotation is impossible in such transmitarrays .based on the preceding theoretical analysis , we synthesize transmitarrays from chiral helical inclusions ( see fig . [fig : fig2b ] ) , compensating chirality on the level of the unit cell .alternatively , one could use inclusions with omega electromagnetic coupling . without loss of generality , in this paper we design transmitarrays operating in microwaves on account of peculiarities of the inclusions fabrication .arrays of helical inclusions operating at infrared frequencies can be manufactured based on fabrication technologies reported in .first , it is important to design the unit - cell topology with suppressed chirality . to ensure uniaxial symmetry, the unit cell should contain helices oriented in two orthogonal directions in the metasurface plane .we utilize the arrangement of helices proposed in and shown in fig .[ fig : fig4a ] .the unit cell includes two blocks of left - handed and two blocks of right - handed helices .the sub - wavelength size of the inclusions ensures that the unit - cell size is smaller than the operational wavelength .therefore , the array of such unit cells can be modelled as sheets of homogeneous surface electric and magnetic currents , and the reflected and transmitted plane - wave fields are determined by expressions ( [ eq : q31 ] ) and ( [ eq : q32 ] ). 
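the short numpy sketch below reproduces the qualitative behaviour of fig.[fig : fig3a ] : a lossless , balanced huygens sheet with a lorentzian collective polarizability transmits with unit amplitude while its phase sweeps the full range around the resonance . the resonance frequency , the strength constant and the compact form t = ( 1 - jq)/(1 + jq ) with q proportional to the frequency times the polarizability are illustrative assumptions consistent with the description of ( [ eq : phase2 ] ) and ( [ eq : phase3 ] ) , not the parameters of the fabricated arrays .

```python
import numpy as np

f = np.linspace(2e9, 8e9, 1000)        # frequency sweep, Hz
w = 2 * np.pi * f
f0 = 4.5e9                             # assumed resonance frequency
w0 = 2 * np.pi * f0
C = 0.2 * w0                           # assumed strength/bandwidth constant
alpha = C / (w0**2 - w**2)             # lossless Lorentzian-type dispersion
q = w * alpha                          # normalized collective response
t = (1 - 1j * q) / (1 + 1j * q)        # transmission coefficient, |t| = 1
phase_deg = np.degrees(np.angle(t))    # sweeps the full (-180, 180) range
print(np.allclose(np.abs(t), 1.0))     # True: no reflection, no absorption
```

in this normalized model the transmission phase equals -2*arctan(q ) , so it approaches plus or minus 180 degrees at the resonance , in agreement with the statement above that the arctangent spans half a period while the transmission phase spans a full one .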
figure [ fig : fig3b ] shows numerically calculated amplitude and phase of transmission coefficients through an infinite periodic array of the unit cells shown in fig .[ fig : fig4a ] . the unit - cell dimensions in this example were chosen as follows : mm , mm ( the distance between the center of the block and the center of helices ) .the helices have the pitch ( the height of one turn ) mm , and the radius of the turn mm .the radius of the inclusion wire is mm . as one can see from fig .[ fig : fig3b ] , the transmittance is more than 88% at all frequencies , while the phase of transmission spans nearly full range from 3.5 ghz to 5.5 ghz .in contrast to the theoretical results in fig . [fig : fig3a ] , in this case transmission is not unity at the resonance due to some dissipation of energy in copper helices . as it was discussed in the previous section , the phase control of the transmitted waves can be accomplished by proportional scaling the inclusions dimensions .the phase variation is engineered , for simplicity , only along one direction , along the -axis . in our design of transmitarrays with a non - uniform phase distributionwe tune the phase individually for each block of helices ( not the entire unit cell ) to ensure smoother phase gradient over the transmitarray plane ( see fig .[ fig : fig4b ] ) .although in this case chirality of adjacent in the -direction blocks is not completely compensated ( because the helices in the blocks have slightly different sizes and polarizability amplitudes ) , overall , the chirality effect is nearly suppressed due to a great number of different unit cells . based on the preceding theoretical analysis , we design two transmitarrrays with different functionalities in order to demonstrate the potential of the approach .these examples show how to manipulate the direction of wave propagation as well as the wavefront shape .in this example we synthesize a transmitarray that refracts normally incident waves ( along the -direction ) at an angle in the -plane . to achieve the effect of anomalous refraction, we need to tune the inclusions dimensions in every block so that there is a linear phase gradient of transmission along the -direction of the array .thus , from the phased arrays theory , the array should be periodical along the -direction with the period mm , where is the wavelength at the operational frequency 4.32 ghz . the phase of transmission changes from 0 to along one period .the periodicity of the array in the direction is ( the period of the unit cell ) , since along this direction there is no phase variation . in order to ensure smooth phase variations, we place the maximal number of inclusions blocks with prescribed phases along the period . based on the dimensions of the helices ( about ) ,we form the period of six blocks of helices , i.e. mm . in this examplethe spacing between the helices in the blocks mm .the dimensions of the helices in each block are listed in table [ tabl : table2 ] in appendix [ app:1 ] .the simulated results for the designed transmitarray are shown in fig .[ fig : fig5a ] . indeed , the structure refracts the incident wave at from the normal . at the operating frequency 4.32 ghz( see fig . [fig : fig5b ] ) the transmittance from the structure reaches 83% . 
non - zero reflection of 5% and absorption of 12% in the transmitarray result from the non - ideal impedance equalization .remarkably , the transmitarray passes through more than 95% of the incident power ( without its modification ) beyond its operational band from 4.13 to 4.47 ghz ( see fig . [fig : fig5b ] ) . at very high frequenciessome parasitic reflections from the transmitarray appear .they are caused by the higher - order resonances in the double - turn helices of the transmitarray and occur near the triple operating frequency at 13.2 ghz .at very low frequencies , the transmitarray inclusions are not excited by incident waves and , therefore , are nearly fully transparent . in order to demonstrate the ability of wavefront shaping, we design a transmitarray that focuses normally incident plane waves in a line parallel to the -axis . due to reciprocity, the metasurface illuminated by a line source from the focal point transmits a collimated beam .such lens performance requires that the phase gradient of the transmitarray has a parabolic profile .the designed focal distance of the lens is just a fraction of the operational wavelength .such a short focal distance is provided by the sub - wavelength sizes of the helices .the dimensions of the blocks of helices in this example are as follows : mm and mm .the lens is infinite along the -axis with the periodicity equal to the size of one unit cell . along the -directionthe lens is mm long and contains 29 blocks of helices .the parabolic phase gradient dictated by is achieved due to precise tuning of the inclusions dimensions in each block ( described in table [ tabl : table3 ] in appendix [ app:1 ] ) . here, is the coordinate , is the phase of transmission in the center of the transmitarray ( chosen arbitrarily ) and is the wavelength at 4 ghz . to test the performance of the designed lens , we illuminated it by a source of cylindrical waves located at the focal distance from the lens .the simulation results at the operating frequency ghz ( the actual frequency was shifted from the designed one ) are presented in fig .[ fig : fig6 ] .as expected , the lens transforms the cylindrical wavefront of the incident wave into a planar one .next , experimental testing of the designed lens was conducted in a parallel - plate waveguide ( fig .[ fig : fig7a ] ) . according to the image theory ,images of chiral inclusions placed between the plates of the waveguide represent equivalent chiral inclusions with the opposite handedness .therefore , it is enough to place only one row of blocks ( one half of each unit cell ) inside the waveguide ( see fig . [fig : fig7b ] ) .effectively , it emulates full unit cells ( fig .[ fig : fig4a ] ) periodically repeated along the -direction .the helical inclusions were fabricated with precision 0.01 mm and embedded in rohacell-51hf material with and for mechanical support .the transmitarray was excited by a monopole antenna oriented along the -axis and placed in the focal point at mm .the bottom plate of the waveguide incorporates a copper mesh with the period of 5 mm ( see fig .[ fig : fig7b ] ) . due to the deeply sub - wavelength periodicity, the mesh practically does not disturb the fields inside the waveguide . 
on the other hand , outside of the waveguide there are decaying fields in the near proximity of the mesh .the electric field distribution inside the waveguide can be analysed through these near fields measured by a small probe antenna ( fig .[ fig : fig7a ] ) .more detailed information about the measurement set - up can be found in .the measured electric field distribution inside the waveguide at the resonance frequency 3.86 ghz is shown in fig .[ fig : fig8a ] .+ one can see that the fabricated lens in fact transforms the cylindrical wavefront of the incident wave into a planar one . according to fig .[ fig : fig8b ] and fig .[ fig : fig8c ] , as expected , the lens does not interact with the incident waves beyond the operational band .incident waves pass through the structure without attenuation and wavefront transformations .this experimental result confirms our theoretical findings .in this section we explore the possibility for integration of the designed transmitarrays in a cascade of metasurfaces . to highlight the three basic functionalities for wave control , such as manipulation of reflection , transmission and absorption properties, we design and test numerically a composite layer consisting of three cascaded metasurfaces with the corresponding properties ( see fig .[ fig : fig9a ] ) .the incident wave illuminates the cascade normally from the -direction .the first metasurface illuminated by the incident wave is a so - called metamirror proposed in .it nearly fully reflects normally incident waves at 5 ghz at an angle from the normal .the second metasurface was designed to totally absorb incident radiation at 6 ghz .it represents a composite of double - turn helices similar to that described in but tuned to operate at another frequency .all the helices in the composite have the same dimensions : the helix pitch is mm , the helix radius mm and the wire radius mm .the helices are made of lossy nichrome with the conductivity about s / m .the third cascaded metasurface is the lens designed in the present work and operating at 3.9 ghz .all the metasurfaces consist of 29 blocks and have the same spacing 14.14 mm between the adjacent blocks .the second ( middle ) metasurface is located at the origin of the coordinate system , while the first and the third structures are positioned at mm and mm , respectively .such spacing ensures that the metasurfaces are located from one another at a distance not less than at their operational frequencies to prevent strong near - field interactions .the overall thickness of the three - layer structure is mm , which does not exceed one wavelength at 6 ghz .the performance of the metasurface cascade at the three operating frequencies is shown in figs .[ fig : fig9b ] , [ fig : fig9c ] and [ fig : fig9d ] . at 5ghz , incident waves are nearly totally reflected by the first metasurface at the angle from the -axis ( fig .[ fig : fig9b ] ) . at 6 ghz ,the first metasurface becomes `` invisible '' for incident waves , and nearly all the power is absorbed by the second metasurface ( fig .[ fig : fig9c ] ) .finally , incident waves at 3.9 ghz pass through the first two metasurfaces and are focused by the third metasurface ( fig .[ fig : fig9d ] ) nearly at the designed focal distance .this sub - wavelength three - layer composite is equivalent to the structure shown in fig .[ fig : fig1c ] . the data for reflection, transmission and absorption properties of the three - layer structure is summarized in table [ tabl : table1 ] . 
since the structure has a finite size , reflectance and transmittance were introduced as ratios of the reflected and transmitted powers to the power incident through the cross section of the metasurfaces .

table [ tabl : table1 ] . reflection , transmission and absorption of the three - layer structure :

frequency , ghz    transmittance , %    reflectance , %    absorbance , %
2.0                99.6                 0.2                0.2
3.0                96.8                 3.0                0.2
3.9                59.2                 26.8               14.0
5.0                8.0                  85.8               8.3
6.0                7.9                  5.1                86.5
7.0                84.3                 12.5               3.2
8.0                84.6                 10.7               4.7

as one can see from table [ tabl : table1 ] , while the reflection and absorption levels at 5 and 6 ghz , respectively , are high ( more than 85% ) , the transmission level at 3.9 ghz is moderate ( about 60% ) . this can be explained by two factors . first , there are some diffraction effects at the edges of the three finite - size metasurfaces . second , the spectral separation between the metasurfaces operating at 3.9 and 5 ghz is not large enough , so the metamirror still reflects a small part of the incident energy at 3.9 ghz . it is also seen from table [ tabl : table1 ] that , far from the operating frequencies , transmission of incident waves through the metasurface cascade exceeds 84% .

in this paper , we have proposed a new type of transmitarrays that allow full wave control ( with an efficiency of more than 80% ) and are transparent beyond the operating frequency range . due to the frequency - selective response of the designed transmitarrays , they can be easily integrated in existing and new complexes of antennas and filters . in this paper , we have also proposed an approach for designing multifunctional cascades of metasurfaces . depending on the frequency of the incident radiation , such a cascade possesses different responses at different frequencies that can be carefully adjusted . to test the approach , we have designed a cascade of three metasurfaces that performs three different functions for wave control at different frequencies . despite the multifunctional response , the thickness of the designed structure is smaller than the operational wavelength . unique functionalities of the cascaded metasurfaces could be useful in a variety of new applications . importantly , going to the limiting case of cascading metasurfaces , one can design a metasheet that incorporates different kinds of inclusions performing a multifunctional response . moreover , our approach of cascaded metasurfaces can also be extended to volumetric metamaterials . the main challenge for implementing the designed structures is fabrication ; however , as we hope to show in our future work , the three - dimensional shape of the helical inclusions can be modified into an appropriate fabrication - friendly printed topology .
[ [ app:1 ] ] location of the block along the -axis within the period & handedness of the helices in the block & loop radius of the helices , mm & pitch of the helices , mm & phase of waves transmitted through the block + & & & & + & left & 2.34 & 1.63 & + & right & 2.37 & 1.66 & + & left & 2.39 & 1.67 & + & right & 2.41 & 1.68 & + & left & 2.44 & 1.71 & + & right & 2.70 & 1.88 & + distance from the center of the block to the center of the lens , mm & handedness of the helices in the block & loop radius of the helices , mm & pitch of the helices , mm & phase of waves transmitted through the block , + & & & & + 0.0 & left & 2.50 & 1.54 & 50 + 14.1 & right & 2.48 & 1.53 & 60 + 28.3 & left & 2.45 & 1.51 & 87 + 42.4 & right & 2.43 & 1.50 & 127 + 56.6 & left & 2.41 & 1.48 & 176 + 70.7 & right & 2.38 & 1.46 & 230 + 84.8 & left & 2.26 & 1.39 & 288 + 99.0 & right & 2.59 & 1.60 & 348 + 113.1 & left & 2.47 & 1.52 & 410 + 127.3 & right & 2.43 & 1.50 & 473 + 141.4 & left & 2.41 & 1.48 & 537 + 155.5 & right & 2.36 & 1.46 & 601 + 169.7 & left & 2.13 & 1.31 & 666 + 183.8 & right & 2.52 & 1.55 & 732 + 198.0 & left & 2.45 & 1.51 & 798 +this work was supported by academy of finland ( project 287894 ) .the authors would like to thank dr .y. radi for his help and contribution during the experimental phase .v. s. asadchy , m. albooyeh , and s. a. tretyakov , `` optical metamirror : all - dielectric frequency - selective mirror with fully controllable reflection phase '' , _ j. optical soc . amer .b _ , vol . 33 , no . 2 , pp . a16-a20 feb . 2016 .v. s. asadchy , i. a. faniayeu , y. radi , s. a. khakhomov , i. v. semchenko , and s. a. tretyakov , `` broadband reflectionless metasheets : frequency - selective transmission and perfect absorption '' , _ phys .x _ , vol . 5 , no . 031005 , july 2015 .s. v. hum and j. perruisseau - carrier , `` reconfigurable reflectarrays and array lenses for dynamic antenna beam control : a review '' , _ ieee trans. antennas propagat ._ , vol .62 , no . 1 ,pp . 183198 , jan .n. yu , p. genevet , m. a. kats , f. aieta , j. -p . tetienne , f. capasso , and z. gaburro , `` light propagation with phase discontinuities : generalized laws of reflection and refraction '' , _ science _334 , no . 6054 , pp .333337 , oct .2011 .m. selvanayagam and g. v. eleftheriades , `` discontinuous electromagnetic fields using orthogonal electric and magnetic currents for wavefront manipulation '' , _ optics express _ , vol .21 , no . 12 , pp .1440914429 , june 2013 .b. o. zhu and y. feng , `` passive metasurface for reflectionless and arbitary control of electromagnetic wave transmission '' , _ ieee trans .antennas propagat ._ , vol .63 , no .12 , pp . 55005511 , dec .2015 .a. epstein and g .v .eleftheriades , `` passive lossless huygens metasurfaces for conversion of arbitrary source field to directive radiation '' , _ ieee trans .antennas propagat ._ , vol .62 , no . 11 , pp .56805695 , nov .2014 .n. k. grady , j. e. heyes , d. r. chowdhury , y. zeng , m. t. reiten , a. k. azad , a. j. taylor , d. a. r. dalvit , and h. -t .chen , `` terahertz metamaterials for linear polarization conversion and anomalous refraction '' , _ science _340 , no .6138 , pp . 13041307 , june 2013 .a. balmakou , m. podalov , s. khakhomov , d. stavenga , and i. semchenko , `` ground - plane - less bidirectional terahertz absorber based on omega resonators '' , _ optics letters _ ,40 , no . 9 , pp . 20842087 may 2015 .d. r. smith , w. j. padilla , d. c. vier , s. c. nemat - nasser , and s. 
schultz , `` composite medium with simultaneously negative permeability and permittivity '' , _ phys . rev . lett ._ , vol .84 , no . 18 , p. 4184, may 2000 .m. decker , i. staude , m. falkner , j. dominguez , d. n. neshev , i. brener , t. pertsch , and y. s. kivshar , `` high - efficiency dielectric huygens surfaces '' , _ adv ._ , vol . 3 , no . 6 , pp .813820 , june 2015 .j. k. gansel , m. thiel , m. s. rill , m. decker , k. bade , v. saile , g. freymann , s. linden , and m. wegener , `` gold helix photonic metamaterial as broadband circular polarizer '' , _ science _ , vol . 325 , no .5947 , sept .a. radke , t. gissibl , t. klotzbcher , p. v. braun , and h. giessen , `` three - dimensional bichiral plasmonic crystals fabricated by direct laser writing and electroless silver plating '' , _ adv ._ , vol . 23 , no .30183021 july 2011 .v. s. asadchy , i. a. faniayeu , y. radi , i. v. semchenko , and s. a. khakhomov , `` optimal arrangement of smooth helices in uniaxial 2d - arrays '' , in _7th international congress on advanced electromagnetic materials in microwaves and optics metamaterials 2013 _ , bordeaux , france , pp . 244246 , 2013 .
control of electromagnetic waves using engineered materials is very important in a wide range of applications , therefore there is always a continuous need for new and more efficient solutions . known natural and artificial materials and surfaces provide a particular functionality in the frequency range they operate but cast a `` shadow '' and produce reflections at other frequencies . here , we introduce a concept of multifunctional engineered materials that possess different predetermined functionalities at different frequencies . such response can be accomplished by cascading metasurfaces ( thin composite layers ) that are designed to perform a single operation at the desired frequency and are transparent elsewhere . previously , out - of - band transparent metasurfaces for control over reflection and absorption were proposed . in this paper , to complete the full set of functionalities for wave control , we synthesize transmitarrays that tailor transmission in a desired way , being `` invisible '' beyond the operational band . the designed transmitarrays for wavefront shaping and anomalous refraction are tested numerically and experimentally . to demonstrate our concept of multifunctional engineered materials , we have designed a cascade of three metasurfaces that performs three different functions for waves at different frequencies . remarkably , applied to volumetric metamaterials , our concept can enable a single composite possessing desired multifunctional response . multifunctional , transmitarray , metasurface , cascade , reflectionless .
recently , chaotic boltzmann machines were proposed as a deterministic implementation of boltzmann machines .the apparently stochastic behavior of chaotic boltzmann machines is achieved , without any use of random numbers , by chaotic dynamics that emerges from pseudo - billiard dynamics .it was shown numerically that the chaotic billiard dynamics of chaotic boltzmann machines can be used for generating sample sequences from the probabilistic distribution of boltzmann machines , and it was successfully applied to other spin models such as the ising model and the potts model . despite these numerical evidences ,there have been no theoretical proof that chaotic boltzmann machines yield samples from the probabilistic distribution of the corresponding boltzmann machines . in this brief note , as a first step of theoretical approach , we investigate the simplest system .namely , we show that chaotic boltzmann machines truly yield samples for the corresponding boltzmann machines if they are composed of only two elements .although our approach can not be applied to larger chaotic boltzmann machines with more than two elements , we expect that it gives some insights into the dynamics of larger chaotic boltzmann machines . since the proof is not entirely trivial , it is considered worth making available on arxiv .this note is an english translation ( with slight modifications ) of the article originally written in japanese .let and be random variables that take values on .we assume that the probabilistic model ] and ] of the elements . therefore, the state space of the chaotic boltzmann machine is ^ 2 ] , and changes its direction when it hits a side of the square .therefore , this system can be understood as a pseudo - billiard in the billiard table ^ 2 ] .namely , our goal in this note is to show the following proposition . for any initial values ^ 2 ] .to show this proposition , we rewrite eqs . ( [ eq : dx1 ] ) and ( [ eq : dx2 ] ) of the chaotic boltzmann machine . specifically , instead of the state of each element , we introduce a new state variable using the map then the state space ^ 2 ] ( see fig .[ fig : sspace ] ) . on this state space ,the dynamics of the chaotic boltzmann machine ( eqs .( [ eq : dx1])([eq : ds0 ] ) ) is rewritten as follows : ^{-1 } = \frac{r_1(s_1)r_2(s_2)}{p[s_1,s_2 ] } , \label{eq : dy1 } \\\frac{dy_2}{dt } & = r_2(s_2 ) c_2(s_1 ) p[s_2|s_1]^{-1 } = \frac{r_1(s_1)r_2(s_2)}{p[s_1,s_2 ] } , \label{eq : dy2}\end{aligned}\ ] ] and where the state can be determined from as when , and when . the state space for . ]the differential equations ( [ eq : dy1 ] ) and ( [ eq : dy2 ] ) always satisfy .hence , the state moves in the direction of in the rectangular state space . herewe introduce which moves along the orbit of at a constant velocity .then the orbit of is equidistributed in the state space in the following sense .let us consider the poincar section on . while the value of changes by , the value of changes by the same amount. therefore , the poincar map on is a rigid rotation with rotation number . if the rotation number is irrational , the poincar map is ergodic with respect to the uniform distribution . 
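the ergodicity statement above relies on the classical equidistribution of irrational rotations of the circle ; a minimal numerical illustration ( generic , not a simulation of the chaotic boltzmann machine itself ) checks that the fraction of iterates falling into a fixed arc approaches the arc length .

import numpy as np

rho = np.sqrt(2.0) - 1.0          # an irrational rotation number
theta = 0.1                       # arbitrary starting point on the circle [0, 1)
a, b = 0.2, 0.55                  # test arc of length 0.35

n_steps = 200000
hits = 0
for _ in range(n_steps):
    theta = (theta + rho) % 1.0
    hits += (a <= theta < b)

print("fraction of iterates in the arc :", hits / n_steps)
print("length of the arc               :", b - a)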
then the orbit is also equidistributed in the state space , and we have since moves along the same orbit as with the velocity described in eqs .( [ eq : dy1 ] ) and ( [ eq : dy2 ] ) , we have ,\ ] ] which completes the proof .in this brief note , we have shown that chaotic boltzmann machines with two elements have quasi - periodic dynamics and yield samples from the probabilistic distribution of the corresponding boltzmann machines , provided that the rotation number is irrational .the set of parameter values that make rotation numbers rational has lebesgue measure zero .this is analogous to probabilistic monte carlo sampling , which does not work with probability ( measure ) zero .however , when we design chaotic boltzmann machines artificially , it may be possible that the rotation number easily becomes rational .in such a case , we have to adjust by multiplying some constants to make the rotation number irrational .the performance as an sampling algorithm depends on the characteristics of the rotation number . at least , a chaotic boltzmann machine with a good rotation number that yields a low - discrepancy sequence is expected to exhibit better performance than the corresponding boltzmann machine .our approach in this note can not be applied to larger chaotic boltzmann machines with more than two elements , which exhibit chaotic behavior . however, some aspects are expected to be shared with larger systems ; for example , our preliminary numerical study shows that irrationality is important , for chaotic boltzmann machines composed of not many but more than two elements , to generate faithful sample sequences .therefore , we expect that our approach gives some insights into the dynamics of larger chaotic boltzmann machines .
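the remark on rotation numbers can be illustrated by comparing the convergence of a time average along a golden - ratio rotation with that of i.i.d. random sampling ; this is only a generic quasi - monte - carlo illustration , not a statement about larger chaotic boltzmann machines .

import numpy as np

rng = np.random.default_rng(0)
golden = (np.sqrt(5.0) - 1.0) / 2.0       # rotation number with particularly good irrationality properties

for n in (10 ** 2, 10 ** 3, 10 ** 4, 10 ** 5):
    orbit = (np.arange(1, n + 1) * golden) % 1.0          # rigid rotation orbit on the circle
    err_rot = abs(np.mean(orbit < 0.3) - 0.3)             # error of the time average of an indicator
    err_rnd = abs(np.mean(rng.random(n) < 0.3) - 0.3)     # same estimate from i.i.d. random samples
    print(f"n = {n:6d} : rotation error = {err_rot:.1e} , random error = {err_rnd:.1e}")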
in this brief note , we show that chaotic boltzmann machines truly yield samples from the probabilistic distribution of the corresponding boltzmann machines if they are composed of only two elements . this note is an english translation ( with slight modifications ) of the article originally written in japanese [ h. suzuki , seisan kenkyu 66 ( 2014 ) , 315 - 316 ] .
clustering methods are very important techniques for exploratory data analysis with wide applications ranging from data mining , dimension reduction , segmentation and so on .their aim is to partition data points into clusters so that data in the same cluster are similar to each other while data in different clusters are dissimilar .approaches to achieve this aim include partitional methods such as -means and -medoids , hierarchical methods like agglomerative clustering and divisive clustering , methods based on density estimation such as dbscan , and recent methods based on finding density peaks such as cfsfdp and ldps .image clustering is a special case of clustering analysis that seeks to find compact , object - level models from many unlabeled images .its applications include automatic visual concept discovery , content - based image retrieval and image annotation . however , image clustering is a hard task mainly owning to the following two reasons : 1 ) images often are of high dimensionality , which will significantly affect the performance of clustering methods such as -means , and 2 ) objects in images usually have two - dimensional or three - dimensional local structures which should not be ignored when exploring the local structure information of the images .to address these issues , many representation learning methods have been proposed for image feature extractions as a preprocessing step .traditionally , various hand - crafted features such as sift , hog , nmf , and ( geometric ) cw - ssim similarity have been used to encode the visual information . recently, many approaches have been proposed to combine clustering methods with deep neural networks ( dnn ) , which have shown a remarkable performance improvement over hand - crafted features .roughly speaking , these methods can be categorized into two groups : 1 ) sequential methods that apply clustering on the learned dnn representations , and 2 ) unified approaches that jointly optimize the deep representation learning and clustering objectives . in the first group , a kind of deep ( convolutional ) neural networks , such as deep belief network ( dbn ) and stacked auto - encoders , is first trained in an unsupervised manner to approximate the non - linear feature embedding from the raw image space to the embedded feature space ( usually being low - dimensional ) .and then , either -means or spectral clustering or agglomerative clustering can be applied to partition the feature space . however , since the feature learning and clustering are separated from each other , the learned dnn features may not be reliable for clustering .there are a few recent methods in the second group which take the separation issues into consideration . in ,the authors proposed deep embedded clustering that simultaneously learns feature representations with stacked auto - encoders and cluster assignments with soft -means by minimizing a joint loss function . in ,joint unsupervised learning was proposed to learn deep convolutional representations and agglomerative clustering jointly using a recurrent framework . in ,the authors proposed an infinite ensemble clustering framework that integrates deep representation learning and ensemble clustering . 
the key insight behind these approaches is that good representations are beneficial for clustering and conversely clustering results can provide supervisory signals for representation learning .thus , two factors , designing a proper representation learning model and designing a suitable unified learning objective will greatly affect the performance of these methods . in this paper, we follow recent advances to propose a unified clustering method named discriminatively boosted clustering ( dbc ) for image analysis based on fully convolutional auto - encoders ( fcae ) .[ fig : dbc ] for a glance of the overall framework .we first introduce a fully convolutional encoder - decoder network for fast and coarse image feature extraction .we then discard the decoder part and add a soft -means model on top of the encoder to make a unified clustering model .the model is jointly trained with gradually boosted discrimination where high score assignments are highlighted and low score ones are de - emphasized .the our main contributions are summarized as follows : * we propose a fully convolutional auto - encoder ( fcae ) for image feature learning .the fcae is composed of convolution - type layers ( convolution and de - convolution layers ) and pool - type layers ( pooling and un - pooling layers ) . by adding batch normalization ( bn ) layers to each of the convolution - type layers, we can train the fcae in an end - to - end way .this avoids the tedious and time - consuming layer - wise pre - training stage adopted in the traditional stacked ( convolutional ) auto - encoders . to the best of our knowledge ,this is the first attempt to learn a deep auto - encoder in an end - to - end manner .* we propose a discriminatively boosted clustering ( dbc ) framework based on the learned fcae and an additional soft -means model .we train the dbc model in a self - paced learning procedure , where deep representations of raw images and cluster assignments are jointly learned .this overcomes the separation issue of the traditional clustering methods that use features directly learned from auto - encoders .* we show that the fcae can learn better features for clustering than raw images on several image datasets include mnist , usps , coil-20 and coil-100 . besides, with discriminatively boosted learning , the fcae based dbc can outperform several state - of - the - art analogous methods in terms of -means and deep auto - encoder based clustering .the remaining part of this paper is organized as follows . some related work including stacked ( convolutional ) auto - encoders , deconvolutional neural networks , and joint feature learning and clustering are briefly reviewed in section [ sec : related - work ] .detailed descriptions of the proposed fcae and dbc are presented in section [ sec : fcae - dbc ] .experimental results on several real datasets are given in section [ sec : experiments ] to validate the proposed methods . 
conclusions and future worksare discussed in section [ sec : conclusion ] .stacked auto - encoders have been studied in the past years for unsupervised deep feature extraction and nonlinear dimension reduction .their extensions for dealing with images are convolutional stacked auto - encoders .most of these methods contain a two - stage training procedure : one is layer - wise pre - training and the other is overall fine - tuning .one of the significant drawbacks of this learning procedure is that the layer - wise pre - training is time - consuming and tedious , especially when the base layer is a restricted boltzmann machine ( rbm ) rather than a traditional auto - encoder or when the overall network is very deep .recently , there is an attempt to discard the layer - wise pre - training procedure and train a deep auto - encoder type network in an end - to - end way . in , a deep deconvolution networkis learned for image segmentation .the input of the architecture is an image and the output is a segmentation mask .the network achieves the state - of - the - art performance compared with analogous methods thanks to three factors : 1 ) introducing a deconvolution layer and a unpooling layer to recover the original image size of the segmentation mask , 2 ) applying the batch normalization to each convolution layer and each deconvolution layer to reduce the internal covariate shifts , which not only makes an end - to - end training procedure possible but also speeds up the process , and 3 ) adopting a pre - trained encoder on large - scale datasets such as vgg-16 model .the success of the architecture motivates us that it is possible to design an end - to - end training procedure for fully convolutional auto - encoders .clustering has also been studied in the past years based on independent features extracted from auto - encoders ( see , e.g. ) .recently , there are attempts to combine the auto - encoders and clustering in a unified framework . in ,the authors proposed deep embedded clustering ( dec ) that learns deep representations and cluster assignments jointly .dec uses a deep stacked auto - encoder to initialize the feature extraction model and a kullback - leibler divergence loss to fine - tune the unified model . in ,the authors proposed deep clustering network ( dcn ) , a joint dimensional reduction and -means clustering framework .the dimensional reduction model is based on deep neural networks .although these methods have achieved some success , they are not suitable for dealing with high - dimensional images due to the use of stacked auto - encoders rather than convolutional ones .this motivates us to design a unified clustering framework based on convolutional auto - encoders .in this section , we propose a unified image clustering framework with fully convolutional auto - encoders and a soft -means clustering model ( see fig . [fig : dbc ] ) .the framework contains two parts : part i is a fully convolutional auto - encoder ( fcae ) for fast and coarse image feature extraction , and part ii is a discriminatively boosted clustering ( dbc ) method which is composed of a fully convolutional encoder and a soft -means categorizer . 
the dbc takes an image as input and exports soft assignments as output .it can be jointly trained with a discriminatively boosted distribution assumption , which makes the learned deep representations more suitable for the top categorizer .our idea is very similar to self - paces learning , where easiest instances are first focused and more complex objects are expanded progressively . in the following subsections , we will explain the detailed implementation of the idea .traditional deep convolutional auto - encoders adopt a greedy layer - wise training procedure for feature transformations .this could be tedious and time - consuming when dealing with very deep neural networks . to address this issue, we propose a fully convolutional auto - encoder architecture which can be trained in an end - to - end manner .part i of fig .[ fig : dbc ] shows an example of fcae on the mnist dataset .it has the following features : fully convolutional : : as pointed out in , the max - pooling layers are very crucial for learning biologically plausible features in the convolutional architectures .thus , we adopt convolution layers along with max - pooling layers to make a fully convolutional encoder ( fce ) .since the down - sampling operations in the fce reduce the size of the output feature maps , we use an unpooling layer introduced in to recover the feature maps . as a result , the unpooling layers along with deconvolution layers ( see ) are adopted to make a fully convolutional decoder ( fcd ) .symmetric : : the overall architecture is symmetric around the feature layer . in practice , it is suggested to design layers of an odd number .otherwise , it will be ambiguous to define the feature layer . besides , fully connected layers ( dense layers ) should be avoided in the architecture since they destroy the local structure of the feature layer .normalized : : the depth of the whole network grows in -magnitude as the input image size increases .this could make the network very deep if the original image has a very large width or height .to overcome this problem , we adopt the batch normalization ( bn ) strategy for reducing the internal covariate shift and speeding up the training .the bn operation is performed after each convolutional layer and each deconvolutional layer except for the last output layer . as pointed out in , bn is critical to optimize the fully convolutional neural networks .fcae utilizes the two - dimensional local structure of the input images and reduces the redundancy in parameters compared with stacked auto - encoders ( saes ) . besides , fcae differs from conventional saes as its weights are shared among all locations within each feature map and thus preserves the spatial locality .once fcae has been trained , we can extract features with the encoder part to serve as the input of a categorizer .this strategy is used in many clustering methods based on auto - encoders , such as graphencoder , deep embedding networks , and auto - encoder based clustering .these approaches treat the auto - encoder as a preprocessing step which is separately designed from the latter clustering step . however , the representations learned in this way could be amphibolous for clustering , and the clusters may be unclear ( see the initial stage in fig .[ fig : procedure ] ) ) . 
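a minimal pytorch sketch of a symmetric fully convolutional auto - encoder of the kind described above is given below ; it assumes 28 x 28 single - channel inputs ( e.g. mnist ) , and the number of layers and feature maps is illustrative rather than the exact architecture of fig . [ fig : dbc ] .

import torch
import torch.nn as nn

class FCAE(nn.Module):
    # symmetric fully convolutional auto-encoder: conv/pool encoder, unpool/deconv decoder,
    # batch normalisation after every (de)convolution except the output layer
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
        self.pool1 = nn.MaxPool2d(2, return_indices=True)
        self.conv2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU())
        self.pool2 = nn.MaxPool2d(2, return_indices=True)
        self.unpool2 = nn.MaxUnpool2d(2)
        self.deconv2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
        self.unpool1 = nn.MaxUnpool2d(2)
        self.deconv1 = nn.ConvTranspose2d(16, 1, 3, padding=1)   # no batch norm on the last layer

    def encode(self, x):
        x, idx1 = self.pool1(self.conv1(x))
        x, idx2 = self.pool2(self.conv2(x))
        return x, (idx1, idx2)           # feature layer: 32 maps of 7 x 7 for 28 x 28 inputs

    def forward(self, x):
        z, (idx1, idx2) = self.encode(x)
        y = self.deconv2(self.unpool2(z, idx2))
        y = self.deconv1(self.unpool1(y, idx1))
        return y, z

model = FCAE()
x = torch.randn(8, 1, 28, 28)            # a dummy mini-batch of 28 x 28 grey-scale images
recon, feat = model(x)
loss = nn.functional.mse_loss(recon, x)  # end-to-end euclidean reconstruction loss, no layer-wise pre-training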
to address this issue, we propose a self - paced approach to make feature learning and clustering in a unified framework ( see part ii in fig .[ fig : dbc ] ) .we throw away the decoder of the face and add a soft -means model on top of the feature layer . to train the unified model ,we trust easier samples first and then gradually utilize new samples with the increasing complexity . here , an _ easier _ sample ( see the regions labelled 2 , 3 and 4 in fig . [fig : procedure ] ) is much certain to belong to a specific cluster , and a _ harder _ sample ( see the region 1 in fig .[ fig : procedure ] ) is very likely to be categorized to multiple clusters .[ fig : procedure ] describes the difference between these samples at a different learning stage of dbc .there are three challenging questions in the learning problem of dbc which will be answered in the following subsections : 1 . how to choose a proper criterion to determine the easiness or hardness of a sample ?2 . how to transform harder samples into easier ones ? 3 . how to learn from easier samples ?we follow dec to adopt the -distribution - based soft assignment to measure the easiness of a sample .the -distribution is investigated in to deal with the crowding problem of low - dimensional data distributions . under the -distribution kernel , the soft score ( or similarity ) between the feature ( ) and the cluster center ( ) is here , is the degree of freedom of the -distribution and set to be in practice .the most important reason for choosing the -distribution kernel is that it has a longer tail than the famous heat kernel ( or the gaussian - distribution kernel ) .thus , we do not need to pay much attention to the parameter estimation ( see ) , which is a hard task in unsupervised learning .we transform the harder examples to the easier ones by boosting the higher score assignments and , meanwhile , bring down those with lower scores .this can be achieved by constructing an underlying target distribution from as follows : suppose we can ideally learn from the soft scores ( denoted as ) to the assumptive distribution ( denoted as ) each time . then we can generate a learning chain as follows : the following two properties can be observed from the chain : * property 1 * if for any and , then for all and all time step . * proof * under the condition , and by ( [ eq : r ] ) we can deduce that . by the chainthis is equivalent to the fact that .thus , the conclusion follows recursively for all .* property 2 * if there exists an such that , then * proof * by ( [ eq : r ] ) we have ^{\alpha } = \big[\frac{s_{ij}^{(t-2)}}{s_{il}^{(t-2)}}\big]^{\alpha^2 } = \cdots=\big[\frac{s_{ij}^{(0)}}{s_{il}^{(0)}}\big]^{\alpha^t}.\ ] ] by the assumption , it is seen that for any . on the other hand , since , we have .thus , ^{\alpha^t } = 0,\;\;\forall j\neq l.\ ] ] since is finite , we have .finally , with the constrains , we obtain * property 1 * tells us that the _ hardest _ sample ( which has the equal probability to be assigned to different clusters ) would always be the hardest one .however , in practical applications , there can hardly exist such examples . *property 2 * shows that the initial non - discriminative samples could be boosted gradually to be definitely discriminative . 
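the soft scores and the boosted target distribution can be sketched as follows ; the boosting exponent alpha and the per - cluster normalisation are written in the form suggested by the properties above and by the ablation of section [ sec : experiments ] , and the particular values are illustrative , not the exact expressions of the paper .

import torch

def soft_assignment(z, mu, nu=1.0):
    # student-t kernel scores between embedded samples z (n x d) and cluster centres mu (k x d)
    d2 = torch.cdist(z, mu) ** 2
    s = (1.0 + d2 / nu) ** (-(nu + 1.0) / 2.0)
    return s / s.sum(dim=1, keepdim=True)          # each row sums to one

def boosted_target(s, alpha=2.0):
    # raise the scores to the power alpha (boosting), re-balance the clusters, renormalise per sample
    w = s ** alpha
    w = w / w.sum(dim=0, keepdim=True)             # divide by the per-cluster sum of boosted scores
    return w / w.sum(dim=1, keepdim=True)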
as a result, we get the desired features for -means clustering .note that the boosting factor controls the speed of the learning process .a larger can make the learning process more quickly than smaller ones .however , it may boost some falsely categorized samples too quickly at initial stages and thus makes their features irrecoverable at later stages .besides , it can be helpful to balance the data distribution at different learning stages . in ,the authors proposed to normalize the boosted assignments to prevent large clusters from distorting the hidden feature space .this problem can be overcome by dividing a normalization factor for each of the . in the last subsection, it was assumed that we could learn from to the boosted target distribution .this aim can be achieved with a joint kullback - leibler ( kl ) divergence loss , that is , fig .[ fig : kl ] gives an example of the joint loss when , where is the loss generated by the sample with respect to the cluster ( or ) .regions marked in fig .[ fig : kl ] roughly correspond to the regions marked in fig .[ fig : procedure ] .clusters , so is a random guess probability .] intuitively , the loss has the following main features : * for an ambiguous ( or hard ) sample ( i.e. , ) , its loss according to * property 1*. therefore , it will not be seriously treated in the learning process .( region 1 ) * for a good categorized sample ( i.e. , there exists an such that ) , its loss will be much greater than zero , and thus it will be treated more seriously .( regions 2 and 3 ) * for a definitely well categorized sample ( i.e. , there exists an such that ) , its loss will be near zero .this means that its features do not need to be changed much more .( region 4 ) by ( [ eq : s])-([eq : kl - loss ] ) , the gradients of the kl divergence loss w.r.t . and can be deduced as follows : and the derivation of ( [ eq : gradient - z ] ) and ( [ eq : gradient - mu ] ) can be found in the appendix . in this section ,we summarize the overall training procedure of the proposed method in algorithm [ alg : dbc - i ] and algorithm [ alg : dbc - ii ] .they implement the framework showed in fig .[ fig : dbc ] . here is the maximum learning epochs , is the maximum updating iterations in each epoch and is the mini - batch size .the encoder part of fcae is : , which is parameterized by and the decoder part of fcae is : , which is parameterized by . , , , , , and //*stage i : train a fcae and clustering with its features * train a deep fully convolutional auto - encoder with the euclidian loss by using the traditional error back - propagation algorithm . extract features : clustering with the features : -means centers //*stage ii : jointly learn the fce and cluster centers * construct a unified clustering model with encoder parameters and cluster centers initialization : , forward propagate ( [ eq : dbcm ] ) and update the _ soft _ assignments update the target distribution forward propagate ( [ eq : dbcm ] ) with a mini - batch of samples .backward propagate ( [ eq : dbcm ] ) from ( [ eq : gradient - z ] ) and ( [ eq : gradient - mu ] ) to get and .update and with the gradients .hard _ assignments remain unchanged .in this section , we present experimental results on several real datasets to evaluate the proposed methods by comparing with several state - of - the - art methods . 
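for concreteness , one joint training epoch of the kind summarised in algorithms [ alg : dbc - i ] and [ alg : dbc - ii ] can be sketched with automatic differentiation , reusing the fcae `model` and the helper functions from the sketches above ; the optimiser , learning rate , number of clusters and boosting factor are illustrative choices , not the settings used in the experiments below .

import torch
from torch.utils.data import DataLoader, TensorDataset

mu = torch.nn.Parameter(torch.randn(10, 32 * 7 * 7))          # k = 10 cluster centres in the feature space
optimizer = torch.optim.SGD(list(model.parameters()) + [mu], lr=1e-3, momentum=0.9)

def dbc_epoch(loader, alpha=2.0):
    # step 1: freeze the boosted target distribution for the whole epoch
    with torch.no_grad():
        feats = torch.cat([model.encode(x)[0].flatten(1) for (x,) in loader])
        target = boosted_target(soft_assignment(feats, mu), alpha)
    # step 2: minimise kl(target || soft assignment) over mini-batches; the loader must not shuffle,
    # so that batches stay aligned with the pre-computed targets
    offset = 0
    for (x,) in loader:
        z = model.encode(x)[0].flatten(1)
        s = soft_assignment(z, mu)
        t = target[offset:offset + len(x)]
        offset += len(x)
        loss = torch.sum(t * (torch.log(t + 1e-12) - torch.log(s + 1e-12))) / len(x)
        optimizer.zero_grad()
        loss.backward()                 # autograd supplies the gradients w.r.t. the features and the centres
        optimizer.step()

data = TensorDataset(torch.randn(256, 1, 28, 28))
dbc_epoch(DataLoader(data, batch_size=32, shuffle=False))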
to this end, we first introduce several evaluation benchmarks and then present visualization results of the inner features , the learned fcae weights , the frequency hist of soft assignments during the learning process and the features embedded in a low - dimensional space .we will also give some ablation studies with respect to the boosting factor , the normalization factor and the fcae initializations .* datasets * we evaluate the proposed fcae and dbc methods on two hand - written digit image datasets ( mnist and usps ) and two multi - view object image datasets ( coil-20 and coil-100 ) .the size of the datasets , the number of categories , the image sizes and the number of channels are summarized in table [ tab : dataset ] ..datasets used in our experiments . [ cols="^,^,^,^,^,^",options="header " , ] one of the advantages of fully convolutional neural networks is that we can naturally visualize the inner activations ( or features ) and the trained weights ( or filters ) in a two - dimensional space .besides , we can monitor the learning process of dbc by drawing frequency hists of assignment scores . in addition , -sne can be applied to the embedded features to visualize the manifold structures in a low - dimensional space .finally , we show some typical falsely categorized samples generated by our algorithm . in fig .[ fig : vis - activation ] , we visualize the inner activations of fcae on the mnist dataset with three digits : 1 , 5 , and 9 . as shown in the figure , the activations in the feature layer are very sparse . besides , the deconvolutional layer gradually recovers details of the pooled feature maps and finally gives a rough description of the original image .this indicates that fcae can learn clustering - friendly features and keep the key information for image reconstruction .[ fig : vis - weights ] visualizes the learned filters of fcae on the mnist dataset .it is observed in that the stacked convolutional auto - encoders trained on noisy inputs ( 30% binomial noise ) and a max - pooling layer can learn localized biologically plausible filters .however , even without adding noise , the learned deconvolutional filters in our architectures are non - trivial gabor - like filters which are visually the nicest shapes .this is due to the use of max - pooling and unpooling operations .as discussed in , the max - pooling layers are elegant way of enforcing sparse codes which are required to deal with the over - complete representations of convolutional architectures .+ + we use frequency hist of the soft assignment scores to monitor the learning process of dbc .[ fig : vis - hist ] shows the hists of scores on the mnist test dataset ( a subset of the mnist dataset with samples ) .the scores are assigned to the first cluster at different learning epochs . at early epochs ( ) ,most of the scores are near this is a random guess probability because there are clusters .as the learning procedure goes on , some higher score samples are discriminatively boosted and their scores become larger than others . 
as a result , the cluster tends to `` believe '' in these higher score samples and consequently make scores of the others to be smaller ( approximating zero ) .finally , the scores assigned to the cluster become two - side polarized .samples with very high scores ( ) are thought to definitely belong to the first cluster and the others with very small scores ( ) should belong to other clusters .+ we visualize the distribution of the learned features in a two - dimensional space with -sne . fig .[ fig : vis - tsne ] shows the embedded features of the mnist test dataset at different epochs . at the initial epoch ,the features learned with fcae are not very discriminative for clustering . as shown in fig .[ fig : vis - tsne](a ) , the features of digits 3 , 5 , and 8 are closely related .the same thing happened with digits 4 , 7 , and 9 .at the second epoch , the distribution of the learned features becomes much compact locally .besides , the features of digit 7 become far away from those of digits 4 and 9 . similarly, the features of digit 8 get far away from those of digits 3 and 5 .as the learning procedure goes on , the hardest digits ( 4 v.s .9 , 3 v.s .5 ) for categorization are mostly well categorized after enough discriminative boosting epochs .the observation is consistent with the results showed in subsection [ sec : process ] . in fig .[ fig : vis - false ] , we show the top falsely categorized examples whose maximum soft assignment scores are over it can be observed that it is very hard to distinguish between some ground truth digits 4 , 7 and 9 even with human experience .lots of digits 7 are written with transverse lines in their middle space and would be thought to be ambiguous for the clustering algorithm .besides , some ground truth images are themselves confusing , such as those showed with the gray background . in this section , we make some ablation studies on the learning process with respect to different boosting factors ( ) , different normalization methods ( ) and different initialization models generated by fcae . fig .[ fig : ablations](a ) shows the acc and nmi curves , where equals to with a small ( ) , the learning process is very slow and takes very long time to terminate . on the contrary , when the factor is set to be very large ( ) , the learning process is very fast at the initial stages .however , this could result in falsely boosting some scores of the ambiguous samples . as a consequence ,the model learned too much from some false information so the performance is not so satisfactory . with a moderate boosting factor ( ) ,the acc and nmi curves grow reasonably and progressively . in dec ,the authors pointed out that the balance normalization plays an important role in preventing large clusters from distorting the hidden feature space . to address this issue , we compare three normalization strategies : 1 ) the constant normalization for comparison , that is , , 2 ) the normalization by dividing the sum of the original soft assignment score per cluster , that is , , which is adopted in dec , and 3 ) the normalization by dividing the sum of the boosted soft assignment score per cluster , that is , . 
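the three normalisation strategies can be written compactly as follows ( the mode names are ours , introduced only for this sketch ) .

import torch

def normalise_target(s, alpha=2.0, mode="boosted_sum"):
    # the three normalisation variants compared in the ablation
    w = s ** alpha
    if mode == "constant":            # 1) divide by a constant (equivalent to no cluster re-balancing)
        w = w / s.shape[0]
    elif mode == "plain_sum":         # 2) divide by the per-cluster sum of the original scores
        w = w / s.sum(dim=0, keepdim=True)
    elif mode == "boosted_sum":       # 3) divide by the per-cluster sum of the boosted scores
        w = w / w.sum(dim=0, keepdim=True)
    return w / w.sum(dim=1, keepdim=True)   # each sample finally gets a proper distribution over clusters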
fig .[ fig : ablations](b ) presents the value curves of acc and nmi against the epoch with these settings .initially , the normalization does not affect acc and nmi very much .however , the constant normalization can easily get stuck at early stages .the normalization by dividing has certain power of preventing the distortion .our normalization strategy gives the best performance compared with the previous methods .this is because our normalization directly reflects the changes of the boosted scores . to investigate the impact of the fcae initialization on dbc, we compare the performance of dbc with three different initialization models : 1 ) the random initialization , 2 ) the initialization with a half - trained fcae model , and 3 ) the initialization with a sufficiently trained fcae model .the comparison results are shown in fig .[ fig : ablations](c ) .as illustrated in the figure , dbc performs greatly based on all the models even when the initialization model is randomly distributed .however , if the fcae model is not sufficiently trained , the resultant dbc model will be suboptimal .in this paper , we proposed fcae and dbc to deal with image representation learning and image clustering , respectively .benchmarks on several visual datasets show that our methods can achieve superior performance than the analogous methods .besides , the visualization shows that the proposed learning algorithm can implement the idea proposed in section [ sec : dbc ] .some issues to be considered in the future include : 1 ) adding suitable constraints on fcae to deal with natural images , and 2 ) scaling the algorithm to deal with large - scale datasets such as the imagenet dataset .this work was supported in part by nnsf of china under grants 61379093 , 61602483 and 61603389 .we thank shuguang ding , xuanyang xi , lu qi and yanfeng lu for valuable discussions .* a. derivation of ( [ eq : gradient - z ] ) .* we use the chain rule for the deduction .first , we set then it follows that now set so further , let then we have combine the above expressions to get the required result * b. derivation of ( [ eq : gradient - mu ] ) . * ( [ eq : gradient - mu ] ) can be derived similarly by exchanging and in the above derivations of ( [ eq : gradient - z ] ) .99 j. han , j. pei , m. kamber .data mining : concepts and techniques ._ elsevier _ , 2011 .s. hong , j. choi , j. feyereisl , b. han and l. s. davis .joint image clustering and labeling by matrix factorization . _ ieee transactions on pattern analysis and machine intelligence _ , vol .38 , no . 7 , pp . 1411 - 1424 , 2016 .p. vincent , h. larochelle , i. lajoie , y. bengio and p. manzagol .stacked denoising auto - encoders : learning useful representations in a deep network with a local denoising criterion ._ journal of machine learning research _ , vol .3371 - 3408 , dec . 2010 .h. lee , r. grosse , r. ranganath and a. ng .convolutional deep belief networks for scalable unsupervised learning of hierarchical representations .26th annual international conference on machine learning _ , pp .609 - 616 , 2009 .
traditional image clustering methods take a two - step approach , feature learning and clustering , sequentially . however , recent research results demonstrated that combining the separated phases in a unified framework and training them jointly can achieve a better performance . in this paper , we first introduce fully convolutional auto - encoders for image feature learning and then propose a unified clustering framework to learn image representations and cluster centers jointly based on a fully convolutional auto - encoder and soft -means scores . at initial stages of the learning procedure , the representations extracted from the auto - encoder may not be very discriminative for latter clustering . we address this issue by adopting a boosted discriminative distribution , where high score assignments are highlighted and low score ones are de - emphasized . with the gradually boosted discrimination , clustering assignment scores are discriminated and cluster purities are enlarged . experiments on several vision benchmark datasets show that our methods can achieve a state - of - the - art performance . image clustering , fully convolutional auto - encoder , representation learning , discriminatively boosted clustering
the discrete spherical bessel transform ( dsbt ) arises in a number of applications , such as , e.g. , the analysis of the cosmic microwave background , the numerical solution of the differential equations , and the numerical evaluation of multi - center integrals .many different sbt algorithms have been proposed so far . butnone of them possess all of the advantages of their trigonometric progenitor , namely the fast fourier transform ( fft ) .these advantages are the performance fastness , the uniform coordinate grid , and the orthogonality .an example of the problem requiring the simultaneous presence of all the advantages is the solving of the schrdinger - type equation ( se ) by means of the pseudospectral approach .the grid uniformity provides the same accuracy of the wave function description in the whole domain of definition .the grid identity for all orders of a spherical bessel functions ( sbf ) allows to switch to the discrete variable representation ( dvr ) .the dsbt orthogonality is needed to provide the hermiticity of the radial part of the laplacian operator in dvr .the lack of the laplacian operator hermiticity impedes the convergence of iterative methods ( such as conjugate gradient method ) for the solution of matrix equations ( which are obtained by dvr from the stationary se ) . in the case of time - dependent se ,the hermiticity of the laplacian operator is crucial for the conservation of the wave function norm during the time evolution .a pioneering approach based upon the convolution integral requires a number of operations of the order of for its performing , just like the fft does , that means that it is quite fast .however it employs a strongly nonuniform grid ( a node location exponentially depending on its number ) . hence the attempts of its utilization for the se solving ended in problems with the strong near - center localization of a wave function .a method rest on the spherical bessel functions expansion over the trigonometric functions also appears to be quite fast ( requiring as few as operations ) and employs a uniform grid .but it is not orthogonal and has stability difficulties because of the singular factors in the spherical bessel functions expansion over the trigonometric functions .next , a gauss - bessel quadrature based technique suggested in is orthogonal and converges exponentially , but it is not fast ( as the number of operations required scales as ) and needs an -dependent grid .nevertheless its fast convergence and the near - uniform grid motivated to apply it for a time - dependent gross - pitaevsky equation .finally , an approach rest on the sbf integral representation via legendre polynomials , proposed in , appears to be fast , makes use of the uniform grid , but it is not orthogonal .in the present work we are proposing the algorithm for the dsbt that is orthogonal , fast , and it implies the uniform grid . our approach is based upon the sbt factorization into the two subsequent transforms , namely the fft and the discrete orthogonal legendre polynomials derivatives based transform .the paper is organized as follows . in section [ sec : dsbt ] , we develop the orthogonal fast dsbt on a uniform grid .next , in section [ sec : testing ] , the proposed method is tested via the evaluation of the gaussian atomic functions transform and also the dsbt basis functions comparison to the exact sbfs . 
in section [ sec :example ] the dsbt- and dvr - based approach ( dsbt - dvr ) for the time - dependent schrdinger equation ( tdse ) solving is suggested and examined .the approach efficiency is illustrated by treating of the problem of the hydrogen molecular ion ionization by laser pulse .finally , in section [ sec : conclusion ] we briefly discuss the obtained results as well as the prospects of dsbt and dsbt - dvr application .a typical problem involving the spherical bessel transform ( sbt ) is the plane wave expansion of a three - dimensional function .the expansion over the spherical harmonics yields a radius - dependent function if the function has no singularities , then .let us introduce the sbt as here we perform the function substitution ( a magnetic quantum number is not used further , therefore from this point on we omit it from the denotation for the sake of simplification ) , then execute the expansion over the functions where is a spherical bessel function ( sbf ) of the first kind .the functions satisfy the normalization condition .the pre - integral factor in is introduced in order to make the transform unitary .the beginning of our derivation coincides with the one in the work .but unlike its authors we are going to aim at the factorization of the sbt into the two separate transformations , namely the fft and also the additional orthogonal transform which we denote fourie - to - bessel transform ( ftb ) .the sbf may be presented as where is the legendre polynomial of -th order . upon substituting the latter expression into eq . , we obtain here the integral over is different from the fourier transform of the function by the presence of the integrand factor . this factor might be represented as a result of taking a derivative of over .thus we get the expression making use of the legendre polynomials parity condition , one may further reduce the integral over from to to the one in the limits from to as \psi_\ell(r ) dr d\eta \label{c_vs_psi}\end{aligned}\ ] ] next , let us define a new function \psi_\ell(r ) dr \label{tilde_c_def}\end{aligned}\ ] ] the term /(2i^{\ell+1}) ] and , that is for the odd and for the even ones . the fourier transform may be written in the matrix form as it yields as a result the value of the vector of the fourier expansion coefficients related to the function as follows : where the weights since the fourier transform matrix is orthogonal , then under the transform the norm is conserved , that is .the transform performing through the fft algorithm requires the number of operations of the order of .the transform conserves the norm according to the approximation of the integrals in this relation by the trapezoidal rule yields where we introduce the vector composed of the coefficients of the bessel expansion next , let us write the direct and inverse discrete ftb ( dftb ) in the following form in order for to hold true , the matrix has to be orthogonal , that is if we attempt to apply the trapezoidal rule directly to , then we would obtain however this technique of the matrix construction does not provide its orthogonality .the reason is that the equation does not hold upon the approximate integration .the employing of the high - order newton - cotes rules instead of the trapezoidal rule does not make the situation better . in order for to be true ,it is necessary for to hold for all the subgrids with the arbitrary nodes number .the high - order newton - cotes rules do not provide high accuracy for an arbitrary subgrid . 
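this loss of orthogonality is easy to verify numerically . the sketch below assembles a naive trapezoid - weighted sbt matrix on uniform grids ( the precise normalisation of the paper is not reproduced ; the qualitative conclusion does not depend on it ) and measures how far it is from an orthogonal matrix .

import numpy as np
from scipy.special import spherical_jn

ell, n = 1, 64                                   # angular order and grid size (illustrative)
r_max = 20.0
r = np.arange(1, n + 1) * r_max / n              # uniform radial grid
k = np.arange(1, n + 1) * np.pi / r_max          # uniform momentum grid
dr, dk = r[1] - r[0], k[1] - k[0]

# naive discretisation of the kernel sqrt(2/pi) * k * r * j_ell(k r) with uniform quadrature weights
kr = np.outer(k, r)
b = np.sqrt(2.0 / np.pi) * kr * spherical_jn(ell, kr) * np.sqrt(dr * dk)

print("deviation from orthogonality , || b b^t - 1 || =", np.linalg.norm(b @ b.T - np.eye(n)))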
therefore the only way to preserve the transform orthogonality appears to be the modification of the integral kernel under the proceeding to the numerical integration . in the context of the summation on grids ,the properties analogous to those of the legendre polynomials are possessed by the so - called discrete legendre orthogonal polynomials ( dlop ) .dlop satisfy the orthogonality property given by and also the normalizing condition . here where is -th falling factorial of , .dlop might be presented as where are coefficients of the expansion of the shifted legendre polynomial .this means that can be obtained from through the substitution of for . as the grid size increases ,dlop tend to the usual legendre polynomials according to due to the orthogonality condition dlop possess a property similar to the property , as follows making use of this fact one can easily prove ( as shown in the appendix ) that for any discrete polynomial of the order true is the following relation where the weight function coincides with the weights of the trapezoidal integration rule for the grid with a unit step here we define a new discrete polynomial (i , n-1)\end{aligned}\ ] ] which is proportional to the backward difference (i , n-1)=p_\ell(i , n-1)-p_\ell(i-1,n-1 ) \label{backdiff}\end{aligned}\ ] ] and hence has the order .this polynomial tends to the derivative of the usual legendre polynomial as that is faster than the dlop tends to its non - discrete analogue .therefore we shall further refer to as the derivative of the discrete orthogonal legendre polynomial ( ddlop ) .it should be mentioned that this term has different meanings throughout the literature .it follows from the relation that the integral kernel in eq .can be approximated on the grid by means of ddlop according to .\label{approx_kernel}\end{aligned}\ ] ] let us suppose the transform matrix has the elements as follows : , \label{tdef}\end{aligned}\ ] ] where the lower triangular matrix is an approximation of the kernel of the integral via , defined by here the heaviside function is specified in , and the additional factor provides the weight function in the product .defined in such a way exist only for where since there are no dlop of the order at smaller . by making use of eq ., one may show ( see the appendix for details ) that the rows of the matrix are mutually orthogonal , that is [ \delta_{ml } - l_{ml } ] & = & \alpha_n^{-2}\delta_{nm};\quad n\geq n_{0\ell } \label{orthoiml}\end{aligned}\ ] ] the rows of the matrix are equal to those of the matrix multiplied by the normalizing constants ^ 2 \right\}^{-1/2}.\end{aligned}\ ] ] here the normalizing constants at .since the approximation implies the error scaling as ] .the eqs.([tdef],[ldef ] ) define only rows in the transform matrix . in order to make the basis complete we have to supplement it by extra vectors that are orthogonal to all other ones .the ddlop property eq .( which eq . follows from ) leads to the fact that the basis vectors specified via eqs.([tdef],[ldef ] ) are to be orthogonal to any polynomial of the order .that is to say , one might construct the extra basis vectors from the polynomials of the order . to provide these extra vectors to be orthogonal to not only the basic basis vectors , but to each other as well, we shall choose them as the dlops ( not the ddlop ! 
) of the corresponding orders .so we shall define the extra basis vectors as follows , \label{tadd}\end{aligned}\ ] ] where the polynomial order is and the normalizing constant is } ] requires as less as operations . in totalone needs to evaluate sums ( appearing because of the fact that the half of the coefficients in are zero due to the ddlop parity ) , then to perform the summation over in , resulting in the altogether operations number scaling as . at the coefficients are to be computed according to .that means that for every one has to calculate the vectors scalar product that requires extra operations , that is the operations number scaling is the same as for the evaluating for via ( [ rec_sum_b_f],[fast_b_f ] ) . in sum , to accomplish the transform one needs to perform operations .now let us consider the inverse transform .the substitution of into yields for the following : where next , for we obtain where the sum may be evaluated according to the recurrence relation . \label{rec_sum_f_b}\end{aligned}\ ] ] thus , the inverse transform performing requires the number of operations , just as the direct transform does .the fast fourier transform operations number scales as .that is the dsbt in total requires operations . at large the fft strongly over - demands the dftb, hence the overall dsbt algorithm processing time appears to be defined by the fft processing time .the method convergence was examined on three grids with the same space step and the various values of the space region 51.2 , 102.4 , and 204.8 .that is , the grids possessed the same maximal momentum and various momentum steps .+ let us begin with the check of the convergence of our transform for smooth functions , which are commonly used in atomic physics namely gaussian atomic orbital functions the fig.[fig : proj ] shows the absolute value of the difference between the exact sbt result ( obtained by the numerical evaluation of the integral in eq . ) and the dsbt result {n} ] with the accuracy , so the method has a global error of . as the matrices , and are diagonal , the exponential functions of them reduce to the exponential functions of complex numbers .therefore each step of the method performing requires a number of operations . 
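the split-operator propagation just described relies on each factor being diagonal in its own representation, so that exponentiation reduces to pointwise multiplication by complex numbers. a minimal one-dimensional analogue (an fft standing in for the dsbt/dvr pair; a toy of our own, not the paper's scheme) is sketched below.

```python
import numpy as np

def split_operator_step(psi, dt, V, k, m=1.0, hbar=1.0):
    """One second-order split-operator step, 1d analogue of the scheme.

    The potential propagator is diagonal on the coordinate grid and the kinetic
    propagator is diagonal on the momentum grid, so each factor is a pointwise
    multiplication; the transform between the two grids dominates the cost.
    The splitting error is O(dt^2) globally.
    """
    psi = np.exp(-0.5j * V * dt / hbar) * psi          # half potential step
    psi_k = np.fft.fft(psi)
    psi_k *= np.exp(-0.5j * hbar * k**2 * dt / m)      # full kinetic step
    psi = np.fft.ifft(psi_k)
    return np.exp(-0.5j * V * dt / hbar) * psi         # half potential step

# toy usage: harmonic potential, displaced Gaussian wave packet
N, L = 512, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2
psi = np.exp(-0.5 * (x - 1.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))
for _ in range(1000):
    psi = split_operator_step(psi, dt=0.01, V=V, k=k)
print(np.sum(np.abs(psi) ** 2) * (L / N))   # norm stays ~1: the propagation is unitary
```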
upon the employing of the approach that is being presented, the evolution of the phases of free spherical waves is evaluated more precisely , the greater the evaluation region .it emerges to be an important advantage in comparison to another space approximation techniques that are commonly used today ( finite - difference method , finite - element method and so on ) .this is extremely helpful for the problems that require the consideration of the long - duration wavefunction evolution in large space regions in variable external fields .it is worth mentioning that , although we are considering the tdse solving only , the reduction of a problem to the multiplication by the matrix of the form of might be also used in iteration methods for the stationary elliptic equations solving as well .+ as a first benchmark application let us consider the problem of 3d harmonic oscillator in external field , that has the analytical solution .as this problem possesses features somewhat opposite to those which are optimal for the employing of dsbt - dvr ( that is , it needs only the small spatial region size ) , it proves to be the most stringent test for the method .the spherically - symmetric three - dimensional harmonic oscillator potential is known to be the time - dependent external field can be presented by means of various ways which are equivalent in terms of theory , but different in terms of their implementation by a numerical scheme .we have accomplished the calculations for the external field representation both in the coordinate gauge and the velocity gauge the pulse form was supposed to be and we took the external field amplitude to be .computations were carried out for the two external field frequencies , namely corresponding to the oscillator resonant frequency , and the non - resonant . at the resonant frequencythe amplitude of the wavepacket center position oscillations ( against center of coordinate ) grows linearly with time , that is , higher and higher spherical harmonics are excited , whereas in the non - resonant case only small harmonics are excited .initial state function was set equal to the ground state function .we used the parameter to estimate the approximation error . herewave function is the analytical solution for the three - dimensional harmonic oscillator in a time - dependent external field , and is the numerical solution .we set the angular basis parameters and . in order to diminish the error of the split - operator method ( which is of no current interest ) ,the time step has been set very small .the figure [ fig : oscill ] shows the error of the obtained numerical solution as a function of time at for the three grids having the same step , but different 12.8 , 25.6 , and 51.2 .it is apparent that the solution error falls down as , as one should expect basing on the fact that the approximate transform error makes the most significant contribution to the overall scheme error at such scheme parameters .+ the figure [ fig : oscillcorr ] presents the same as the figure [ fig : oscill ] , except that we have employed the corrected from eq .. one can see that the rate of the solution convergence to the exact one depending on is the same as in the case , but the error absolute value is 4 to 5 times less .now let us turn to a problem of more physical use . 
as a benchmark example we shall consider the h molecule in the field of complex - shaped laser pulse consisting of the short ultraviolet ( xuv ) pulse combined with the long infrared radiation ( ir ) pulse .this emerges as a model of rapidly developing pump - probe techniques .the electron is emitted after being subjected to the xuv pumping pulse and then moves under joint action of the long - range coulomb field and slowly changing ir probe pulse field .modeling of this process requires computations for a long atomic time period as well as for the large simulation region size ( in order for the electron not to escape outside its boundaries ) .since a large implies a small momentum step , an dsbt based approach emerges to be perfectly appropriate for this problem solving .first we need to estimate the accuracy that our scheme provides for the singular potential problems which are frequently encountered in atomic physics . to this end , we have computed the eigenenergies of the approximate hamiltonian for the different singular potentials . the ground state energy and wavefunction have been evaluated by means of the imaginary time evolution method ( that is to substitute in eq . ) .the excited states have been evaluated via the imaginary time evolution method with the ortogonalization of the wavefunction to the lower states functions on each time step .we shall begin with the considering of the hydrogen atom whose nucleus potential is known to be [ cols= " < , < , < ,< , < " , ] [ tab : h2plusboundsedr02 ] the table [ tab : hboundser102 ] manifests the calculated energies for h ground and first excited states converge with the angular basis parameter increasing at the fixed and .the `` exact '' energies given here were obtained through the calculation via the method based upon the spheroidal coordinates utilizing .the error arising from the grid step is negligible in this case , therefore the table of convergence over the grid step is not presented here .next let us consider the evolution of the molecular ion h in the field of two overlapping linearly polarized laser pulses here the `` xuv '' pulse was supposed to have the gaussian envelope where is the full width at half maximum .next , the `` ir '' pulse was chosen to have a compact support and the , as follows \cos\omega_{ir}(t - t_{ir}),\ , |t - t_{ir}|<\tau_{ir}/2,\end{aligned}\ ] ] where is the overall pulse duration , is the shift of the arrival time of the ir - pulse center relative to that for the xuv pulse .the external field of this form is employed in the attosecond streaking method .the xuv - pulse triggers the ionization , then the detected electrons spectrum dependence on the time shift enables to determine the ir pulse genuine form , or , in the case of this form being known , to obtain the time delay of the electron emission during the ionization process .the probe pulse parameters was taken to be , , and ( which are common values in modern attosecond streaking experiments ) , and the pump pulse parameters , correspondingly , were ( standing for the molecule ground state energy , evaluated by means of the imaginary time evolution method ) , and .both pulses polarization were chosen to be co - directed with the molecular axis , . 
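the bound states entering these calculations were obtained by the imaginary time evolution method mentioned above (the substitution t -> -i*tau). a minimal one-dimensional sketch of that technique is given below; the harmonic test potential, grid and step sizes are our own toy choices, not the parameters used in the paper.

```python
import numpy as np

def imaginary_time_ground_state(V, x, dtau=0.005, steps=20000, hbar=1.0, m=1.0):
    """Ground state by imaginary-time split-operator propagation (1d illustration).

    Repeated application of exp(-H*dtau) damps all excited components, and
    renormalizing after every step leaves the ground state.  Excited states can
    be obtained the same way by re-orthogonalizing against the already converged
    lower states on each step.
    """
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
    psi = np.exp(-x**2).astype(complex)                  # arbitrary starting guess
    expV = np.exp(-0.5 * V * dtau / hbar)
    expT = np.exp(-0.5 * hbar * k**2 * dtau / m)
    for _ in range(steps):
        psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # renormalize
    kin = np.sum(np.conj(psi) * np.fft.ifft(0.5 * hbar**2 * k**2 / m * np.fft.fft(psi))).real * dx
    pot = np.sum(V * np.abs(psi)**2) * dx
    return psi, kin + pot

x = np.linspace(-10.0, 10.0, 512)
_, E0 = imaginary_time_ground_state(V=0.5 * x**2, x=x)
print(E0)    # ~0.5, the exact harmonic-oscillator ground-state energy
```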
in all the examples referred to below we used the numerical scheme parameters as follows : , time step , evolution beginning time , and evolution termination time .[ fig : h2plusas ] shows the probability density that the electron is at .the calculations were performed for the scheme parameters and . due to the stationary phase approximation , for the time and for ,the relation holds , where is the differential cross section of electrons emission depending on momentum . on the left panel of the fig .[ fig : h2plusas ] , the peak near corresponds to the wavefunction of the ground and other stationary states , whereas the peak centered in the vicinity of emerges due to the one - photon ionization , and and the rest large peaks are caused by the multiphoton processes .this is apparently confirmed by the right panel of the fig .[ fig : h2plusas ] , where the enhanced probability ring has one node depending on , the circle of larger radius has two nodes corresponding to the dipole and quadrupole distributions arising from the absorption of one or two photons correspondingly .the left panel of the figure [ fig : h2plusas ] also demonstrates the probability density dependence on the ir pulse phase at the moment of the xuv pulse arrival .the theory predicts the probe pulse action causing the electron momentum shift equal to the magnitude of the ir pulse at the moment of the electron emission from the molecule ( which roughly coincides with the moment of the xuv pulse arrival ) .this is exactly what is observed on the right panel of the figure [ fig : h2plusas ] . besides, we have examined the probability density convergence rate depending on the step and on the angular basis size , .the left panel of the [ fig : h2plusasconv ] presents the pairwise differences for the probability densities evaluated on the three grids having the steps 0.4 , 0.2 and 0.1 at the fixed . for the sake of comparison for 0.1 and plotted on the same figure .it is apparent that even on the coarsest grid with 0.4 the error is of the order of 1% ; this value is quite small in terms of experimental accuracy which is common in the field in question . upon halving the step size, the error drops down by 1 - 2 orders of magnitude .however , the error in a particular point decreases non - uniformly , actually as expected due to the global basis functions using .the right panel of the [ fig : h2plusasconv ] displays the pairwise differences for the three different angular bases with 4 , 8 , and 16 at the fixed step 0.2 , as well as for 0.2 and .one can see that the error due to the angular basis small size is much larger than that due to the radius step .this is related to the molecular potential non - centrality . for error has magnitude about 25% ( in the vicinity of maxima ) , whereas upon the basis size increasing up to the error drops down to 6% .therefore has been chosen for the main part of our calculations .we have developed the algorithm for the dsbt that possesses the advantages of orthogonality , performing fastness and uniform grid .our approach is based upon the sbt factorization into the two subsequent orthogonal transforms , namely the fast fourier transform ( requiring the operations number ) and the orthogonal transform founded on the discrete orthogonal legendre polynomials ( requiring the operations number ) .our discrete transform converges to the exact sbt as the square of the momentum grid step . 
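the convergence rates quoted above can be estimated in practice from pairwise differences of solutions on successively refined grids, as was done for the h2+ probability densities. a generic sketch of such an estimate (with a synthetic second-order error model standing in for real data) is:

```python
import numpy as np

def observed_order(f_h, f_h2, f_h4):
    """Empirical convergence order from solutions on grids with steps h, h/2, h/4.

    If the error scales as h**p, the pairwise differences of successively refined
    solutions (restricted to common grid points) satisfy
    ||f_h - f_h2|| / ||f_h2 - f_h4|| ~ 2**p.
    """
    e1 = np.linalg.norm(f_h - f_h2)
    e2 = np.linalg.norm(f_h2 - f_h4)
    return np.log2(e1 / e2)

# toy check: a known solution polluted by an error that scales as h^2
x = np.linspace(0.0, 1.0, 11)
exact = np.sin(np.pi * x)
f1, f2, f3 = exact + 0.4**2, exact + 0.2**2, exact + 0.1**2
print(observed_order(f1, f2, f3))    # -> 2.0
```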
besides, basing on dsbt and dvr , we have also elaborated the 3d tdse solving method ( dsbt - dvr ) .the examination of the dsbt - dvr algorithm has demonstrated its efficiency for the purposes of solving of time - dependent problems in atomic and molecular physics .an dsbt based approach allows to evaluate the free spherical wave functions evolution the more accurately , the more is the spatial region size .it appears to be an advantage in comparison to another methods applied in this field .this is especially helpful for problems like the modelling of the attosecond streaking approach and other pump probe techniques , since they require the computation of the wavefunction evolution under the joint action of long - lasting pulses and the weak coulomb field on large spatial regions .another important preference of the method proposed is the fast convergence over grid step when applied to the problems with smooth ( or artificially smoothed ) potentials .it should be noted that the current dsbt - dvr version does not make any use of another helpful dbbt feature , namely the dsbt capability to be employed for the aim of the evaluation of multi - center integrals .the leveraging of this capability for the solving of both sse and tdse for the multielectron molecules is expected to be the matter of our future work .the author thanks dr .tatiana sergeeva for help in the preparing of the text of this paper .also , the author wish to thank dr .serguei patchkowskii for helpful discussions .the author acknowledges support of the work from the russian foundation for basic research ( grant no .14 - 01 - 00520-a ) .let us begin with the demonstration of the eq . validity .consider the sum (i , n-1 ) i^s\end{aligned}\ ] ] it may be transformed as follows according to , if the sums in the last string equal zero , hence .\label{sigmas}\end{aligned}\ ] ] here we also used the dlop parity property now let us take an arbitrary discrete polynomial and consider the sum (i , n-1 ) p(i ) = \sum_{s=0}^\mu c_s \sigma_s\end{aligned}\ ] ] by using eq .we obtain \label{sum1ddpp}\end{aligned}\ ] ] next , one can construct the weighted sum (i , n-1 ) p(i ) w_i(n ) = \\ & & \sigma -\frac{1}{2}\nabla [ p_\ell](0,n-1 ) p(0 ) -\frac{1}{2}\nabla [ p_\ell](n , n-1 ) p(n ) \end{aligned}\ ] ] making use of the eqs.([sum1ddpp ] , [ backdiff ] , [ dlopparity ] ) and the normalization condition yields (i , n-1 ) p(i ) w_i(n ) = \nonumber \\ & & \frac{1+p_\ell(-1,n-1)}{2 } \left [ ( -1)^\ell p(n ) - p(0 ) \right].\end{aligned}\ ] ] after the division of both sides of this equation by /2 $ ] , we arrive to eq.. now let us prove eq .. as this equation is symmetric with respect to the exchange of indices and , for definiteness we shall assume .since for , one can write [ \delta_{ml } - l_{ml } ] = - l_{nm } + \sum_{l = p_\ell}^{m } l_{nl } l_{ml } \label{orthoimlsec}\end{aligned}\ ] ] for sake of the notation simplicity , from now on we designate so , according to eq . , eq . holds true , when the substitution of elements definition from eq .yields by change of the summation index we can rewrite as due to eq ., ddlop have the parity property the sum in eq .might be split into the two sums as here we made use of the fact that when at even .next we apply the parity property to both ddlops in the summand of the second sum in eq . and make the summation index change the latter sum then might be combined with the ( remained unchanged ) first sum in eq . to get as is the polynomial of the order , we can apply eq . 
to obtain finally , after using the parity property eq ., the result becomes since for , we have thus proved eq . andtherefore eq .. 99 a.j.s .hamilton , uncorrelated modes of the nonlinear power spectrum , mnras 312 ( 2000 ) 257 - 284 .r. bisseling , r. kosloff , the fast hankel transform as a tool in the solution of the time dependent schrdinger equation , journal of computational physics 59 ( 1985 ) 136 .y. sun , r. c. mowrey , and d. j. kouri , spherical wave close coupling wave packet formalism for gas phase nonreactive atom - diatom collisions , j. chem .87 ( 1987 ) 339 .s. ronen , d. c. e. bortolotti , and j. l. bohn , bogoliubov modes of a dipolar condensate in a cylindrical trap , phys .a 74 ( 2006 ) 013623 .talman , numerical methods for multicenter integrals for numerically defined basis functions applied in molecular calculations , int .j. quantum chem .93 ( 2003 ) 72. m. toyoda , t. ozaki , numerical evaluation of electron repulsion integrals for pseudoatomic orbitals and their derivatives , j. chem .( 2009 ) 124114 .talman , numerical fourier and bessel transforms in logarithmic variables , j. comput . phys .29 ( 1978 ) 35 .sharafeddin , h.f .bowen , d.j .kouri , d.k .hoffman , numerical evaluation of spherical bessel transforms via fast fourier transforms , j. comput .100 ( 1992 ) 294 .d. lemoine , the discrete bessel transform algorithm , j. chem .phys . 101 ( 1994 ) 3936 .m. toyoda and t. ozaka , fast spherical bessel transform via fast fourier transform and recurrence formula , computer physics communications 181 ( 2010 ) 277 .siegman , quasi fast hankel transform , opt . lett . 1 ( 1977 ) 13 .talman , numsbt : a subroutine for calculating spherical bessel transforms numericaly , comput .comm . 180 ( 2009 ) 332 .d. lemoine , a note on orthogonal discrete bessel representations , j. chem .( 2003 ) 6697 .m. r. hermann , j.a .fleck , split - operator spectral method for solving the time - dependent schrodinger equation in spherical coordinates , phys .a 38 ( 1988 ) 6000 . v.v. serov , b. b. joulakian , d. v. pavlov , i. v. puzynin , and s. i. vinitsky , ( e,2e ) ionization of h+2 by fast electron impact : application of the exact nonrelativistic two - center continuum wave , phys .a 65 ( 2002 ) 062708 .
we propose an algorithm for the orthogonal fast discrete spherical bessel transform on a uniform grid . our approach is based upon the spherical bessel transform factorization into two subsequent orthogonal transforms , namely the fast fourier transform and the orthogonal transform founded on the derivatives of the discrete legendre orthogonal polynomials . the utility of the method is illustrated by its implementation for the numerical solution of the three - dimensional time - dependent schrödinger equation .
keywords : spherical bessel functions , hankel transforms , time - dependent schrödinger equation
pacs : 02.30.uu , 31.15.-p
for a long time , solar dynamo theory was in an advantageous situation compared to planetary dynamo theory . in the sun , dynamo generated magnetic fields can be directly measured on the boundary of the turbulent , conducting domain where they are generated ; and the timescales of their variations are short enough for direct observational follow - up .solar observations also put many constraints on the motions in the convective zone , significantly restricting the otherwise very wide range of admissible mean field dynamo models . in the past 15 years , however , planetary dynamo theorists have turned the table .realistic numerical simulations of the complete geodynamo have been made possible by the rapid increase in the available computing power . while the parameter range for which such simulations are feasible is still far from realistic , extrapolations based on the available results have allowed important inferences on the behaviour of planetary dynamos ( , ) . in the light of these developments ,learning from solar dynamo theory may have lost much of its former appeal to geophysicists , especially as it has become increasingly clear that the two classes of dynamos operate in fundamentally different modes , under very different conditions . yet , keeping track with advance in the other field may not be without profit for either area .even though the overall mechanisms may be very different , there may be many elements of each system where intriguing parallels exist .a comprehensive review of solar dynamo theory is beyond the scope of the present article ; for this , we refer the reader to papers by , , and . aside from reproducing the dipole dominance and other morphological traits of the geomagnetic field , two aspects of the geodynamo that are critical for judging the merits of its models are field reversals and long - term variations : whether or not a model can produce such effects in a way qualitatively , and if possible quantitatively similar to the geological record , has become a testbed for geodynamo simulations .it may thus be of special interest to review our current understanding of the analogues of these phenomena in the sun .this is the purpose of the present paper . in section 2we outline what solar observations suggest about the causes of reversals , i.e. the babcock leighton mechanism .two questions naturally arising from this discussion are given further attention in sections 3 and 4 .section 5 briefly discusses some issues related to long - term variations of solar activity , while section 6 concludes the paper , pointing out some interesting parallel phenomena in solar and planetary dynamos .information about large scale flows in the solar convective envelope can place important constraints on dynamo models . in the solar photosphere (the thin layer where most of the visible radiation originates ) these flows can be directly detected by the tracking of individual features and by the doppler effect .but in recent decades _ helioseismology _ ( a technique analoguous to terrestrial seismology , see review by ) has also shed light on subsurface flows . in particular , the internal _ differential rotation _pattern is now known in much of the solar interior .it is characterized by a marked latitudinal differential rotation ( faster equator , slower poles ) throughout the convective envelope , while the radiative interior rotates like a rigid body . 
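for orientation, the surface latitudinal differential rotation is commonly parametrized by a three-term law in sin^2 of the latitude. the sketch below uses rounded, doppler-type coefficients quoted in the literature (assumptions for illustration, not values from this paper) and reproduces the roughly 25-day equatorial and 34-day polar rotation periods.

```python
import numpy as np

def rotation_rate(lat_deg, A=14.7, B=-2.4, C=-1.8):
    """Sidereal surface angular velocity in deg/day vs. heliographic latitude.

    omega = A + B*sin^2(lat) + C*sin^4(lat); the coefficients are rounded values
    of the commonly quoted Doppler rotation law, used here only for illustration.
    """
    s2 = np.sin(np.radians(lat_deg)) ** 2
    return A + B * s2 + C * s2**2

for lat in (0, 30, 60, 90):
    print(lat, 360.0 / rotation_rate(lat))   # rotation period in days
# equator ~24.5 d, poles ~34 d: "faster equator, slower poles"
```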
a thin transitional layer known asthe _ tachocline _ separates the two regions ._ meridional circulation , _ on the other hand is currently only known in the outermost part of the convective zone where it is directed from the equator towards the poles .a return flow is obviously expected in deeper layers but this has not yet been detected .the invention of the magnetograph in 1959 marked a breakthrough in the study of solar magnetism .the output of this instrument , the magnetogram , is basically an intensity - coded map of the circular polarization over the solar disk .circular polarization in turn is due to the zeeman effect and , in a rather wide range ( up to field strengths of kg ) , it scales linearly with the line of sight component of the magnetic field strength .apart from the saturation at kilogauss fields , then , a magnetogram is essentially a map of the line of sight magnetic field strength over the solar disk in the photosphere .conventionally , fields with northern polarity are shown in white , while fields with southern polarity are shown in black .figure 1 shows an arbitrarily chosen magnetogram as an example .it is immediately clear that the strongest magnetic concentrations , called _ active regions _ ( ar ) , occur in bipolar pairs .filtergrams and non - optical images showing the higher layers of the solar atmosphere confirm that these pairs mark the footpoints of large magnetic loops protruding from the sun s interior into its atmosphere . in white light imagesthese active regions appear as dark _ sunspot groups _ and bright , filamentary _ facular areas . _the lifetime of spots , facul and active regions is finite : turbulent motions in the solar photosphere ultimately lead to their dispersal over a period not exceeding a few weeks .in addition to the strong active regions , fig . 1 also displays some weaker and more extended magnetic concentrations , also in bipolar pairs ( e.g. a bit upper left from the center ) .these features , only seen in magnetograms , are the remains of decayed active regions , the bipolar pair of flux concentrations being dispersed over an ever wider area of the solar surface . ultimately, all that is left is a pair of _ unipolar areas _ areas of quiet sun where the ubiquitous small - scale background magnetic field is dominated by one polarity or another .we can see that the bipolar magnetic pairs are mostly oriented in the e - w direction , ( with a slight tilt , to be discussed below ) . in the northern hemisphere ,the n polarity patches lie to the west of their s polarity pairs , while in the southern hemisphere the situation is opposite . 
in the image ,the direction of solar rotation is left to right and the rotational axis is approximately vertical and in the plane of the sky .`` western patches '' are therefore referred to as the _ preceding polarity _part of the active region , while their eastern pairs are the _ following polarity _ part .the rule we have just noticed then says that , at any given instant of time , preceding polarities of solar active regions are uniform over one hemisphere and opposite between the two hemispheres .measurements performed during the course of several solar cycles show that , in any given hemisphere , the preceding polarity remains unchanged during the course of each 11-year solar cycle , while it alternates between cycles .this regularity , known as _hale s polarity rules , _ is schematically illustrated in fig .this implies that the true period of solar activity is 22 years in the magnetic sense .this sketch also shows one further regularity in the polarities .the weak large - scale magnetic field is usually opposite near the two rotational poles , and these polarities also alternate with a 22 year periodicity .however , the phase of this 22 year cycle is offset by about from the active region cycle , i.e. magnetic pole reversal does not occur in solar minimum .instead , reversals typically take place 12 years after solar maximum , right in the middle of a cycle ( as cycle profiles are asymmetric ) . systematic magnetograph studies coupled with pioneering work on magnetic flux transport shed light on the apparent origin of the field reversal already in the 1960s ( , ) . for a better understanding of how this so - called babcock leighton mechanism works , consider a _synoptic magnetic map _ like the one shown in fig .such synoptic maps are essentially constructed by taking a narrow vertical strip from the center of each daily magnetogram ( i.e. along the central meridian ) and sticking them together from left to right , in a time sequence covering one synodic solar rotation . in this waywe arrive at a `` quasi - instantaneous '' ( in as much as years ) magnetic map of the full solar surface . in fig .3 we see further ample evidence of hale s polarity rules , but what is more interesting is the systematic deviation of the orientation of bipolar pairs from the e w direction .it is clear that following polarity parts of active regions are systematically closer to the poles than their preceding polarity counterparts , and the tilt angle of the ar axis increases with heliographic latitude a phenomenon known as _ joy s law ._ as a consequence of joy s law , upon the decay of active regions the resulting following polarity unipolar areas will be predominantly found on the poleward side of the active latitude belt .now recall from fig . 
2 that in the early phase of a solar cycle , the following polarity in a given hemisphere is opposite to the polarity of the corresponding pole .as the decay of ever newer active regions replenishes its magnetic flux , this following polarity belt , located poleward of the active latitudes , will expand towards the pole , a process greatly helped by turbulent magnetic diffusion and by the advection of the large scale magnetic fields due to the sun s large scale meridional circulation .ultimately , shortly after solar maximum , the preceding polarity patches around the two poles shrink to oblivion , and following polarity takes over : a reversal has taken place .the process can be followed in the `` magnetic butterfly diagram '' shown in fig .each pixel in this plot represents the intensity - coded value of the longitudinally averaged magnetic field strength at the given heliographic latitude and time .the butterfly wing shaped areas of intense magnetic activity at low latitudes represent the activity belts , migrating from medium latitudes towards the equator during the course of each solar cycle .the poleward drift apparent at high latitudes , in turn , corresponds to the poleward expansion of following polarity areas discussed above .it is clear that the reversal of the predominant magnetic polarity near the poles is due to this poleward drift , confirming the babcock leighton scenario .however compellingly the observations argue for such a scenario , two open questions remain .what is the origin of the latitude - dependent tilt of active regions ( joy s law ) ?+ in the babcock leighton scenario this tilt is clearly the ultimate cause of flux reversals . dynamo models rather naturally lead to predominantly toroidal , i.e. e w oriented magnetic fields , so the tilt compared to this prevalent direction is likely to develop during the process of _ magnetic flux emergence _ from deeper layers into the atmosphere .this leads us to discuss flux emergence models in section 3 .do magnetic flux redistribution processes seen at the surface indeed play an important role in the dynamo , or are they just manifestations of similar , more robust processes taking place deep within the sun ?the nice consistent cause - and - effect chain involved in the babcock leighton scenario ( ar tilt ar decay -polarity unipolar belt ;-polarity belt flux transport reversal ) seems to argue strongly for the first option .however , as we will see in section 4 below , keeping the surface physically decoupled from the deeper layers is not easy , and such models invariably need to rely on some rather dubious physical assumptions .thus , while parametric models of the first type can be fine - tuned to show impressive agreement with observations , their physical foundations remain shaky .conversely , models based on more sound and plausible physical assumptions have had limited success in reproducing the details of observations .this constitutes the main dichotomy in current ( mainstream ) solar dynamo thinking , to be discussed in section 4 .the current mainstream picture of the subsurface magnetic structure of solar active regions is sketched in fig .the observed distribution and proper motion patterns of sunspots and other magnetic elements in active regions are strongly suggestive of the so - called _ magnetic tree scenario : _ the cluster of smaller and larger magnetic flux tubes that manifest themselves as sunspots , facular points etc . 
in the photospheric layers of an active regionare all connected in the deeper layers in a tree - like structure .the characteristic sizes of unipolar patches in a typical bipolar active region suggest that the trunk of this magnetic tree starts to fragment into branches at a relatively shallow depth , on the order of % of the thickness of the convective zone . during the emergence of the tree structure , magnetic elements , corresponding to mesh points of the tree branches with the surface ,naturally seem to converge into larger elements , ultimately forming large sunspots .this is just how sunspots are observed to be formed .the natural initial configuration for such a magnetic loop , fragmented in its upper reaches , is a strong toroidal ( i.e. horizontal and e w oriented ) magnetic flux bundle lying below the surface .the high field strength is a requirement imposed by the strict adherence to hale s polarity rules .indeed , there are almost no exceptions to these rules among large active regions , so the drag force associated with the vigorous turbulent convective flows in the solar envelope must be small compared to other forces acting on the tube , in particular the curvature force . as the total magnetic flux of the tube must be similar to the flux of large active regions ( ) , from this condition it can be estimated that the field strength must significantly exceed the equipartition field where the magnetic and turbulent pressures are equal ( ) .it follows that g ( t ) in the deep convective zone .it is however well known ( ) that such strong horizontal magnetic flux tubes can not be stably stored in a superadiabatic or neutrally stable environment .the magnetic pressure implies that the gas pressure inside the flux tube is reduced in order to maintain pressure equilibrium with the environment .the associated decrease in density of the compressible plasma makes the flux tube magnetically buoyant . in order for such tubes to remain rooted in the solar interiorwhile only a finite section of them emerges in the form of a loop , the initial tube must reside below the convective zone proper , in the subadiabatically stratified layer below .this layer coincides with the region of strong rotational shear between the rigidly rotating solar interior and the differentially rotating convective envelope , the above mentioned tachocline .the emergence of flux loops formed on such strong toroidal flux bundles lying in the tachocline is driven by an instability. often referred to as the parker instability , this is a buoyancy driven instability of finite wavenumber perturbations of the tube ( while the mode remains stable , so the tube remains rooted in the tachocline ) .the first models of this process employed the thin flux tube approximation ( ) where the tube is essentially treated as a one - dimensional object .the emergence process was first followed into the nonlinear regime , throughout the convective zone , by ; the action of coriolis force and spherical geometry were incorporated in later models . sincethe mid-1990s models have started to break away from the thin flux tube approximation and by now full 3d simulations of the emergence process have become the norm ( see review by ) .indeed , flux emergence models represent probably the most successful chapter in the research of the origin of solar activity in the last few decades . 
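the equipartition argument invoked above can be made concrete with a one-line estimate: equating the magnetic energy density to the kinetic energy density of convection gives b_eq = v_c * sqrt(4*pi*rho) in cgs units. the density and convective velocity below are assumed, typical mixing-length values for the base of the solar convective zone, not numbers taken from this paper.

```python
import math

# Equipartition field at the base of the convective zone:
#   B_eq^2 / (8*pi) = rho * v_c^2 / 2   =>   B_eq = v_c * sqrt(4*pi*rho)   (cgs)
rho = 0.2        # g cm^-3, assumed typical density at the base of the convective zone
v_c = 5.0e3      # cm s^-1 (~50 m/s), assumed typical mixing-length convective velocity
B_eq = v_c * math.sqrt(4.0 * math.pi * rho)
print(B_eq)      # ~8e3 G, i.e. of order 10^4 G; the flux-tube fields discussed
                 # below exceed this by roughly an order of magnitude
```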
as the total magnetic flux of the toroidal flux tubeis set equal to typical active region fluxes , emergence models are left with only one free parameter : the initial field strength .( twist is introduced as a further free parameter in 3d models . ) comparing models with different values of to the observed structure and dynamics of sunspot groups , it was recognized about two decades ago ( ) that the value of must be surprisingly high , in the order of g , exceeding by an order of magnitude .this came as a surprise , as flux expulsion in a turbulent medium was known to concentrate diffuse magnetic fields in tubes and amplify them to , but not to significantly higher values .yet there are at least four independent lines of argument to support this assertion . 1 ._ fragmentation depth ._ as the rise through the convective zone takes place on a relatively short timescale ( ) , matter inside the flux tube expands adiabatically .most of the convective zone ( except the uppermost few hundred km ) is also very nearly adiabatically stratified , and the magnetic pressure is negligible compared to the thermal pressure here , so the relative density contrast between the inside of the tube and its surroundings will remain relatively small . the field strength inside the loop is then given by , the 0 index referring to values at the bottom of the convective zone .the resulting values of as a function of depth are plotted in fig .[ fig : fluxemerg ] .it is apparent that at a certain depth drops below the turbulent equipartition field strength . above this levelthe magnetic field is unable to suppress turbulence and the external turbulent motions will penetrate the tube .flux expulsion processes taking place in parallel with the further emergence of the loop are then expected to fragment the top of the loop into a number of smaller flux tubes , resulting in the magnetic tree structure . as we have seen , observations suggest that this fragmentation occurs at a depth of a few tens of megameters : fig .[ fig : fluxemerg ] then clearly suggests g .emergence latitudes ._ for weaker tubes , the buoyancy and the curvature force are also weaker , so the coriolis force , independent of the field strength , will play a more dominant role in the dynamics of the emerging loop .tubes with g are so strongly deflected by the coriolis force that they will emerge approximately parallel to the rotation axis .as the bottom of the convective zone lies at 0.7 photospheric radii , weak flux tubes emerging from here can not reach the surface at latitudes below 45 degrees , in contradiction to observations ( , ) ._ joy s law ._ tubes with higher values of will emerge approximately radially , yet the effect of coriolis force on them is not negligible .the meridional component of the coriolis force acting on the downflows in the tilted legs of the emerging loop will twist the plane of the loop out of the azimuthal plane , resulting in a tilt in the orientation of active regions relative to the e w direction .this tilt increases with heliographic latitude , thus explaining the observed joy s law .quantitative comparison between models and observations again shows that joy s law is best reproduced for g ( ) .sunspot proper motions . 
_the azimuthal component of the coriolis force acting on the downflows in the tilted legs of the emerging loop has two consequences .on the one hand , as this component is westward in both legs , it will distort the shape of the emerging loop so that it will become asymmetrical , the following leg being less inclined to the vertical than the preceding leg .on the other hand , this westward force results in a wavelike translational motion of the loop as a whole compared to the ambient medium : the active region is then expected to rotate faster than quiet sun plasma .it is indeed well known that sunspots and other magnetic tracers generally show some superrotation compared to the doppler rotation rate of the ambient plasma . in the past, some confusion arose due to the fact that this superrotation appears to be time dependent : newborn sunspot groups rotate fastest , their rotation rate steadily declines during the growth phase of the group , until it becomes stagnant at a rate only slightly above the plasma rotation rate from the time when the group reaches its maximal development .this apparent change in the rotation rate , however , was shown to be a purely geometrical projection effect ( ) , as a consequence of the asymmetrical shape of the loop ( cf .[ fig : asymm ] ) . again, detailed quantitative comparisons of sunspot proper motion observations with the dynamics of emerging flux tube models ( , ) indicate optimal agreement for g . in summary :flux emergence models have led to the rather firm conclusion that solar active regions are the product of the buoyancy driven rise of strong magnetic flux loops through the convective zone .the loops arise from small perturbations of strong toroidal flux bundles lying in the solar tachocline , at the bottom of the convective zone .( these tubes may be preexistent , or alternatively they may detach from a continuous flux distribution as a result of the perturbation . ) a number of independent arguments indicate that the field strength in these toroidal tubes is on the order of g .the origin of the tilt in active regions orientations relative to the e w direction is clearly identified as the action of coriolis force on the emerging flux loops , twisting them out of the azimuthal plane .as we have seen in sections 2 and 3 , the babcock leighton mechanism offers a very attractive explanation of the cyclic polar reversals and activity variations observed on the sun . during the flux emergence process , the coriolis force twists the plane of the flux loops out of the azimuthal plane , so they acquire a poloidal magnetic field component . with the turbulent dispersal of the active regions , this poloidal field component contributes to the large - scale diffuse solar magnetic field .advected towards the poles by meridional circulation , it ultimately brings about the reversal of the global poloidal magnetic field of the sun .this reversed poloidal field is then advected down into the tachocline by the meridional circulation near the poles .continuity requires that plasma advected to the poles near the surface by meridional circulation must be returned to the equator at some depth ; in the simplest case of a one - cell circulation this will occur near the bottom of the convective zone .this deep equatorward counterflow then advects the poloidal field towards the equator at some slower speed , while differential rotation winds it up , resulting in an ever stronger toroidal field component . 
by the time it reaches latitudes below about ,this advected toroidal field reaches , at least intermittently , the intensity of g and starts to erupt in the form of buoyancy driven loops , closing the cycle .the equatorward propagation of active latitudes , manifest in the butterfly diagram , is thus solely due to the advection of toroidal fields by the meridional circulation .flux transport dynamos : + interface dynamos : + this scenario was already qualitatively outlined by ( with the difference that he attributed the rise and tilt of flux loops to helical convection ) . bynow we have detailed quantitative models for the solar dynamo along these lines ( , ) . as all migration phenomena in the butterfly diagramare explained by the advection of magnetic flux in this picture , these models are known as _ flux transport dynamos ._ flux transport models have become quite successful in reproducing many of the observational details such as the shape of the butterfly diagram .this is to a large extent due to their very good parametrizability .there are , however , serious doubts concerning their physical consistency .one issue concerns vertical flux transport . in order to have a poleward flux transport near the surface and an equatorward transport near the bottom, the surface and the tachocline must be kept _incomunicado _ on a timescale comparable to the solar cycle . however , simple mixing length estimates suggest that the turbulent magnetic diffusivity in the convective zone is in the range /s , so the timescale for the surface field to diffuse down to the bottom across the the convective zone of depth km is a few years , certainly less than the solar cycle length .the above mentioned value of the diffusivity is confirmed by calibrations based on surface flux redistribution .so , in order to effectively decouple surface and bottom , flux transport dynamos invariably need to rely to some ad hoc assumptions regarding magnetic diffusivity : suppressing it in the bulk of the convective zone , making it highly anisotropic etc . a further difficulty is related to the flow pattern envisaged in these kinematic models .the equatorward return flow of meridional circulation is assumed to spatially overlap with the tachocline , i.e. the subadiabatic layer where the toroidal field can be stably stored and where it is amplified by the strong rotational shear . but this assumption is very dubious from a dynamical / thermal point of view . to penetrate the subadiabatically stratified upper radiative zone , the plasma partaking in the meridional flow would have to get rid of its extra entropy so buoyancy will not inhibit its submergence .this , however , can only occur on a slow , thermal timescale .numerical estimates show that the maximal circulation speed in the subadiabatic tachocline is a few cm / s , way too slow for flux transport models to work .an alternative approach to the solar dynamo is , then , to try to construct models from first principles , without introducing physically unsubstantiated assumptions .such a model can not ignore the achievements of flux emergence models and needs to be based on the assumption that the strong toroidal flux tubes responsible for solar activity phenomena reside in the subadiabatic tachocline , and then are presumably also generated there , given the strong rotational shear . significant work has been done on the role of a stably stratified convective overshoot layer , coincident witha shear layer ( tachocline ) in large - scale dynamos ( cf. 
and ) .the slow meridional circulation allowed here can not be responsible for the latitudinal migration of the toroidal field as seen in the butterfly diagram .instead , this must be the manifestation of a classic dynamo wave , as already suggested by .the tachocline is , however , probably not turbulent enough to support a strong -effect , so the site of the -effect must be in the convective zone , adjacent to the tachocline , where convective downflows diverge , resulting in in the northern hemisphere as required for an equatorward propagating dynamo wave at the low latitudes where . in these _interface dynamos _, then , the dynamo wave is excited as a surface wave on the interface between the tachocline and the convective zone .their classic analytical prototype was constructed by . at higher latitudes , where this model naturally results in a poleward propagating dynamo wave , possibly explaining the poleward migration of unipolar areas and other phenomena in this part of the sun .this is an attractive feature of these models , but the question naturally arises , whether the equatorward return flow that must be present in the lower convective zone if not in the tachocline , i.e. on one side of the interface will affect these results .this was examined by in an extension of parker s analytical work to the case of a meridional flow .it was found that for parameters relevant to the solar case the meridional flow is unable to overturn the direction of propagation of dynamo waves , nor will it significantly affect their growth rates . for a related study of the effect of other flux transport mechanisms on the interface dynamos see .a number of detailed numerical interface dynamos models have been constructed for the sun ( , , ) .it must be admitted that at present they are unable to reproduce the observed features of the solar activity cycle as satisfactorily as some flux transport models can .however , this is clearly a consequence of the fact that these models lack the kind of physical arbitrariness that characterizes some flux kinematical transport models , where the arbitrary prescription of flow geometries and amplitudes ( esp. for the meridional flow ) leaves much more space to play around with parameters until an acceptable fit to observations results .in the geodynamo , field reversals and long - term variations are closely related .indeed , reversals are the most important kind of long term variation . in the sun, however , the field regularly reverses in every 11-year cycle : in fact , reversals are the essence of the 11/22-year cycle . long term variations , in turn , are a completely independent topic .it is impossible to give a comprehensive review of this area in the present introductory paper ( see , for much more exhaustive overviews ) ; instead , we limit ourselves to mentioning some salient points and recent results .figure [ fig : spotcycle ] presents the variation of the relative sunspot number in the last three centuries , for which the most reliable direct data exist . from historical solar observations going back another century we know that immediately before the period covered by fig .[ fig : spotcycle ] , the sun underwent an unusally quiet period lasting nearly 70 years : in this so - called maunder minimum there were hardly any spots seen on the sun at all . 
at the otherextreme lies the modern maximum : the series of unusually strong cycles that started in the mid-20th century and seems to be just about to come to an end now .various terrestrial proxies of solar activity have made it possible to reconstruct long term activity variations ( albeit not necessarily individual cycles ) over a substantially longer time span .the best results were found from studies of the abundance of the cosmogenic isotope in greenland ice cores .these studies led to a reconstruction of the history of solar activity in the last 9000 years with a time resolution of about 10 years ( ) .recently , data with even better , annual resolution have become available for the last six centuries ( ) .one interesting result from these studies concerns the histogram of decadal activity levels ( ) .it was found that the overall shape of this histogram is compatible with a normal distribution ; however , significant excesses or `` shoulders '' appear at both extremes .in other words , times of very low and very high solar activity are overabundant relative to a gaussian statistics .these results suggest that grand minima and grand maxima are more than random fluctuations : they are indeed physically distinct states of the dynamo .long term variations in the intermediate states , in turn , seem to be driven by stochastic effects , resulting in nearly gaussian statistics . attempts to explain the long term variation in solar activity , as well as grand minima and maxima , include stochastic fluctuations in dynamo parameters ( , ) ; nonlinear dynamos with chaotic behaviour ( , ) or with two alternative stationary dynamo solutions ( ) .the shape of the histogram discussed above suggests a bimodal solution with strong stochastic forcing resulting in an actual solution that random walks in between the attractors for most of the time .it should be noted that radionuclide based reconstructions of solar activity involve several uncertainties . on the long timescales we are concerned with, the most important such effect is the parallel long term variation in the geomagnetic field .the atmospheric cosmic ray flux , and thereby the radionuclide production rate , is the net result of the shading effects of both the interplanetary and the terrestrial magnetic fields , so long - term normalization of the solar modulation crucially depends on the correct subtraction of the geomagnetic contribution . 
issues such as just how exceptionally strong the most recent grand maximum ( the modern maximum ) was , are strongly influenced by these effects .at the most basic level , the same processes generate magnetic field in the earth s core and inside the sun : the shearing of magnetic field lines by differential rotation ( -effect ) and their twisting by helical motions ( -effect ) that obtain a preferred handedness from the action of coriolis forces .also , some magnetic phenomena may have similar causes .for example , paired flux spots at low latitudes in the geomagnetic field at the core - mantle boundary have been tentatively explained by the expulsion of toroidal flux tubes ( ) , in analogy to the generation of sunspots .but although this interpretation is supported by some geodynamo models ( ) , other explanations for low - latitude magnetic structures have been put forward ( ) .meridional flow is thought to play the essential role for reversals of the solar magnetic field .this has also been demonstrated in a simple reversing geodynamo model ( ) .however , the reversal behaviour in this model is nearly cyclic as in case of the sun , in contrast to the stochastic reversals of the geomagnetic field . in a less idealized geodynamo model with random reversals ,impulsive upwellings have been identified as the cause for polarity changes ( ) .these upwellings transport and amplify a multipolar magnetic field from depth to the outer boundary .while possible analogies between the solar dynamo and the geodynamo can stimulate our thinking , we must keep their limitations in mind . which differences in physical conditions lead to the rather distinct behaviour of the solar dynamo and the geodynamo ? one important difference is that the plasma in the solar convection zone is sufficiently compressible and that the field strength is high enough so that magnetic pressure and magnetic buoyancy play an essential role for the dynamics of flux tubes .these effects are probably unimportant in earth s core .another difference is that the coriolis force has a stronger influence in the geodynamo than it has in the slowly rotating sun .this notion is supported by the observation that much more rapidly rotating stars of low mass seem to have strong large - scale magnetic fields that are frequently dominated by the axial dipole component ( ) .their observed field strengths follow the same scaling law as the observed fields of earth and jupiter and the field intensity found in geodynamo models at sufficiently rapid rotation ( ) .a third difference arises from the much slower motion and therefore lower magnetic reynolds number in the geodynamo .the moderate value of the magnetic reynolds number makes the magnetic induction process in the earth s core amenable to direct numerical simulations without the need to take recourse to turbulent magnetic diffusivities or parameterized turbulent -effects .this is perhaps the most important reason for the success of geodynamo models in reproducing many observed properties of the geomagnetic field without need for ad - hoc assumptions .however , our more limited knowledge of the geomagnetic field at the top of the core , in comparison to that of the field in the solar photosphere , makes the task simpler for a geodynamo modeller .also , helioseismology has revealed the distribution of zonal flow in the solar convection zone and a fully consistent solar dynamo model must reproduce this flow pattern as well as the magnetic field properties .comparable information is lacking for 
the earth and a geodynamo model can be declared successful once it captures the general properties of the large - scale geomagnetic field .k. petrovay s work on this review was supported by the hungarian science research fund ( otka ) under grant no .k67746 and by the european commission through the rtn programme solaire ( contract mrtn - ct-2006 - 035484 ). + j .- f .donati , j. morin , p. petit , x. delfosse , t. forveille , m. aurire , r. cabanac , b. dintrans , r. fares , t. gastine , m. m. jardine , f. lignires , f. paletou , j. c. ramirez velez , s. thado , large - scale magnetic topologies of early m - dwarfs . , 545 ( 2008 )
a didactic introduction to current thinking on some aspects of the solar dynamo is given for geophysicists and planetary scientists .
the term cancer refers to hundreds types of neoplasms which share specific prototypical traits , summarized by hanahan and weinberg , collectively leading to malignant growth . during the past few decadesmolecular biologists have produced much cancer - related data which has shown cancer as an extremely stochastic , heterogeneous and complex disease . to analyze them, cancer research applies many concepts originally developed in different branches of science , such as applied mathematics , nonlinear dynamical systems , and statistical physics . at present , evolutionary nature of carcinogenesisis accepted and implications for cancer robustness ( exemplified by resistance to therapy ) are often emphasized .darwinian view to carcinogenesis implicitly puts genetic ( and epigenetic ) changes into microenvironmental context .consequently , tumor microenvironment is viewed as an eventual target for chemoprevention and cancer reversion . on the other hand ,anticancer research and therapy concentrate mainly on molecular data and tend to overlook its evolutionary nature . optimality model applied in experimental evolution describes the evolution as simple generalized trade - offs , presuming that genomes adapt successfully and freely enough and , consequently , genetic details become irrelevant .mathematical approaches to carcinogenesis often apply concepts of feedback and optimal control theory instead of molecular or genetic data .komarova et al . have solved the optimization problem for cancerous growth and proposed optimal strategies . however , as they state , the ideal ( optimal ) strategy may be not realistic due to many constraints in nature which escape modeling , but can make a strategy impossible . in the paper we concentrate on the abstract mechanisms of attaining an optimal strategy instead of the strategy itself .we view any process in which solutions replicate with errors and numbers of their copies depend on their respective qualities as an evolutionary optimization process .as carcinogenesis conforms the above definition , we identify it with an evolutionary optimization process and apply concepts and results of the long lasting research in the evolutionary optimization . keeping in mind an eventual therapeutic application , we focus on those aspects of evolutionary optimization which decrease or inhibit efficiency of the optimization process .strict adherence to the optimization framework has led us to counterintuitive implications .in the optimization theory , the quality of a solution is usually defined explicitly in the form of a _ fitness function _ ( also _ fitness landscape _ or _ fitness _ ) , quantifying how well a candidate solution meets required criteria .the ultimate aim of the optimization procedure is to find a solution for which the fitness function receives optimum value .large group of optimization algorithms , called evolutionary algorithms ( ea ) , performs the task by mimicking biological evolution implementing the genetic - like mechanisms , such as mutation , selection and reproduction . applyingea in various engineering optimization applications has enabled to recognize those aspects of fitness landscapes which support efficient evolutionary optimization and , at the same time , those which prevent it .theoretical analysis of the most popular ea variant , the genetic algorithms ( ga ) , has been performed by j. h. holland . 
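the canonical ga loop that is spelled out step by step in the next paragraph can be condensed into a short program. the sketch below is purely illustrative: the function names, the roulette-wheel selection variant and the onemax toy fitness are our own choices, and the bit string is used directly instead of being decoded into model parameters as in step 2.

```python
import random

def run_cga(fitness, n_bits=20, pop_size=40, p_cross=0.7, p_mut=0.01, generations=100):
    """Minimal canonical genetic algorithm on fixed-length bit strings.

    Fitness-proportional (roulette) selection, one-point crossover applied with
    probability p_cross, independent per-bit mutation with probability p_mut.
    """
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        total = sum(fits)

        def select():                       # roulette-wheel selection
            r, acc = random.uniform(0.0, total), 0.0
            for ind, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return ind
            return pop[-1]

        children = []
        while len(children) < pop_size:
            a, b = select()[:], select()[:]
            if random.random() < p_cross:   # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):            # per-bit mutation
                children.append([bit ^ (random.random() < p_mut) for bit in child])
        pop = children[:pop_size]           # child population replaces the parents
    return max(pop, key=fitness)

# toy usage: OneMax fitness (number of 1-bits); the optimum is the all-ones string
best = run_cga(fitness=lambda ind: float(sum(ind)) + 1e-9)
print(sum(best), "ones out of 20")
```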
In the simplest engineering applications, the canonical GA (CGA) proceeds as follows:
1. An initial population of random binary strings is generated:
   1011101011...
   0001100101...
   ...
   1011110010...
2. Each bit string is projected and scaled to obtain the real parameter set, and its fitness function value is determined.
3. A child population of bit strings is constructed from the parent population by applying the genetic operators (selection depending on the strings' fitnesses, crossover and mutation); once complete, it replaces the parent population.
4. Until some convergence criterion is met, go to step 2.
Theoretical analysis of this process made it possible to identify the driving force behind the biology-like manipulations of binary strings representing the parameters of the model. It was recognized that the population-based optimization algorithm is driven by the fitnesses of correlations of bits in the binary strings, called "schemas". A schema can be viewed as a bit pattern over the bit positions in the string. If the bit alphabet {0,1} is assumed, a schema can be constructed over the ternary alphabet {0,1,*}, where * matches both 0 and 1 at the respective position. Consider four binary strings
A: 10100101
B: 01011011
C: 11100010
D: 00010001
and two schemas, X and Y,
X: *1***01*
Y: 0*01***1
The schema X is contained in the strings B and C, the schema Y in the strings B and D. It is usually said that B and C are instances of the schema X, and that B and D are instances of the schema Y. The specificity and robustness of a schema H are quantified by its order, o(H), and its defining length, delta(H). The schema order is the number of fixed positions in the schema; the defining length is the distance between the leftmost and the rightmost fixed positions. To predict the number of instances of a schema H in generation t+1, Holland derived the schema theorem (ST),
\[ N(H,t+1) \;\ge\; N(H,t)\,\frac{f(H,t)}{\bar{f}(t)}\left[1 - p_c\,\frac{\delta(H)}{l-1} - o(H)\,p_m\right], \]
where N(H,t+1) and N(H,t) are the numbers of instances of the schema in generations t+1 and t, respectively, p_c is the crossover probability, p_m is the mutation rate, l is the length of the binary string, \bar{f}(t) is the average fitness in the population, and f(H,t) is the schema fitness in generation t, defined as the average fitness of all instances of the schema present in the population at time t. The ST ([schematheorem]) states that during GA optimization the number of above-average schemas increases at the expense of less favorable schemas. Moreover, it has been demonstrated that the GA allocates its trials among alternative solutions during the search (known as the k-armed bandit problem) in an optimal way as long as the schema fitnesses are correctly estimated.
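To make the loop above concrete, the following is a minimal Python sketch of the canonical GA and of schema matching. It is an illustration only: the string length, population size, operator rates and the simple counting-ones fitness function are arbitrary choices of ours, not part of the original analysis.

```python
import random

def fitness(bits):
    # Illustrative objective only ("one-max"): count the 1-bits.
    return sum(bits)

def select(population, fitnesses):
    # Fitness-proportionate (roulette-wheel) selection.
    total = sum(fitnesses)
    if total == 0:
        return random.choice(population)
    r, acc = random.uniform(0, total), 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if acc >= r:
            return individual
    return population[-1]

def crossover(a, b, p_c=0.7):
    # One-point crossover applied with probability p_c.
    if random.random() < p_c:
        point = random.randrange(1, len(a))
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(bits, p_m=0.01):
    # Independent bit-flip mutation with rate p_m.
    return [1 - b if random.random() < p_m else b for b in bits]

def canonical_ga(length=20, pop_size=50, generations=100):
    # Step 1: random initial population of binary strings.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):                      # Step 4: fixed budget as the stopping criterion.
        fits = [fitness(ind) for ind in population]   # Step 2: evaluation.
        children = []                                 # Step 3: selection, crossover, mutation.
        while len(children) < pop_size:
            a, b = select(population, fits), select(population, fits)
            c1, c2 = crossover(a, b)
            children.extend([mutate(c1), mutate(c2)])
        population = children[:pop_size]
    return max(population, key=fitness)

def is_instance(schema, bits):
    # A string is an instance of a schema over {0, 1, *} if it agrees on every fixed position.
    return all(s == '*' or s == str(b) for s, b in zip(schema, bits))

print(fitness(canonical_ga()))
print(is_instance('*1***01*', [0, 1, 0, 1, 1, 0, 1, 1]))   # string B from the example above: True
```

Counting the instances of a schema in successive generations with `is_instance` yields exactly the quantity N(H,t) that the schema theorem above bounds.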
As we have identified carcinogenesis with an evolutionary optimization process, any feature or mechanism which decreases the efficiency of that optimization process is interesting from the point of view of a possible therapeutic application. Recognizing the ST ([schematheorem]) as the principal mechanism driving evolutionary optimization, an explicitly optimization-preventing therapy can be identified with substituting wrong schema fitness estimates into ([schematheorem]). Below we list the reasons most often presented in the GA literature as preventing reliable estimates of schema fitnesses.
_i) Too large sampling errors._ The factors influencing the reliability of statistical sampling are the number of evaluated candidate solutions and their distribution in the search space (i.e., population heterogeneity). They should cover as much of the search space (fitness landscape) as possible so that convergence to the optimum is as probable as possible. Sampling errors can be reduced by an appropriate choice of the mutation rate. If the mutation rate is too low, the optimization gets stuck in a suboptimal solution (known as premature convergence); if it is too high, the optimization procedure turns into a so-called blind search.
_ii) Dynamic fitness landscape._ As the evolutionary optimization procedure converges towards the optimum solution in a stationary fitness landscape, the heterogeneity of the population decreases. The parts of the search space near the optimum become overpopulated while, at the same time, other parts are only sporadically populated, or even empty. The observed increase of population heterogeneity in a changed environment is well interpretable in the terms of evolutionary optimization, namely of evolutionary algorithms in dynamic environments. Mechanisms for maintaining heterogeneity have therefore been developed in optimization theory and studied in depth. An efficient transition from the old optimum to the optimum (or optima) of a new fitness landscape requires i) detection of the fitness landscape change, and ii) a response to that change. For that, candidate solutions must be appropriately distributed in the search space so that evolutionary algorithms can perform representative statistical sampling and determine reliable schema fitness estimates, which are necessary for the optimal allocation of trials during optimization. If there are no (or too few) evaluations in the changed part of the fitness landscape, the change goes undetected.
_iii) Deceptiveness of the fitness landscape._ To answer the question of which fitness landscapes are GA-hard, Bethke expressed a fitness function as a linear combination of Walsh monomials and showed the relationship between a schema's fitness and the Walsh coefficients. He then applied the Walsh transform to characterize functions as easy or hard for GA optimization. It has been understood that the principal problem for GA optimization is the class of deceptive fitness functions, in which lower-order schemas (with a lower number of defined bits) lead the search towards bad higher-order schemas. Goldberg showed the possibility of constructing high-order deceptive functions from low-order Walsh coefficients in special cases.
An exact convergence analysis of an EA requires a much better mathematical definition of the relevant fitness landscape, and a more explicit parametrization of a solution, than one typically has at one's disposal for biological systems. Regarding the schema formalism introduced above, a few differences between the CGA and carcinogenesis should be mentioned. First, carcinogenesis is an asexual process, therefore the crossover probability p_c in ([schematheorem]) equals zero. The second difference is that no spatial relation between offspring and their parents is assumed in ([schematheorem]). The third difference regards the unknown parametrization: obviously structures higher than nucleotides (or genes) are relevant. Nevertheless, none of these differences puts in doubt the importance of reliable estimates of schema fitnesses for the optimal allocation of trials during carcinogenesis. In addition, as is common in evolutionary optimization practice, we use the term optimum solution in the sense of a winning solution, i.e.,
the best solution obtained after a reasonably (or affordably) long optimization, rather than an exact, mathematically proven optimum.
*Fitness landscape.* The term represents a central concept in biological evolution as well as in optimization theory. In biology, fitness is usually understood in the sense of "reproduction" fitness, meaning that the more copies a solution has, the fitter it is (and vice versa); it obtains its factual meaning only in a specific environment and on specific time scales. During the genome's evolution, selection acts at two different hierarchical levels corresponding to the two units of replication: cells and organisms (multicellular bodies). As a result, the genome is a trade-off between two processes: i) maximization of the multicellular (organismic) reproduction fitness (acting over millennia), and ii) maximization of the cellular reproduction fitness (acting during an individual lifespan). The former process presumes social cooperation of cells (such as limited replicative potential, production of growth signals, sensitivity to antigrowth signals, cellular senescence, apoptosis, etc.) and a severe prohibition of cellular selfishness; the latter favors selfishness instead of cooperation. The trade-off is mediated by the initial genomic stability, which evolved to postpone short-scale evolution in the respective environment beyond the reproduction period of the respective organism.
*Heterogeneity.* Extensive genomic studies by Sjöblom et al. have clearly demonstrated extreme heterogeneity in colorectal cancer tumors. They revealed that mutational patterns in samples of colorectal cancers are unexpectedly individualistic, with none of the three most often mutated genes (APC, p53, K-ras) mutated in all of the samples. It has been shown that the sets of mutated genes in two samples of colorectal cancer overlap only to a small extent, and this is anticipated to be a general feature of most solid tumors. Similarly, summarizing studies in breast and renal cancer, Gatenby and Frieden concluded that probably no prototypical cancer genotype exists and that every tumor seems to possess a unique set of mutations, indicating that multiple genetic pathways may lead to invasive cancer, as would be expected in a stochastic nonlinear dynamical system. Clonal diversity has been demonstrated in a subset of patients with early-stage haematopoietic malignancy, and it has been shown that such clones may arise independently. It has also been observed that the time to disease progression and the overall survival after treatment were significantly shorter in patients with EGFR heterogeneity. Maley et al. have demonstrated that clonal diversity predicts progression to cancer and that the accumulation of viable clonal genetic variants poses a greater risk of progressing to cancer than a homogenizing clonal expansion. A mathematical model by Komarova et al.
shows that tumors thrive when cancerous cells mutate to speed up malignant transformation and then stay that way by turning the elevated mutation rate off. The interpretation of heterogeneity is crucial for the understanding of carcinogenesis. In extreme cases it can be interpreted either as noise hiding a common pattern, or as redundancy (all the cases are causative as a whole, and no common pattern exists). If interpreted as noise, the effort to filter it out by analyzing as many cancer cases as possible in order to see the common mechanism is justified. If, however, each sample is interpreted as a unique, nevertheless causative, set of genes, alternative approaches are needed. The above-mentioned studies at the genetic level indicate that heterogeneity should be interpreted in the latter way. They report that every tumor harbors a complex combination of low-frequency mutations thought to drive the cancer phenotype. Consequently, a strategy of studying the mechanisms of cancer by reducing heterogeneity may be assumed to be a flawed approach.
*Optimization behind.* Putting fitness landscape and heterogeneity into an optimization context, the wild-type genome represents the optimum solution in the respective past fitness landscape; its further optimization in an unchanged fitness landscape is, by definition, inhibited. After the fitness landscape has changed, optimization of the genome becomes possible. Regarding the structure of the fitness landscape, two fitness landscapes are sampled during the optimization, one for each unit of replication, organism or cell. As there are many cellular fitness evaluations during the organism's lifetime, only the cellular fitness landscape may be sampled representatively enough to provide reliable schema fitnesses ([schematheorem]), which result in an optimal allocation of trials driving the short-time evolution of the genome into an optimum of the changed cellular fitness landscape. The organismic fitness landscape, which selects for intercellular cooperation, does not apply during the lifetime of the body, and the optimization process is driven purely by the cellular fitness landscape, for which intercellular cooperation is not a selectable trait. From this point of view, any short-scale change of the fitness landscape is not only mutagenic but also carcinogenic, as it selects for the destruction of intercellular cooperation. Applying the quasispecies model, Forster and Wilke have demonstrated that the competitive dynamics of finite populations of as few as two strains, adapted to long-term and short-term environmental changes respectively, is quite complex. Heterogeneity represents a crucial aspect of carcinogenesis. At the same time, in engineering applications, evolutionary optimization starts with a heterogeneous, typically randomly generated, initial population of candidate solutions. In the case of stationary fitness landscapes, heterogeneity decreases towards some minimum level as the optimization procedure converges to the best solution (the analogy with a homogenizing clonal expansion suggests itself), despite a constant mutation rate. On the other hand, evolutionary optimization in changing fitness landscapes shows the importance of avoiding total homogenization.
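The following toy simulation illustrates this point. It is a sketch of ours, with arbitrary string length, population size and mutation rate, not an experiment reported in the original text; it tracks the number of distinct genotypes (a crude heterogeneity measure) while a population converges under a stationary landscape, and then after the landscape is suddenly changed.

```python
import random

def evolve(target, population, p_m=0.01, generations=60):
    # Selection toward `target` under a constant bit-flip mutation rate;
    # returns the evolved population and the per-generation heterogeneity
    # (number of distinct genotypes).
    history = []
    for _ in range(generations):
        fits = [2.0 ** sum(b == t for b, t in zip(ind, target)) for ind in population]
        total = sum(fits)
        new_pop = []
        for _ in range(len(population)):
            r, acc, parent = random.uniform(0, total), 0.0, population[-1]
            for ind, f in zip(population, fits):
                acc += f
                if acc >= r:
                    parent = ind
                    break
            new_pop.append([1 - b if random.random() < p_m else b for b in parent])
        population = new_pop
        history.append(len({tuple(ind) for ind in population}))
    return population, history

random.seed(0)
length, size = 30, 80
pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(size)]

pop, stationary = evolve([1] * length, pop)   # stationary landscape: heterogeneity shrinks
pop, changed = evolve([0] * length, pop)      # landscape change: the old optimum is now the worst point
print("distinct genotypes, stationary landscape:", stationary[0], "->", stationary[-1])
print("distinct genotypes, after the change:   ", changed[0], "->", changed[-1])
```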
In computer experiments where the mutation rate is not exempted from the optimization, its increase (followed by an increase of heterogeneity) is observed after the fitness landscape has changed. It has been reliably demonstrated that rapid or extreme environmental change leads to selection for greater evolvability. Similarly, selection of mechanisms for an increased mutation rate in biological systems living in unstable environments, such as RNA viruses, has been reported. Donaldson-Matasci et al. showed that the optimal amount of diversity depends on environmental uncertainty, which can lead to the evolution of either a generalist or a specialist strategy.
Cancer-susceptibility genes are classified as caretakers, gatekeepers and landscapers. Mutations in caretakers lead to genomic instability, mutated gatekeepers are responsible for increased cellular proliferation, and landscaper defects generate an abnormal stromal environment. In general, the cancer-susceptibility genes govern the statistics of the cell population, either directly (caretakers and gatekeepers) or indirectly by maintaining the fitness landscape (landscapers). Within the framework of evolutionary theory it is understood that heterogeneity confers on the cancer cell population the ability to cope with environmental uncertainties. Optimization theory derives the efficiency of an optimization method from its ability to allocate future trials appropriately. The schema theorem ([schematheorem]) guarantees that an at least exponentially increasing number of trials is given to the observed best building blocks. Implicitly, the optimal allocation of trials between alternative solutions requires schema fitness estimates that are as reliable as possible. In addition, as evolving clones implicitly undergo competition, the schema fitnesses must be determined as fast as possible. For that, representative statistics of the population (with respect to the relevant fitness landscape) must be at hand. The ability of a clone to evolve (or not) towards representative statistics comes from specific defects in cancer-susceptibility genes.
Causality in evolutionary processes is actually provided by the feedback from the environment. The evolutionary process is a fitting procedure, which is the method of solving (typically ill-posed) inverse problems. The enormous genetic heterogeneity of cancers indicates that most cancer occurrences are unique solutions of the fitting problem. This implies that the fitting problem solved by cancer is highly underdetermined, which results in the arbitrariness of a fit (i.e., of a model), and it is consistent with the metaphorical conclusion by Witz and Levy-Nissenbaum, who stated "... the extreme complexity of the signaling cascades operating in the microenvironment and the interactive cross-talk between these cascades, generates the feeling that anything that can happen - it will".
Traditional therapies are based on comparisons of cancerous and non-cancerous cells, which, by definition, presumes the existence of a sufficiently reliable (in the ideal case dichotomic) splitting into the two respective groups. Consequently, therapeutic actions are taken to attack the tumor cell group (the cancer cell-kill paradigm). It is implicitly believed that therapeutic efficiency depends on how close to dichotomic the splitting is. For instance, the two main therapeutic treatments, chemotherapy and radiation, exploit the enhanced sensitivity of cancer cells to DNA damage.
Novel targeted and gene therapies go even further: they aim to interfere directly with the specific molecules or genes participating in carcinogenesis (the magic bullet concept). The effort to find the criterion (or criteria) that makes it possible to approach dichotomic splitting as closely as possible is omnipresent in cancer therapy. Varshavsky proposed a therapy which distinguishes cancer and normal cells according to whether or not they harbor homozygous DNA deletions. Skordalakes points out that inappropriate activation of a single enzyme, telomerase, is associated with the uncontrollable proliferation of cells observed in as many as 90% of all human cancers, and proposes that the high-resolution structure of the enzyme will be the key to efficient anti-cancer therapies. However, the putative existence of a dichotomic splitting is in contradiction with the evolutionary nature of carcinogenesis, which, like any other evolutionary process, crucially depends on the variability of traits observed at many levels. Extreme tumor cell heterogeneity gives cancer its robustness, exemplified by resistance to therapy, and it is the most tormenting problem in cancer research that therapies and experimental models must face. Heng et al. emphasize the key role of heterogeneity by stating that without heterogeneity there would be no cancer.
Below we present specific insights and implications for anti-cancer therapy stemming from the optimization view of carcinogenesis presented above. Some of them are intuitive and consistent with established anti-cancer therapies; others are quite counterintuitive and, hopefully, novel, and call into question some current trends in the development of anti-cancer therapies. Within the frame of the above identification of carcinogenesis with an evolutionary optimization process, therapy is a purposeful effort to decrease the efficiency of that optimization process or, hopefully, to inhibit it completely. For that purpose, we have listed above the three most frequent obstacles to efficient evolutionary optimization, stemming from the validity of the schema theorem ([schematheorem]). These are: too large sampling errors, dynamic (or changing) fitness landscapes, and deceptiveness of the fitness landscape. In all these cases the estimation of the schema fitnesses is unreliable (or systematically wrong), which prevents the optimization process from allocating its trials optimally.
_i) Too large sampling errors._ It is understood that heterogeneity plays a central role in evolution and provides species (or clones) with the capacity to cope with environmental uncertainty. On the other hand, if it exceeds a certain threshold, deleterious effects outweigh this selection advantage. The existence of a critical mutation rate in evolution, beyond which Darwinian selection does not operate, has been predicted by Eigen's theory of quasispecies. Solé and Deisboeck applied a simple mathematical model of quasispecies dynamics to quantify the upper limit of affordable genetic instability (the error threshold) in a cancer cell population, beyond which genetic information is lost. Consistently with the fact that tumor cells have defective stability pathways, Cahill et al.
proposed that tumor cells could be a target for direct attack by instability drugs. However, from the point of view of evolutionary optimization theory, the competitiveness of a clone depends on its capability to allocate its further trials among emerging alternatives, which requires representative statistics of the cell population, not merely a specific genetic (in)stability. We therefore speculate that a forced increase of sampling errors by instability drugs, abruptly shifting the population statistics away from the optimum in the respective fitness landscape, would be compensated by selection for changes in other evolutionary attributes, such as the reproduction rate, the cellular mortality rate, or internal stability (the mechanism does not matter at this point).
_ii) Dynamic fitness landscape._ Changing the fitness landscape can be a double-edged sword. On the one hand, cancer cells reveal increased adaptivity enabling them to respond to environmental changes and keep a high (reproductive) fitness. On the other hand, the higher adaptivity of cancer cells can be exploited therapeutically, as outlined by Maley et al., who proposed to select for cells sensitive to cytotoxins before applying cytotoxic therapy.
_iii) Deceptiveness of the fitness landscape._ Deceptive landscapes can be interpreted as landscapes in which correlations of traits systematically lead the search away from the global optima. To our knowledge, there is no therapeutic approach explicitly exploiting the deceptiveness of the fitness landscape. We anticipate that combining biological intuition, the results of the mathematical analysis of deceptive fitness landscapes, and digitized evolution can bring novel insights into the evolution of the cancer phenotype.
*Is therapy a penalty function?* From the evolutionary optimization point of view, therapy is a purposeful change of the fitness landscape, namely a decrease of the reproduction fitness in the relevant area of the search (sequence) space on reasonable time scales. All the well-established traditional therapies (surgery, radiotherapy and chemotherapy) make an effort to remove all the cancer cells or, at least, as many of them as possible. Evolutionary optimization theory implies that the ultimate therapeutic success depends not only on how many cancer cells survived the therapy, but also on the distribution of the surviving cells in the search space, i.e., on the statistics of the remaining population. If the population statistics are sufficient for efficient optimization, regrowth appears. Below we present a possible counterintuitive consequence of therapy resulting from the optimization facet of carcinogenesis. It has been reported that therapy-surviving tumor cells are frequently more malignant and aggressive than the initial tumor population. Inhibition of angiogenesis has long been envisioned as a promising anticancer therapeutic strategy. Since then, modes of resistance to antiangiogenic therapy, such as evasive and intrinsic resistance, have been reported. Páez-Ribes et al. found that targeting the vascular endothelial growth factor (VEGF) induces (apart from the anti-tumor effect on the primary tumor) higher invasiveness and, in some cases, increased lymphatic and distant metastasis. Ebos et al.
found that a VEGFR/PDGFR kinase inhibitor can accelerate metastatic tumor growth and decrease overall survival in mice receiving short-term therapy. Similarly, it has been reported that resistance to some synergistic drug combinations evolves faster than resistance to the individual drugs. In their review, Kim and Tannock report that the repopulation of cancer cells after radiotherapy as well as chemotherapy is often accelerated in comparison to untreated cases. The mechanism of this acceleration has not yet been understood. In the spirit of our work, we attribute the above increase of invasiveness and the accelerated evolution of resistance during repopulation to the optimization facet of carcinogenesis. In engineering applications of evolutionary optimization, one often applies an _ad hoc_ penalty function to disadvantage some parts of the fitness landscape and thereby accelerate convergence of the process to the optimum in the desirable parts (figure [penalty]). The simplification of the fitness landscape makes it possible to perform a more representative schema sampling of the more promising parts at the same cost, obtaining more reliable schema fitness evaluations and resulting, according to ([schematheorem]), in a closer-to-optimum allocation of trials among alternative solutions. If cancer, metaphorically speaking, solves an optimization problem, the same mechanism applies. We hypothesize that if therapy does not remove a decisive portion of the cancer cells (ideally all of them), it may eventually result in an unwanted simplification of the fitness landscape for the therapy-resistant clone(s). We emphasize that this hypothesis is aimed purely at interpreting the sometimes reported cases in which accelerated progression of therapy-resistant tumors was observed, and it does not propose any alternative to well-established therapies.
Recent experimental evidence shows that heterogeneity, stochasticity and dynamics play a much more important role in carcinogenesis than envisioned a few decades ago. This new picture requires a corresponding conceptual framework. The evolutionary optimization view of carcinogenesis presented here implicitly includes the connection between the statistics of the cell population and the statistics of the fitness landscape, and applies the results of long-standing research on stochastic evolutionary optimization algorithms, especially in dynamic fitness landscapes. Here we have put some of the observed cancer features, such as increased heterogeneity, clonal expansion, the consequences of changing the fitness landscape, and the accelerated evolution of resistance to chemotherapy, into an optimization scenario. Carcinogenesis is, unquestionably, a physical process. At the same time, it can be formally viewed, as can all evolutionary processes, as an optimization procedure. Straightforward approaches study carcinogenesis and develop anticancer strategies by analyzing biochemical or genetic details. In this paper we have speculated that these details may not be relevant _per se_.
Instead, we have proposed that cancer relates primarily to the statistics of the cell population, and that all therapies lead (more or less intentionally or explicitly) to its modification. Traditional therapies rely on a comparison between cancerous and non-cancerous cells, which may be motivated by the long-lasting effort to reduce the cancer cell population by some straightforward action. The evolutionary view suggests that carcinogenesis could be inhibited by a purposeful modification of evolutionary attributes, such as the mutation rate, the effective population size, or the generation time of the self-renewing cells. Nevertheless, except for trivial cases, evolutionary theory does not give an instructive enough answer as to how the evolutionary attributes should be changed. The optimization view of carcinogenesis presented here proposes that the crucial mechanism of cancer progression is, as in any other evolutionary optimization process, the optimal (or, more realistically, better than that of other clones) allocation of trials, based on more representative population statistics enabling more reliable estimates of schema fitnesses ([schematheorem]). The efficiency of the schema sampling depends on the number of sampled points and their distribution in the fitness landscape, as well as on the cell's fitness estimation time. These attributes adapt to the statistical features of the fitness landscape through the selection of the respective mutations in genes (a posteriori denoted as cancer-susceptibility genes). As, at the same time, the efficiency of the sampling determines the cancer's prospects, we conclude that the therapeutic outcome could be influenced by manipulating the statistical properties of the fitness landscape, such as its roughness or dynamics, in a purposeful, cancer-inhibiting way. This statistical view may be especially relevant for advanced malignancies, where the high heterogeneity of the cancer cell population enables the cells to adapt successfully to a therapeutically changed environment. Classifying carcinogenesis as an evolutionary optimization process does not contradict the often presented view of cancer as the result of accumulating specific mutations in a single transformed cell. It emphasizes, however, the importance of combining molecular data with a statistical view, which may play a crucial role before and during carcinogenesis. The principal question remains whether the novel conceptual framework can be exploited to trigger a novel, explicitly anti-optimization-based, therapeutic approach.
The authors acknowledge financial support from VEGA, Slovak Republic (grants 1/4021/07, 1/0300/09) and APVV (grant LPP-003006). D.H. acknowledges financial support by the Le Studium fellowship of the Region Centre and the Centre National de la Recherche Scientifique.
Taniguchi K, Okami J, Kodama K, Higashiyama M, Kato K. Intratumor heterogeneity of epidermal growth factor receptor mutations in lung cancer and its correlation to the response to gefitinib. Cancer Sci 2008;99:929-935.
Maley CC, Reid BJ, Forrest S. Cancer prevention strategies that address the evolutionary dynamics of neoplastic cells: simulating benign cell boosters and selection for chemosensitivity. Cancer Epidemiol Biomark Prev 2004;13:1375-1384.
Clune J, Misevic D, Ofria C, Lenski RE, Elena SF, Sanjuán R. Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes. PLoS Comput Biol 2008;4:e1000187.
_Any process in which competing solutions replicate with errors, and in which the numbers of their copies depend on their respective fitnesses, is an evolutionary optimization process. As mutated genomes replicate during carcinogenesis according to their respective qualities, carcinogenesis qualifies as an evolutionary optimization process and conforms to the same mathematical basis. The optimization view accentuates the statistical nature of carcinogenesis, proposing that the crucial role is actually played by the allocation of trials. An optimal allocation of trials requires reliable schema fitness estimates, which in turn necessitate appropriate, fitness-landscape-dependent population statistics. In the spirit of the applied conceptual framework, features which are known to decrease the efficiency of any evolutionary optimization procedure (or to inhibit it completely) are considered as "therapies" and reviewed. Strict adherence to the evolutionary optimization framework leads us to some counterintuitive implications which are, however, in agreement with recent experimental findings, such as the sometimes observed more aggressive and malignant growth of therapy-surviving cancer cells._
Interference resulting from quantum coherence causes an abundance of effects that contradict our classical intuition. Most people would probably be inclined to doubt both the interference of independent photons and that of mesoscopic molecules if there were no clear experimental evidence for the existence of both effects; and although interference phenomena in quantum mechanical systems have been observed for more than a century, we can still find large missing pieces in our understanding of quantum coherence. That a coherent superposition of at least two path alternatives (two state vectors in a more general, abstract description) is necessary for an _interference pattern_ to emerge, and that the achievable contrast increases with the number of states that are coherently superposed, is one of the best established notions of elementary physics. However, going beyond this qualitative observation, our intuition is typically not able to answer the question of how many path alternatives are needed to generate a particular interference pattern with reduced contrast. The overall aim of this paper is to explore the information content stored in the interference pattern and to develop a framework which addresses this question.
The formal definition of quantum coherence requires a set of mutually orthogonal states with respect to which coherence is defined. In an interferometric situation these states correspond to the different path alternatives, and the number of paths that are taken coherently is often referred to in terms of the lateral coherence length. In the case of molecular networks one is typically interested in the number of chromophores over which an excitation is coherently distributed, so that coherent delocalization is defined in terms of the excited states of the individual chromophores; and in transport theory or quantum thermodynamics this reference basis is given by the energy eigenstates. In general, a pure state is considered k-coherent in terms of a given set of basis states if at least k of its amplitudes are non-vanishing. Since decoherence processes, which are unavoidably present in actual physical situations, result in the deterioration of quantum coherence, a description in terms of mixed states or density matrices becomes necessary.
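For pure states, the definition just given is easy to operationalize. The following sketch is ours and simply counts non-vanishing amplitudes in the chosen reference basis; the tolerance is an arbitrary numerical choice.

```python
import numpy as np

def coherence_rank(psi, tol=1e-12):
    # Number of non-vanishing amplitudes of a pure state in the reference basis;
    # the state is k-coherent for every k up to this value.
    psi = np.asarray(psi, dtype=complex)
    return int(np.sum(np.abs(psi) > tol))

print(coherence_rank([1 / np.sqrt(2), 1 / np.sqrt(2), 0.0]))   # 2: a 2-coherent state
```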
As every mixed state can intuitively be understood as an average over pure states, averaging over _incoherent_ states will not result in any interference phenomena. Consequently, any mixed state that can be decomposed into a mixture of incoherent states is considered incoherent. Analogously, every mixed state that can be expressed as an average over pure states with no more than k-coherence is _not_ k-coherent. This motivates the commonly employed definition that a mixed state is k-coherent if any ensemble decomposition of it, for any set of probabilities, contains at least one k-coherent state vector. The notion of k-coherence is thus similar to the concept of multipartite entanglement, since a mixed state is called k-partite entangled if any of its ensemble decompositions involves at least one k-partite entangled pure state.
Quantum coherence has recently been recognized as a resource, in the sense that there are processes whose realization is facilitated by the consumption of coherence. Various tools known from entanglement theory have thus been adapted for the classification and quantification of quantum coherence. Reconstruction of the complete density matrix is required to assess most of these tools, and only a few schemes work with fewer observables to be measured. On the one hand, this poses a rather high threshold for the analysis of coherence in laboratory experiments; on the other hand, the abstract nature of these tools limits the intuition that might be gained from their use. We will strive for the identification of quantum coherence based on the interference pattern only. If an interference pattern can be decomposed into a sum of patterns resulting from k-path interference, then this pattern does not permit conclusions about coherence beyond k-coherence (as depicted in Fig. [fig:interferencemixed]). We will therefore identify (in Section II) properties of interference patterns ensuring that such a decomposition is not possible. In Section III we provide numerical evidence underlining the performance of the tools developed in Section II. The numerical studies have been designed to capture major practical issues, such as difficulties with the proper identification of the maximum of a complicated interference pattern, or its coarse graining.
We consider a rather general physical situation in which a superposition of different states is being established, and a certain level of decoherence results in this superposition not being perfectly coherent. A specific realization of such a situation is a Mach-Zehnder type of interferometer, as schematically depicted in Fig. [fig:interference], where the different path alternatives define the basis states. An incoming object impinges on a beam splitter (BS) that creates a coherent superposition of the basis states. The phase shifters permit the generation of an interference pattern that can be read off once the object has crossed the second beam splitter. The interference pattern is defined as the normalized probability distribution of observing an object in the monitored output mode, expressed in terms of the state just before it crosses the second beam splitter.
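As a concrete illustration of this definition, the following NumPy sketch evaluates the detection probability in one output mode of a balanced N-path interferometer as a function of the phases; the normalization convention and all symbol names are assumptions of ours for the purpose of the example.

```python
import numpy as np

def interference_pattern(rho, phases):
    # Detection probability in the monitored output mode of a balanced
    # N-path interferometer for the state rho entering the phase shifters:
    # p(phi) = (1/N) * sum_{j,k} rho_{jk} exp(i (phi_j - phi_k)).
    # The prefactor depends on the chosen normalization of the pattern.
    u = np.exp(1j * np.asarray(phases, dtype=float))
    return float(np.real(u.conj() @ np.asarray(rho) @ u)) / len(phases)

n = 3
scan = np.linspace(0.0, 2.0 * np.pi, 200)
incoherent = np.eye(n) / n                    # no coherence: the pattern stays flat at 1/N
psi = np.ones(n) / np.sqrt(n)
coherent = np.outer(psi, psi.conj())          # fully 3-coherent pure state: full contrast

for name, rho in [("incoherent", incoherent), ("3-coherent", coherent)]:
    values = [interference_pattern(rho, [phi, 0.0, 0.0]) for phi in scan]
    print(name, "min:", round(min(values), 3), "max:", round(max(values), 3))
```

Scanning a single phase already shows the qualitative statement from the text: the incoherent state produces a flat pattern at 1/N, while the fully coherent superposition reaches maximal contrast.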
Since on average only one out of N objects exits through the monitored output mode, the interference pattern is conveniently expressed in terms of an over-normalized state. The simplest case of two path alternatives corresponds to the original Mach-Zehnder interferometer, in which one can record the interference pattern by tuning a single phase shifter. In general, the interference pattern is obtained by tuning several phase shifters. Beyond the obvious increase of dimensionality, the structure of the pattern typically also becomes more complicated with a growing number of paths, since the dependence of the detection probabilities on the phases becomes more sensitive. Our aim shall thus be to capture a more global part of the desired information, one which is robust against small deviations of the tuned parameters. To this end we examine various _moments_ of the interference pattern in question.
One can certainly obtain some information on coherence from the maximum of the interference pattern, as its value, when larger than the corresponding threshold, unambiguously identifies the state as k-coherent. In practice, however, this is not necessarily the best choice. In particular, for highly coherent states, the interference pattern is a rapidly oscillating function, so that optimizations will often identify only local maxima, with a resulting underestimation of the coherence properties. Since, again for highly coherent states, the optimum is given by a very narrow peak, an extremely accurate reconstruction of the interference pattern becomes necessary. A much more practical alternative is to employ the uniform statistical moments of the pattern: the first moment is just the norm of the interference pattern, but the higher moments carry non-trivial information. One would expect that increasing the order of the moments improves the identification of k-coherence, because taking the limit of infinite order is equivalent to finding the maximum ([max]). On the other hand, the accuracy of the reconstructed pattern (e.g., from experimental data) required to assess a moment grows with its order. We therefore strive for an approach based on moments of reasonably low order, which are more robust against small deviations of the interference pattern. In order to find a good compromise between sensitive identification and robustness with respect to imperfections, we utilize _generalized moments_ defined in terms of a suitably chosen probability distribution over the phases. The simplest, uniform, choice reproduces the uniform moments of Eq. ([uniform]). In the opposite case, when the distribution is strongly localized around the maximum of the interference pattern, the value of the generalized moment approximates the maximum. This specific choice calls for the search for an optimal probability distribution, which might be flawed by the same issues as encountered for the maximum itself. With a sufficiently wide distribution, on the other hand, the optimization landscape is substantially flatter, which considerably eases the optimization. Using generalized moments of low order together with a distribution encoding additional information, namely the expected position of the maximum of the interference pattern, one can reasonably merge the advantages of both the interference peak and the regular statistical moments, avoiding the complications brought by each of the two approaches alone. Since the interference pattern is given in terms of all the phases as independent variables, it is reasonable to define the sampling distribution as a product of independent distributions, one for each phase.
Each phase is thus distributed according to a distribution of the same functional form, and we denote the expectation value of each of these distributions by its center. Although strictly speaking not necessary, we will assume equal widths for all distributions and denote their common standard deviation accordingly. The evaluation of the generalized moments requires the construction of so-called _trigonometric moments_, defined for any integer order. Because the phases are defined only on an interval of width 2π, this step can be carried out explicitly for most typically employed distributions, like the Lorentz or Gauss distributions. To this end, it is helpful to take advantage of a _wrapped_ version of a distribution. In the case of the wrapped normal distribution, the trigonometric moments are equal to the characteristic function of the normal (unwrapped) distribution evaluated at integer arguments. With the help of this function one can perform the integration and express the generalized moments in closed form.
As argued above, the use of low-order moments is desirable; we will therefore focus on the lowest orders in the following, although there is no fundamental obstacle to generalizations to higher orders. Before one can use these moments to rigorously identify coherence properties, one needs to find the maximum value they can adopt for k-coherent states. In the present case such an optimization can be done explicitly, confirming that the maximum among all k-coherent states is provided by a perfectly balanced coherent superposition of k basis states. To arrive at this conclusion, one may first realize that the generalized moments are convex with respect to the density matrix. This is a direct consequence of two facts: the power of a linear functional is convex for exponents of at least one, and the integral preserves convexity since the sampling distribution is non-negative. Since states that are at most k-coherent (for any value of k) define a convex set, the maximum of a generalized moment over k-coherent density matrices is always reached for a pure state. The most general k-coherent pure state can be written by assuming (without loss of generality) that exactly the first k basis states appear with non-vanishing weights in the coherent superposition. In Appendix [sec:derivation-of-threshold-values] it is shown that the optimization over the phase factors can be performed independently of the optimization over the real amplitudes, and that the maximum is obtained if the phases coincide with the expectation values of the sampling distributions. The remaining optimization over the amplitudes can also be performed very generally. As further shown in Appendix [sec:derivation-of-threshold-values], the quantity to be optimized is a Schur-concave function, which is maximized by the balanced superposition. Summarizing the above considerations, the maximum value that a generalized moment can adopt for k-coherent states with a given sampling distribution is characterized by the coefficients given in Table [table1]; any excess over this threshold value is an unambiguous identification of coherence properties beyond k-coherence. Table [table1] lists the coefficients that characterize the threshold values for the lowest moment orders. The results of our numerical analysis are presented in Table [tab:detection_ratios_random_states].
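Before discussing those results, note that the generalized moments can also be estimated numerically. The sketch below is an illustration of ours: instead of the analytical route via the trigonometric moments used in the text, it draws the phases from independent normal distributions (wrapping is irrelevant here because the pattern is 2π-periodic in each phase) and averages powers of the pattern; the pattern of a balanced pure state is hard-coded for the example.

```python
import numpy as np

def balanced_pattern(phases):
    # Interference pattern of a fully coherent, balanced pure state
    # (all density-matrix entries equal to 1/N): p(phi) = |sum_j e^{i phi_j}|^2 / N^2.
    u = np.exp(1j * np.asarray(phases, dtype=float))
    return float(abs(u.sum()) ** 2) / len(phases) ** 2

def generalized_moment(pattern, order, mu, sigma, num_paths, samples=20000, seed=1):
    # Monte Carlo estimate of E_q[p(phi)^order], where every phase is drawn
    # independently from a normal distribution centered at mu with width sigma.
    rng = np.random.default_rng(seed)
    phases = rng.normal(mu, sigma, size=(samples, num_paths))
    values = np.array([pattern(phi) for phi in phases])
    return float(np.mean(values ** order))

# Narrow sampling around the peak approaches the maximum of the pattern;
# wide sampling approaches the uniform moments.
for sigma in (0.2, 1.0, 3.0):
    m2 = generalized_moment(balanced_pattern, order=2, mu=0.0, sigma=sigma, num_paths=3)
    print(f"sigma={sigma}: second generalized moment ~ {m2:.3f}")
```

Comparing such an estimate against the tabulated threshold for a given k is then the detection criterion described above.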
Comparing with the detection ratios in the suboptimal setting (a positive offset of the distribution center), we confirm the previous observation that the performance of the moments with narrow distributions can be highly sensitive to the uncertainty in the position of the maximum. Similarly to the first ensemble investigated, the second and third moments with a positive width can significantly increase the detection ratio in this suboptimal setting.
In Section [sec2] we showed that the generalized moments are convex with respect to the density matrix, which further implies that their maxima are provided by pure states. The identification of the maximum value that a moment can take for mixed states with a given purity thus allows one to strengthen the detection of coherence in mixed states. As we explicitly demonstrate here, there are, however, strongly mixed states that yield values close to the achievable maximum. That is, the present approach is by no means limited to pure or weakly mixed states; it can indeed identify k-coherence also for substantially mixed states. In Appendix [sec:derivation-of-maximum] we show that the global maximum taken over all states with a given purity is obtained for the state defined in Eq. ([state1]). Moreover, any state of the form of Eq. ([state1]) with sufficiently low purity is at most k-coherent, which determines the smallest purity that permits the identification of k-coherence with the present moments. Fig. [fig:opt] depicts the numerically obtained, purity-dependent threshold values of k-coherence in a finite-dimensional system: the solid curve denotes the analytical result obtained for the state ([state1]), the symbols (triangles, rectangles, circles) depict the numerically obtained maxima of the moments over all at most k-coherent states with given purity (see the corresponding equation in the appendix), and the horizontal lines depict the corresponding pure-state values to guide the eye. As one can see, the numerically obtained maxima coincide with the analytical curve over part of the purity range; over the remaining range they are nearly constant, i.e., they increase only very slightly with purity. This means that there are rather highly mixed k-coherent states that yield values above the threshold for detecting k-coherence. The approach developed hitherto is thus able to identify coherence reliably even for rather strongly mixed states, even if no information on purity is available. Since the range of purities over which the threshold is nearly constant is larger (i.e., it includes lower purities) the smaller k is, this holds in particular for the identification of low coherence. That is, in particular for small k (as is typically the case in excitation transport), the present framework can detect coherence very well even for quantum states with a substantial degree of mixing; but if necessary, one may always resort to the purity-dependent threshold values in order to improve the detection. As we have seen, an interference pattern permits rigorous conclusions on coherence beyond the intuitive, qualitative expectation that the interference contrast grows with increasing coherence.
From a practical point of view, the freedom in the choice of sampling, as well as the possibility of including additional information (like purity), makes this approach flexible, so that it can be tailored to the specific properties of the system under investigation. That is, limitations on the experimentally variable quantities may be compensated through suitably chosen distributions, with variable widths that reflect the realistically achievable sampling. In particular, with noisy data that do not permit a reliable reconstruction of the entire interference pattern, the generalized moments of low order can still characterize coherence properties reliably. Here we have been considering the case of independently adjustable phases, but the underlying framework can be generalized also to the variation of fewer phases. Moreover, we developed a general framework which can utilize a wrapped version of an arbitrary probability distribution; while the normal distribution seems to be the most natural first choice, more sophisticated distributions may support a particular experimental realization even better. Beyond the conceptual connection between a directly observable interference pattern and the underlying, abstract coherence properties, the present approach thus provides a versatile method to characterize coherence properties in a wide range of systems. Finally, let us establish a general link between the generalized moments discussed in this paper and the commonly employed characterization of quantum coherence in terms of the l1-norm of coherence. Since the interference pattern satisfies a simple bound for any value of the phases, the l1-norm of coherence can be bounded in terms of the moments. It is thus not surprising that the interference pattern contains enough information to provide a valuable description of quantum coherence as well as practical criteria for the identification of k-coherence.
We thank Alexander Stibor and Björn Witt for interesting and helpful conversations. Financial support by the European Research Council under the project ODYCQUENT is gratefully acknowledged.
We start the derivation by inserting the k-coherent pure state given by ([kcohpure]) into the expression ([eq:final-expression-for-p_n]). An estimate applied in the second line implies that the maximum with respect to the distribution centers is achieved when the peak position of the probability distribution coincides with the maximum of the interference pattern. In the next step we employ the concept of Schur-concavity: for any two vectors such that the first is majorized by the second, and for any Schur-concave function, the value of the function on the first vector is at least its value on the second. In the case of pure k-coherent states all amplitude vectors majorize the uniform vector, so to finish the proof we only need to show that the function defined in ([eq:p_n_res]) is Schur-concave for the moment orders considered. To this end it is sufficient to show that it satisfies the standard Schur-concavity condition on its partial derivatives. To proceed further we need an explicit form of both functions; we therefore define them through sums over all pairwise different indices. With this, and since all parameters are non-negative, we can treat each term in ([eq:g2])-([eq:g3]) separately; in the resulting expressions one of the sums additionally excludes the fixed index values.
From that equation one may see that the first of the two functions contains only terms proportional to three characteristic expressions. Since all these terms are separately non-positive, it is a sum of Schur-concave functions and is thus Schur-concave. The second function involves additional terms which are not of this form and are not Schur-concave by themselves; it is, however, possible to show that these terms are Schur-concave under an additional condition on the distribution width. For the wrapped normal distribution this condition is satisfied, so that the second function is Schur-concave as well.
Since the generalized moments ([eq:final-expression-for-p_n]) are real and non-negative, we have an upper estimate whose right-hand side does not depend on the center of the distribution, so the same upper bound applies to the maximum of the moment (maximized with respect to the center of the distribution). The right-hand side of ([b1]) saturates if the phases of all matrix elements vanish, so that the density matrix has only real and non-negative entries. Using the inequality of arithmetic and geometric means we obtain further estimates; since the purity is assumed fixed, this also implies that the maximum of ([identity]), as well as the maxima of ([13]), ([13-1]) and ([16]), are provided by the uniform distribution of diagonal elements. This observation immediately leads to a state-independent maximum. The above maximum may be saturated only when the inequalities used in ([13]), ([13-1]) and ([16]) saturate too; this last conclusion proves Eq. ([state1]), showing that the maximum is always global.
The next step is to determine when the state ([state1]) does not happen to be k-coherent; for suitable purities the global maximum is attained by a k-coherent pure state. We start with the following observation: to realize any k-coherent state it is sufficient to consider a mixture of arbitrary (not normalized) positive semi-definite matrices, each supported on a unique set of k different indices taken from the d basis states. We now construct such a state with all diagonal elements equal and all off-diagonal elements equal. The maximal value is provided by the case when, for every block, all diagonal elements are equal to a common value and all corresponding off-diagonal elements are given by a common value b. From combinatorial considerations we thus find that the maximized quantity is bounded by
\[
\frac{1}{d}+d(d-1)\left[\binom{d-2}{k-2}\right]^{2}b^{2}
\;\leq\;
\frac{1}{d}+d(d-1)\left[\binom{d-2}{k-2}\right]^{2}h^{2}
\;=\;
\mathcal{P}_{k}.
\]
In this way we have recovered the range given in the main text.
C. K. Hong, Z. Y. Ou, and L. Mandel, Phys. Rev. Lett. 59, 2044 (1987).
O. Nairz, M. Arndt, and A. Zeilinger, Am. J. Phys. 71, 319 (2003).
P. Facchi, J. Mod. Opt. 51, 1049 (2004).
S. Eibenberger, X. Cheng, J. P. Cotter, and M. Arndt, Phys. Rev. Lett. 112, 250402 (2014).
F. Fassioli, R. Dinshaw, P. C. Arpin, and G. D. Scholes, J. R. Soc. Interface 11, 20130901 (2014).
T. Scholak, F. de Melo, T. Wellens, F. Mintert, and A. Buchleitner, Phys. Rev. E 83, 021912 (2011).
M. O. Scully, M. S. Zubairy, G. S. Agarwal, and H. Walther, Science 299, 862 (2003).
M. Lostaglio, K. Korzekwa, D. Jennings, and T. Rudolph, Phys. Rev. X 5, 021001 (2015).
D. Girolami, Phys. Rev. Lett. 113, 170401 (2014).
F. Levi and F. Mintert, New J. Phys. 16, 033007 (2014).
C. Smyth and G. D. Scholes, Phys. Rev. A 90, 032312 (2014).
R. Horodecki et al., Rev. Mod. Phys. 81, 865 (2009).
M. Cramer, T. Baumgratz, and M. Plenio, Phys. Rev. Lett. 113, 140401 (2014).
J. Åberg, Phys. Rev. Lett. 113, 150402 (2014).
V. Vedral, M. Arndt, and T. Juffmann, HFSP Journal 3, 386 (2009).
T. R. Bromley, M. Cianciaruso, and G. Adesso, Phys. Rev. Lett. 114, 210401 (2015).
D. P. Pires, L. C. Céleri, and D. O. Soares-Pinto, Phys. Rev. A 91, 042330 (2015).
J. Flusser, B. Zitová, and T. Suk, _Moments and Moment Invariants in Pattern Recognition_ (Wiley & Sons Ltd., 2009), Sec. 6.5, p. 204.
S. R. Jammalamadaka and A. SenGupta, _Topics in Circular Statistics_ (World Scientific, 2001).
K. Życzkowski and M. Kuś, J. Phys. A: Math. Gen. 27, 4235 (1994).
K. Życzkowski and H.-J. Sommers, J. Phys. A 34, 7111 (2001).
B. Witt and F. Mintert, New J. Phys. 15, 093020 (2013).
S. Wang, T. Zhang, and B. Xi, _Schur Convexity for a Class of Symmetric Functions_, in Information Computing and Applications (Springer Berlin Heidelberg, 2011).
A. W. Marshall and I. Olkin, _Inequalities: Theory of Majorization and Its Applications_ (Academic Press, 1979).
F. Levi, S. Mostarda, F. Rao, and F. Mintert, Rep. Prog. Phys. 78, 082001 (2015).
We develop a rigorous connection between the statistical properties of an interference pattern and the coherence properties of the underlying quantum state. With explicit examples, we demonstrate that, even for inaccurate reconstructions of interference patterns, properly defined statistical moments permit a reliable characterization of quantum coherence.
Information retrieval (IR) has nowadays become the focus of multidisciplinary research, combining mathematics, statistics, philosophy of language and of the mind, and cognitive science. In addition to these, it has recently been argued that IR researchers should be looking into particular concepts borrowed from physics. In particular, it was first suggested in 2004, in van Rijsbergen's pioneering manuscript "The Geometry of Information Retrieval", that quantum theory principles could be beneficial to IR. Even though quantum theory is extremely successful in a number of fields, the idea of giving a quantum look to information retrieval could at first be classified as unjustified euphoria. However, the main motivation for this big leap should be found in the powerful mathematical framework embraced by the theory, which offers a generalized view of probability measures defined on vector spaces. Events correspond to subspaces, and generalized probability measures are parametrized by a special matrix, usually called the _density matrix_ or _density operator_. From an IR point of view, it is extremely attractive to deal with a formalism which embraces probability and geometry, these being two of the pillars of modern retrieval models. Even if we believe that a unification of retrieval approaches would be out of reach due to the intrinsic complexity of modern models, the framework of QT could give interesting outlooks and changes of perspective, thus fostering the design of new models. The opening lines of van Rijsbergen's manuscript perfectly reflect this interpretation: "it is about a way of looking, and it is about a formal language that can be used to describe the objects and processes in information retrieval". To this end, the last chapter of van Rijsbergen's book is mainly dedicated to a preliminary analysis of IR models and tasks by means of the language of QT; amongst others, the author deals with coordinate-level matching and pseudo-relevance feedback. Since then, the methods that stemmed from van Rijsbergen's initial intuition have provided only limited experimental evidence about the real usefulness and effectiveness of the framework for IR tasks. Several proposed approaches took inspiration from key notions of the theory such as superposition, interference or entanglement. One line of work uses interference effects in order to model document dependence, thus relaxing the strong assumption imposed by the probability ranking principle (PRP). An alternative solution to this problem has been proposed in which a novel reranking approach uses a probabilistic model inspired by the notion of quantum measurement. In other work, documents are represented as subspaces and queries as density matrices; however, both documents and queries are estimated through passage-retrieval-like heuristics, i.e., a document is divided into passages and is associated with a subspace spanned by the vectors corresponding to the document passages. Different representations for the query density matrix were tested, but none of them led to good retrieval performance. In order to give a stronger theoretical status to QT as a necessary or more general theory for IR, some authors have stepped back to more theoretical considerations, exposing potential improvements achievable over state-of-the-art models.
in , the author shows how detection theory in qt offers a generalization of the neyman - pearson lemma ( npl ) , which is shown to be strictly linked to the prp .dramatic potential improvements could be obtained by switching to such more general framework .widdows observed that the vector space model ( vsm ) lacked a logic like the boolean model . through the formalism for quantum logic illustrated by birkoff and von neumann ,widdows defines a geometry of word meaning by expressing word negation based on the notion of orthogonality .recently , the work by melucci and van risjbergen and song et al . offered a comprehensive review of qt methods for ir along with some insightful thoughts about possible reinterpretations of general ir methods ( such as lsi ) from a quantum point of view .this paper shares the main purpose of the latter works . in the ending section of his book ,van rijsbergen calls for a reinterpretation of the language modeling ( lm ) approach for ir by means of the quantum framework . to our knowledge, such an interpretation has not been presented yet in the literature and this work can be considered as a first attempt to fill this gap .we provide a theoretical analysis of both lm and the vsm approach from a quantum point of view . in both models , documents and queriescan be represented by means of density matrices .a density matrix is shown to be a general representational tool capable of leveraging capabilities of both vsm and lm representations thus paving the way for a new generation of retrieval models . as a conclusion, we analyze the possible implications suggested by our findings .in qt , the probabilistic space is naturally encapsulated in a complex vector space , specifically a hilbert space , noted .we adopt the notation denotes a unit norm vector in and its conjugate transpose . ] to denote the standard basis vectors in . in qt, events are no more defined as subsets but as subspaces , more specifically as projectors onto subspaces . given a ket ,the projector onto is an elementary event of the quantum probability space , also called _dyad_. a dyad is always a projector onto a 1-dimensional space .generally , a unit vector , , , is called a _superposition _ of the where form an orthonormal basis for .a density matrix is a symmetric positive semi - definite matrix of trace one . in qt ,a density matrix defines the state of a system ( a particle or an ensemble of particles ) under consideration .gleason s famous theorem ensures that a density matrix is the unique way of defining quantum probability measures through the mapping .the measure ensures that .this is because , because is positive semi - definite .moreover , if form an orthonormal system for , the probabilities for the dyads sum to one , i.e. they can be understood as disjoints events of a classical sample space .given that , the identity matrix , we have .therefore , for orthogonal decompositions of the vector space of positive operators such that . the set is called positive - operator valued measure ( povm ) . therefore , the properties reported in this paper which apply to a complete set of mutually orthogonal projectors equally hold for a general povm . ] , a quantum probability measure reduces to a classical probability measure .any classical discrete probability distribution can be seen as a mixture over elementary points , i.e. 
a parameter , , .the density matrix is the straightforward generalization of this idea by considering a mixture over orthogonal dyads can not be easily interpreted as the probabilities assigned by the density matrix to each dyad .] , i.e. , . given a density matrix , one can find the components dyads by taking its eigendecomposition and building a dyad for each eigenvector .we note such decomposition by , where are the eigenvectors and their corresponding eigenvalues .this decomposition always exists for density matrices .note that the vector of eigenvalues belongs to the simplex of classical discrete distributions over points .if the distribution lies at a corner of the multinomial simplex , i.e. for some , then the resulting density matrix consists of a single dyad and is called _pure state_. in the other cases , the density is called _mixed state_. conventional probability distributions can be represented by diagonal density matrices . in this case, a classical sample space of points corresponds to the set of projectors onto the standard basis .hence , the density matrix corresponding to the multinomial parameter above can be represented as a mixture , . as an example, the density matrix below corresponds to a classical probability distribution with , is a pure state and is a general quantum density , a mixed state : the language modeling approach to ir , each document is usually assigned a unigram language model , i.e. a categorical distribution over the vocabulary sample space ( of size ) , , .a query is represented as a sequence of terms , sampled i.i.d .( independent and identically distributed ) from the document model .the score for a document is obtained by computing the likelihood for the query to be generated by the corresponding document model : this scoring function is generally called query likelihood ( ql ) . on the other hand ,kullback - leibler ( kl ) divergence models can be seen as a generalization of ql models introduced in order to facilitate the use of feedback information in language modeling framework . in kl - divergence models , both documents and queriesare assigned to unigram language models .the score for a document is calculated as the negative query to document kl - divergence : as presented in section [ sec : probability_overview ] , conventional probability distributions can be seen as diagonal density matrices .a straightforward quantum interpretation of the ql scoring function can be obtained by associating a diagonal density matrix to each document and consider a query as a sequence of dyads .formally , we associate the vocabulary sample space to the orthogonal set of projectors on the standard basis , .the density matrix for a document is a mixture over whose vector of weights corresponds to the parameter .therefore , .it is straightforward to show that restricted to , generates the same statistics as , i.e. : in the query likelihood view , the query is represented as an i.i.d .sample of word events . as word eventscorrespond to projectors onto the standard basis , we represent a query as a sequence of i.i.d .represents the state of a system , an i.i.d .set of quantum events is obtained by performing a measurement on different copies of and by recording the outcomes . 
]quantum events belonging to , .therefore , the score for a document is computed by the following product : which indeed corresponds to the classical ql scoring function .however , we shall stress on an important point about the equation above .if the projectors included in the query sequence are mutually orthogonal ( as above ) , the calculation above behaves as a proper classical likelihood , i.e. the sum of the likelihoods of all possible samples of length is one . on the contrary ,the product can not be considered as a classical likelihood in general because quantum probabilities for arbitrary events does not need to sum to one .further considerations on these issues will be made in section 6 .the kl scoring function computes a divergence between a query language model and document language model . in qt ,the kl - divergence is a special case of a more general divergence function acting on density matrices called von - neumann ( vn ) divergence .note , and the eigendecompositions of two arbitrary density matrices . in the following ,the log function applied to a matrix refers to the matrix logarithm , i.e. the natural logarithm applied to the matrix eigenvalues , .the vn divergence writes as : this divergence quantifies the difference in the eigenvalues as well as in the eigenvectors of the two density matrices . in order to see how the classical kl retrieval framework is recovered , we assign a density matrix to the query very similarly to what has been done for a document .precisely , and are diagonal density matrices such that and . as ( ) is diagonal in the standard basis ,its eigenvalues correspond to ( ) , thus : which corresponds to the kl divergence . as conventional probability distributionscorrespond to diagonal density matrices , their eigensystem is fixed to be the identity matrix .intuitively , kl divergence captures the dissimilarities in the way they distribute the probability mass on that eigensystem , i.e. by their eigenvalues .in this section , we are attempting to look at the vsm in a new way . in its original formulation, no probabilistic interpretation could be given because of the lack of an explicit link between vector spaces and probability theory .in the model , documents and queries are represented in the non - negative part of the vector space , where is the number of terms in the collection vocabulary . in vsm, each term corresponds to a standard basis vector .the location of each object in the term space is defined by term weights ( i.e. _ tf _ , _ idf _ , _ tf - idf _ ) on each dimension .similarity between documents and queries are computed through a vector similarity score , where are the vector representations of the query and the document . in ,the authors show that normalizing document vectors is important to reduce bias introduced by variance on document lengths . by normalizing both document vector and query vector, the similarity score reduces to the cosine similarity between the two vectors , which is an effective similarity measure in the model .from now on , we consider , the normalized ( ) query vectors .documents can thus be safely ranked by decreasing cosine $ ] , which can not be negative because the ambient space is . in this interpretation of the vsm, each document is associated to a probabilistic `` model '' in the same spirit of the language modeling approach .we define a density matrix for the document as , which is a pure state , i.e. 
its mixture weights are concentrated onto the projector .note that this density matrix does not have a statistical meaning .it has been determined by merely normalizing heuristic weighing schemes and it can not be related to a statistical estimators such as maximum likelihood ( mle ) . a query can be represented as the quantum event corresponding to the subspace spanned by .this subspace naturally corresponds to the dyad .hence , a query can be seen as the sequence of quantum events of length one . in thissetting , its likelihood given the document model is calculated by : the above calculation shows that the quantum `` likelihood '' assigned to the event by the density is the square of the cosine similarity between the query and the document . when restricted to the non - negative domain , the square function is a monotonic , increasing transformation .this means that , i.e. the two formulations lead to the same document ranking . according to the original vsm, queries and documents should share the same representation and the scoring function should be a distance measure between these representations . in the previous formalization, this initial paradigm seems apparently lost .the following alternative quantum interpretation of the vsm is perhaps closer to the original vision of the model .we associate a density matrix both to the document and to the query .specifically , those density matrices would be pure states , projectors onto the corresponding vectors , i.e. , .it turns out that computing the fidelity measure between density matrices produces a ranking function equivalent to cosine similarity : obtained by noting that is a projector thus , and . as , ranking by fidelity measure is equivalent to ranking by cosine similarity ,thus .in this section , we will try to summarize the commonalities and the differences arising from the quantum formalizations of the two models given in the preceding sections .the following analysis is succinctly reported in table 1 . as a starting point, we shall note that the ambient space for both models is the hilbert space , where is the size of the collection vocabulary .each standard basis vector is associated to a word event .therefore , the vocabulary sample space corresponds to the set of projectors onto the standard basis vectors . in query likelihood interpretations ,the query is represented as a sequence of i.i.d .dyads . in the vsm ,the sequence contains one dyad corresponding to the projector onto the query vector . on the contrary , in the lm approachthe sequence contains a dyad for each classical word event , i.e. . besides the number of dyads included in the sequence , a major difference distinguishes the two formalizations .contrary to probabilistic retrieval models such as lm , a query is not considered as a sequence of independent classical word events but as a single event and a particular kind thereof .the query event is a _ superposition _ of word events .this can be seen because the vector can be expressed , up to normalization , as where is the weight for term in the query vector .this kind of event neither can expressed using set theoretic operations nor it has a clear classical probabilistic interpretation : it does not belong to thus it can only be justified in the quantum probabilistic space .arguing further , we would say that , in the case of vsm , term weighting methods aim at estimating the `` best '' query event , i.e. 
the event which is the most representative for the information need of the user .intuitively , if a single choice would be given to us on what to observe , we would rather be observing in the `` direction '' of important words in the query .llll + & query & document & scoring + vsm & & & + lm & & & + + vsm & & & + lm & & & + it follows from the considerations above that vsm creates query representations by accessing the whole projective space through appropriate choices of . on the contrary ,lm `` sees '' , and consequently can handle , only events from the classical sample space . however , the principled probabilistic foundations of the model give the flexibility of adding an arbitrary number of such events in the sequence , thus refining query representation .in the next section , this kind of duality between vsm and lm approaches will be strengthened by analyzing the properties of the density matrices used in the two models . before continuing, we shall make one last consideration about the `` likelihood '' written in eq .this equation and its corresponding maximization algorithm have already been proposed by lvovsky et al . in quantum tomography applications in order to achieve a maximum likelihood estimation ( mle ) of a density matrix . as we have already pointed out, reduces to a classical likelihood if and only if the projectors in the sequence are picked from the same eigensystem .therefore , the product in its general form can not be understood as a proper likelihood .we believe that it would be interesting to focus future research in finding a proper likelihood formulation in the quantum case that would enable principled statistical estimation and bayesian inference ( see for a recent attempt in formulating a bayesian calculus for density matrices ) . visualized using the bloch sphere parametrization .highlighted in black are the region of used by lm ( to the left ) and vsm ( to the right ) . ] in the divergence view , a density matrix is associated both to the document and to the query and the scoring function is a divergence defined on the set of density matrices .valuable insights can be provided by noting that the models gain access to different regions within . as an example , in figure 1 , we plot the set using the well known bloch parametrization . highlighted in black are the regions of the space used by lm ( to the left ) and vsm ( to the right ) .distinct regions are likely to denote different representational capabilities . in the case of lm ,density matrices are restricted to be diagonal , i.e. mixtures over the identity eigensystem . for two density matrices to be different , one has to modify the distribution of the eigenvalues .therefore , lm ranks based upon differences in the eigenvalues between density matrices .the picture of the vsm approach appears as the perfect dual of the preceding situation .query and documents are represented by _ pure states _ ,i.e. dyads .whatever the dimensionality of the hilbert space , the mixture weights of these density matrices are concentrated onto a single projector . 
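to make the comparison concrete, the correspondence reported in table 1 can be written out in full. the symbols below are a shorthand we adopt here ( \(\rho_d,\rho_q\) for document and query densities, \(\theta_d,\theta_q\) for the unigram models, \(E_w=|e_w\rangle\langle e_w|\) for the projector onto the basis vector of term \(w\), and \(|d\rangle,|q\rangle\) for the normalized vsm vectors ), and the assignment of the two blocks to the query likelihood view and the divergence view follows the surrounding discussion.

\[
\mu_\rho(P)=\operatorname{tr}(\rho P),\qquad
\rho_d^{\mathrm{lm}}=\sum_w \theta_{d,w}\,E_w \ (\text{diagonal}),\qquad
\rho_d^{\mathrm{vsm}}=|d\rangle\langle d| \ (\text{pure}),\qquad
|q\rangle \propto \sum_w w_q(w)\,|e_w\rangle .
\]
\[
\text{query likelihood view:}\quad
\prod_{i=1}^{m}\operatorname{tr}\!\big(\rho_d^{\mathrm{lm}}E_{q_i}\big)=\prod_{i=1}^{m}\theta_{d,q_i},
\qquad
\operatorname{tr}\!\big(\rho_d^{\mathrm{vsm}}\,|q\rangle\langle q|\big)=|\langle q|d\rangle|^{2}=\cos^{2}(q,d).
\]
\[
\text{divergence view:}\quad
\Delta_{\mathrm{vn}}(\rho_q\|\rho_d)=\operatorname{tr}\!\big[\rho_q(\log\rho_q-\log\rho_d)\big],
\qquad
F(\rho_q,\rho_d)=\operatorname{tr}\sqrt{\sqrt{\rho_q}\,\rho_d\,\sqrt{\rho_q}}\,,
\]
where, for diagonal (commuting) \(\rho_q,\rho_d\), the vn divergence reduces to the classical kl divergence \(\sum_w\theta_{q,w}\log(\theta_{q,w}/\theta_{d,w})\), and for pure states the fidelity reduces to \(|\langle q|d\rangle|\), i.e. the cosine.

\begin{center}
\begin{tabular}{llll}
 & query & document & scoring \\
vsm & one dyad $|q\rangle\langle q|$ & pure state $|d\rangle\langle d|$ & $\operatorname{tr}(\rho_d|q\rangle\langle q|)=\cos^2(q,d)$ \\
lm  & i.i.d.\ dyads $E_{q_1},\dots,E_{q_m}$ & diagonal mixture $\sum_w\theta_{d,w}E_w$ & $\prod_i\operatorname{tr}(\rho_d E_{q_i})$ \\[4pt]
vsm & pure state $|q\rangle\langle q|$ & pure state $|d\rangle\langle d|$ & fidelity $F(\rho_q,\rho_d)$ \\
lm  & diagonal $\sum_w\theta_{q,w}E_w$ & diagonal $\sum_w\theta_{d,w}E_w$ & vn divergence $\Delta_{\mathrm{vn}}(\rho_q\|\rho_d)$
\end{tabular}
\end{center}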
in order to be different , density matrices must be defined over different eigensystems .therefore , vsm ranks based on the difference in the eigensystem between query and document density matrices .the set of diagonal density matrices is represented in figure 1 ( left ) .any two antipodal points on the surface of the sphere correspond to a particular eigensystem .diagonal density matrices are restricted to the identity eigensystem .however , they can delve inside the sphere by spreading the probability mass across their eigenvalues .the black circle in figure 1 ( right ) highlights pure states with real positive entries .these naturally lie on the surface of the bloch sphere . in summary ,the vsm restriction to pure states leaves free choice on the eigensystem while fixing the eigenvalues .conversely , by restricting density matrices to be diagonal , i.e. classical probability distributions , lm leaves free choice on the eigenvalues while fixing the eigensystem .leveraging both degrees of freedom by employing the machinery of density matrices seems to be a natural step in order to achieve more precise representation for documents and queries .vsm and lm also differ in the choice of scoring functions .the former uses the fidelity measure which is a metric on .the latter uses an asymmetric divergence on .more insights into these differences are given in the next section , where we try to contextualize our considerations by referring to common ir issues and concepts .in , the author presents kl divergence models as `` essentially similar to the vector - space model except that text representation is based on probability distributions rather than heuristically weighted term vectors '' .the analysis done in the previous section extends this remark and highlights how vsm and lm leverage very different degrees of freedom by allocating different regions in .however , no clue is given about what should be the meaning of the eigensystems and the eigenvalues from an ir point of view , nor why controlling both could be useful for ir .we will try to give some perspective for the potential usefulness of the enlarged representation space . in basic bag - of - words retrieval models such as lm or vsm ,terms are assumed to be unrelated , in the sense that each term is considered to be an atomic unit of information .to enforce this view , lm associates to each term a sample point and the vsm a dimension in a vector space .our analysis showed that sample points correspond to dimensions in a vector space .the heritage left by lsi suggests that a natural interpretation for such dimensions is to consider them as _concepts_. in this work , we interpret projectors onto directions as concepts .because terms are considered as unrelated , the projectors onto the standard basis in form a _ conceptual basis _ in which each term labels its own underlying concept . from this point of view, lm builds representations of queries and documents by expressing uncertainty on which concept chosen from the standard basis represents the information need . on the contrary , vsm does not have the flexibility of spreading probability weights .however , it can represent documents and queries by a unique but arbitrary concept . in vsm ,the similarity score is computed by comparing how similar the query concept is to the document concept . in this picture , the cosine similarity reveals to be a measure of relatedness between concepts . 
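a small worked example, with numbers of our own choosing, makes the two degrees of freedom and the concept-relatedness reading of the cosine explicit.

\[
|d\rangle = 0.8\,|e_1\rangle + 0.6\,|e_2\rangle,\qquad
\rho_d^{\mathrm{vsm}}=|d\rangle\langle d|=\begin{pmatrix}0.64 & 0.48\\ 0.48 & 0.36\end{pmatrix},\qquad
\rho_d^{\mathrm{lm}}=\begin{pmatrix}0.7 & 0\\ 0 & 0.3\end{pmatrix}.
\]

both are unit-trace and positive semi-definite. the lm representation is locked to the term-concept eigensystem \(\{|e_1\rangle,|e_2\rangle\}\) and can change only its eigenvalues (here 0.7 and 0.3); the vsm representation has its eigenvalues frozen at (1,0) and can change only the direction of its single eigenvector, i.e. the concept \(|d\rangle\) it asserts. for a query concentrated on the first term concept, \(|q\rangle=|e_1\rangle\), the vsm score is \(\operatorname{tr}(\rho_d^{\mathrm{vsm}}|q\rangle\langle q|)=\langle q|d\rangle^{2}=0.64\), i.e. a cosine of 0.8 between the query concept and the document concept.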
in lm ,the score is not at all computed on concept similarity , but by considering how the query and the document spread uncertainty on the same conceptual basis . in order to see how this all could be instantiated ,let us suppose that compound phrases such as _ `` computer architecture '' _ express a different concept than _ `` computer '' _ and _ `` architecture '' _ taken separately . modeling interactions between terms has been a longstanding problem in ir ( for example , see ) .we conjecture that a very natural way to handle such cases stems from our analysis .assume that both _`` computer '' _ and _ `` architecture '' _ are associated to their corresponding single term concepts , i.e. , .the concept expressed by the compound could be associated to a superposition event where and is a weight function ( assuming normalization ) expressing how compound and single term concepts are related . in thissetting , the enlarged representation space turns out to be the perfect fit in order to express uncertainty on this set of concepts .one could build a density matrix associated both to a query and to a document assigning uncertainty to both single term concepts and compound concepts .this could be done , for example , by leveraging quantum estimation methods such as described in .as we have pointed out before , the vn divergence could be the right scoring function in order to take into account both divergences in uncertainty distribution and concept similarities .indeed , we have defined an ir model in this way .details can be found in .our experiments on several trec collections show that the model leads to higher effectiveness than the existing models ( in particular , lm ) . as a last remark, we shall point out that the accounts made until now do not need the whole machinery of complex vector spaces .we do not have a practical justification for the usefulness of vector spaces defined over the complex fields ( see for a discussion on these issues ) .however , we speculate that these could bring improved representational power and thus remains an interesting direction to explore .in this work , we showed how vsm and lm can be considered dual in how they allocate the representation space of density matrices and in the nature of their scoring functions . in our interpretation , vsm adopt a symmetric scoring function which measures the concept similarity .lm fixes the standard conceptual basis and scores documents against queries based on how they spread the probability mass on such basis . we argued that leveraging both degrees of freedom could lend a more precise representations of documents and queries and could be especially effective in modelling compound concepts arising from phrasal structures .this has been confirmed by another study .scott deerwester , susan t. dumais , george w. furnas , thomas k. landauer , and richard harshman .indexing by latent semantic analysis . in _ journal of the american society for information science _, 41:391407 , 1990 .d.song , m. lalmas , k. van rijsbergen , i. frommholz , b. piwowarski , j. wang , p. zhang , g. zuccon , p. bruza , s. arafat , l. azzopardi , a. huertas - rosero , y. hou , m. melucci and s. rger .how quantum theory is developing the field of information retrieval , in _ proc .of qi _ , 2010 .
in this work, we conduct a joint analysis of both vector space and language models for ir using the mathematical framework of quantum theory. we shed light on how both models allocate the space of density matrices. a density matrix is shown to be a general representational tool capable of leveraging the capabilities of both vsm and lm representations, thus paving the way for a new generation of retrieval models. we analyze the possible implications suggested by our findings.
open multiprocessing (openmp or omp) and message passing interface (mpi) are two strategies for using multiple processors for a single problem. the key difference between them is that in mpi, different nodes have their own memory and communicate with each other when needed; with openmp, the memory is shared between threads. here is an example. suppose we have a two-dimensional lattice with 4 sites in each direction, and we are using four nodes or threads, as shown below.

 1  2  3  4
 5  6  7  8

 9 10 11 12
13 14 15 16

suppose node/thread 1 corresponds to sites 1, 2, 5 and 6. in mpi, node 1 has information only about the sites on that node, namely 1, 2, 5 and 6. if it needs information about other sites, for example about sites 3 or 7, which are nearest neighbors of sites 2 and 6 respectively, it has to use communication routines. contrast this with openmp, where all threads have access to the data for all sites, but thread 1 does computations only for sites 1, 2, 5 and 6. table 1 summarizes the differences between the two strategies.

a trend towards shared memory parallel machines or clusters of symmetric multiprocessing (smp) nodes, rather than the older paradigm of massively parallel processing (mpp) machines, makes a study of openmp parallelism timely. openmp was designed to exploit certain characteristics of shared-memory architectures. the ability to directly access memory throughout the system (with minimum latency and no explicit address mapping), combined with very fast shared memory locks, makes shared-memory architectures best suited for supporting openmp. the advantage of openmp is that it is easier to program: unlike mpi, one does not have to worry about passing messages between nodes. in this paper, we study how openmp performs relative to mpi, and whether combining the two strategies gives better performance.

in this section, we give some details about how a c code that works for a single processor is changed to work on multiple threads. the number of threads is determined by an environment variable, ` omp_num_threads `. the code is executed serially, on a single thread, until a parallel construct is encountered; the construct is executed on multiple threads and then serial execution resumes.
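the contrast can be made concrete with a small self-contained sketch (our illustration, not milc code) of the 4 x 4 example above under openmp: the lattice lives in a single shared array, each thread updates only the sites assigned to it (whole rows here, for simplicity, rather than the 2 x 2 blocks of the example), yet it reads neighboring sites owned by other threads directly from shared memory, where an mpi version would have to exchange that data with explicit communication calls. the ` private ` and ` reduction ` clauses that appear are explained in the next section.

    #include <stdio.h>
    #include <omp.h>

    #define NX 4
    #define NY 4

    int main(void) {
        double phi[NY][NX], new_phi[NY][NX];
        double total = 0.0;
        int x, y;

        /* initialize the shared lattice: site i of the example holds the value i+1 */
        for (y = 0; y < NY; y++)
            for (x = 0; x < NX; x++)
                phi[y][x] = (double)(y * NX + x + 1);

        /* each thread is assigned a block of iterations, but every thread can read any
           site directly from shared memory; with mpi, reading a site owned by another
           process (e.g. site 3 or 7 from node 1) would require message passing. */
    #pragma omp parallel for private(x) reduction(+:total)
        for (y = 0; y < NY; y++) {
            for (x = 0; x < NX; x++) {
                double up    = phi[(y + NY - 1) % NY][x];
                double down  = phi[(y + 1) % NY][x];
                double left  = phi[y][(x + NX - 1) % NX];
                double right = phi[y][(x + 1) % NX];
                new_phi[y][x] = 0.25 * (up + down + left + right);
                total += new_phi[y][x];
            }
        }

        printf("sum over all sites after one sweep: %g (max openmp threads: %d)\n",
               total, omp_get_max_threads());
        return 0;
    }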
to define a parallel construct, lines beginning with ` # pragma omp ` are added to the code. such pragmas are ignored by the usual c compiler, so the code may also be run as an ordinary serial code. they are, however, interpreted by an openmp compiler to identify parallel regions. there are several constructs that can be made to execute in parallel; here is an example for a ` for ` construct.

    #pragma omp parallel for
    for (i = 0; i < n; i++) { my_job(i); }

this will run the function ` my_job ` in parallel on different threads. note that though the memory is shared, each thread must have a private copy of some variables, like ` i ` in the above example. loop variables are made private by default, but other such variables have to be declared ` private `. some variables may need to be summed over all the sites. this is accomplished with a ` reduction ` clause. the syntax is as follows:

    j = 0;
    #pragma omp parallel for reduction(+:j)
    for (i = 0; i < n; i++) { j += my_function(i); }

this is equivalent to the serial sum j = my_function(0) + my_function(1) + ... + my_function(n-1). note that the sum is performed over all threads, though each thread works only on part of the total number of iterations. identifying ` private ` and ` reduction ` variables is necessary for getting correct results.

the milc code is a set of publicly available codes developed by the mimd lattice computation (milc) collaboration for doing qcd simulations. this code has been run on a variety of parallel computers, using mpi, for many physics projects. the files are organized in different directories: the ` libraries ` directory contains low level routines like matrix multiplication, and the ` generic ` directory contains oft-needed but somewhat higher level routines, including the updating and inversion routines. then there are various application directories. for this project, we only concentrated on the conjugate gradient inverter, file ` d_congrad5.c ` in the ` generic_ks ` directory of version 6 of the milc code. the code uses a macro ` forallsites ` defined as

    #define forallsites(i,s) \
        for(i=0, s=lattice; i<sites_on_node; i++, s++)

where ` lattice ` is an array of sites, ` site ` is a structure containing variables defined at each lattice point and ` sites_on_node ` is the number of lattice points on a given node. we needed to redefine the macro ` forallsites ` because the openmp compiler we used could not deal with two variables ( ` i ` and ` s ` ) in a parallel ` for ` statement. here is the macro redefinition.

    #define forallsites(i,s) \
        for(i=0; i<sites_on_node; i++){ s=&(lattice[i]);

we used another macro ` end_loop `, which is just defined to be a closing brace ` } ` to match the opening brace in the above macro. we used the kap/pro toolset for this project. it includes the following:

* ` guidec ` : openmp compiler for c.
* ` guideview ` : openmp parallel performance visualization tool. it gives details of program execution, in particular, time spent in serial and parallel execution, imbalance in different regions of the code, etc.
* ` assurec ` : compiler to be used with the debugger, which works by comparing single thread and multiple thread executions.
* ` assureview ` : openmp programming correctness tool for viewing details of errors or conflicts which occur if different threads try to read / write the same variables at the same time .to add openmp parallelism to the milc code , the following steps were required .first , we had to redefine the macro as explained above .then we added the parallel for pragmas , specifying ` private ` and ` reduction ` variables .for example , in the ` forallsites ` loop , ` s ` was made ` private ` .we changed ` cc ` to ` guidec ` in our makefiles , and we had to modify those compiler options that ` guidec ` did not recognize . adding ` backend ` before a compiler option forces ` guidec ` to use ` cc ` compiler options .then , we ran ` assurec ` and ` assureview ` to locate and remove conflicts .finally , we ran the executable on different number of threads and verified that the output agreed with the mpi output . even after one has a working openmp code , there are some issues to consider when comparing its performance with that of mpi .some performance problems are openmp issues , while others are not .if single thread openmp performance does not match that with a single node under mpi , that may indicate a culprit other than openmp .for example , thread safe compilation requires the ` -mt ` switch on sun .if using this switch on the original serial code decreases performance substantially , then the performance issue lies with the sun compiler and its runtime libraries , not with openmp . if the code uses many ` malloc / free ` pairs then thread - safe memory allocation is likely the culprit .again , this is not an openmp issue , but an issue with the quality of the vendor s thread - safe compiler / runtime implementation .we verified that the ` -mt ` option on the serial version on sun did not affect the performance significantly . except for the case where we combine openmp and mpi , no ` malloc / free ` statements are used in the region of the code where the performance is evaluated .thus , to the best of our knowledge , this is a fair comparison between openmp and mpi performance .we first ran the modified codes on a sun e10000 at indiana university .the details of architecture for this computer can be found online .benchmarks were done for various lattice sizes and numbers of threads .as the number of threads increased , the lattice dimensions were increased to keep the volume per thread constant at .the number of threads was increased from 1 to 16 by factors of 2 .for example , for a given , the lattice size for 2 threads is and for 16 threads it is . was increased from 4 to 14 in steps of 2 .reported in fig .1 is the performance on the kogut - susskind quark conjugate gradient routine in megaflop / s per cpu for both omp and mpi . for smaller number of threads / nodes , the omp rates are quite comparable to mpi .they lag behind for larger number of threads .there are two factors involved here : the overhead for setting up threads and the use of the cache . for small lattice sizes , since there is only a small number of computations to be performed , the former degrades the performance , but if significant portion of the problem can fit in the cache , the execution is speeded up . 
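such cache and thread-overhead effects, and per-cpu megaflop/s numbers of the kind quoted here, can be probed with a small timing harness along the following lines. this is an illustrative sketch built around the redefined ` forallsites ` / ` end_loop ` macros, not the actual milc benchmark driver; the site structure, the flop count and the choice of volume per thread are placeholders of our own.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    typedef struct { double v[6]; } site;   /* stand-in for the milc site structure */

    site *lattice;
    int sites_on_node;

    /* openmp-friendly site loop: single loop variable, pointer set in the body */
    #define forallsites(i, s) for (i = 0; i < sites_on_node; i++) { s = &(lattice[i]);
    #define end_loop }

    int main(int argc, char **argv) {
        int i, L = (argc > 1) ? atoi(argv[1]) : 6;
        double norm = 0.0, t0, t1;
        double flops_per_site = 12.0;        /* 6 elements x (1 multiply + 1 add) */
        site *s;

        sites_on_node = L * L * L * L;       /* illustrative volume per thread set */
        lattice = malloc(sites_on_node * sizeof(site));

        /* serial initialization of the shared lattice */
        forallsites(i, s) { int k; for (k = 0; k < 6; k++) s->v[k] = 1.0 / (i + k + 1); } end_loop

        t0 = omp_get_wtime();
    #pragma omp parallel for private(s) reduction(+:norm)
        forallsites(i, s) {
            int k;
            for (k = 0; k < 6; k++) norm += s->v[k] * s->v[k];
        } end_loop
        t1 = omp_get_wtime();

        printf("norm = %g, %.1f mflop/s total\n",
               norm, flops_per_site * sites_on_node / (t1 - t0) * 1e-6);
        free(lattice);
        return 0;
    }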
on the other hand, for larger lattices the thread initialization overhead is a much smaller fraction of the total computation time, but the problem size is too big to fit into the cache. we see that omp has a `` sweet spot '' at size 6, much as the mpi performance peaks at size 8. since we keep the load per thread constant, for the same lattice size the performance monotonically decreases in most cases as we increase the number of threads.

next we benchmarked the code on blue horizon. this ibm sp machine at the san diego supercomputer center has 8-way smp nodes but with the current switch can support only 4 mpi processes per node. figure 2 contains the preliminary results from blue horizon. these results are qualitatively similar to the e10000 results. a hybrid approach combining openmp parallelism within mpi processes may offer better performance than either individual approach. we tried different combinations of threads and mpi processes on blue horizon. the hybrid approach fared better at times. figure 3 shows the results for a total of eight processors. it can be seen again that the mpi performance peaks at size 8 (the left-most bars in fig. 3) and the omp at size 6 (the right-most bars). the combination of 2 threads and 4 nodes works best for smaller sizes. the processors on blue horizon were upgraded after these runs. we should repeat these calculations and extend the study to a larger number of cpus.

on both computers studied, openmp performance was very similar to mpi performance for a small number of threads, but it deteriorated much faster as the number of threads increased, for smaller lattice sizes. thus, openmp may be a viable option for someone writing a code to be used with a modest number of processors on smp machines. the milc collaboration, however, already has a working mpi code that scales well on many machines. for almost all the combinations of problem sizes and numbers of cpus studied in this paper, mpi is at least as good as openmp, if not better. the only case where we get a considerable improvement over mpi is when we combine openmp and mpi on blue horizon for sizes 4 and 6. not only does the hybrid approach give the best performance on a single smp node, it should also allow us to run multi-node jobs using all eight processors on each node rather than the limit of four with the current switch. we have added openmp parallelism to the milc code only for the conjugate gradient inverter for this test project. it will require considerably more effort to modify the whole code to run on multiple openmp threads.

it is our pleasure to thank bill magro and henry gabbs at kai for many useful discussions. we gratefully acknowledge the help provided by the staff at research and technical services at iu, especially david hart, mary papakhian and stephanie burks. this work was supported by the doe under grant de-fg02-91er40661. we thank the san diego supercomputer center and npaci for use of blue horizon.
a trend in high performance computers that is becoming increasingly popular is the use of symmetric multiprocessing (smp) rather than the older paradigm of massively parallel processing (mpp). mpi codes that ran and scaled well on mpp machines can often be run on an smp machine using the vendor's version of mpi. however, this approach may not make optimal use of the (expensive) smp hardware. more significantly, there are machines like blue horizon, an ibm sp with 8-way smp nodes at the san diego supercomputer center, that can support only 4 mpi processes per node (with the current switch). on such a machine it is imperative to be able to use openmp parallelism on the node, and mpi between nodes. we describe the challenges of converting the milc mpi code to use a second level of openmp parallelism, and present benchmarks on ibm and sun computers.